Obtaining deltaT for use in software

I'm currently developing a JavaScript application in which I want to calculate the approximate position of the Sun. This works fine, but it requires a value for deltaT (TT-UT) that depends on the year for which I want to calculate the solar position.

Currently, I'm using a default value of 67 seconds for my calculation. However, since I want to calculate the solar position for several years, I'm looking for a convenient way to obtain the deltaT value for each year.

To all of you who have some experience with programming: is there any interface (API) that provides the desired values? It would also be sufficient to obtain Universal Time and Terrestrial Time, so that I can calculate deltaT on my own.

I'm not aware of any APIs that provide ΔT directly, but you may be able to parse a published table for the value of ΔT.

Of course, if you want to calculate the approximate position of the Sun, a few seconds more or less should not matter too much ;-)

See "Where can I find/visualize planets/stars/moons/etc. positions?" for an extremely general answer on how to compute positions, but the files you're looking for specifically are the "leap second kernels" at

There was a mild kerfuffle on the SPICE discussion lists when NASA failed to update the kernels after the end-of-2016 leap second was announced, but they have since been updated.

A historical table of TAI-UTC from 1961 to the present is maintained here:

Delta T can be calculated by adding 32.184 s (the difference between TT and TAI) to the TAI-UTC value in the table.

So Delta T is currently about 68 seconds and will likely reach 69 in a few years; it has been increasing by about one second every three years or so.

However, UTC is adjusted with leap seconds so that we can use precision clocks that aren't continuously tweaked; UT1 is the precise measure of Earth-rotation time. You can correct to UT1 using the DUT1 value (UT1-UTC), which is distributed with NIST time signals such as WWV. Its value is published here:
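That arithmetic is small enough to sketch directly; the function name and the sample call below are mine, not from any official API:

```javascript
// deltaT = TT - UT1. Given the cumulative leap-second count (TAI - UTC)
// and optionally DUT1 (UT1 - UTC, as broadcast with NIST time signals):
//   TT - UT1 = (TT - TAI) + (TAI - UTC) - (UT1 - UTC)
//            = 32.184 + (TAI - UTC) - DUT1
function deltaTFromLeapSeconds(taiMinusUtc, dut1 = 0) {
  return 32.184 + taiMinusUtc - dut1;
}

// With the 37 leap seconds in effect since January 2017:
console.log(deltaTFromLeapSeconds(37)); // 69.184
```

Omitting DUT1 gives ΔT relative to UTC, which is already within ±0.9 s of the UT1-based value.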

The actual equations used by NASA are located here:

I failed to find any pre-written code and consequently wrote my own in Swift. The equations are fairly straightforward, and a list of the possible errors these equations may produce is linked from that page as well.

Here are the polynomials:

Using the ΔT values derived from the historical record and from direct observations (see Table 1 and Table 2), a series of polynomial expressions has been created to simplify the evaluation of ΔT for any time during the interval -1999 to +3000.

We define the decimal year "y" as follows:

y = year + (month - 0.5)/12

This gives "y" for the middle of the month, which is accurate enough given the precision in the known values of ΔT. The following polynomial expressions can be used to calculate the value of ΔT (in seconds) over the time period covered by the Five Millennium Canon of Solar Eclipses: -1999 to +3000.

Before the year -500, calculate:

ΔT = -20 + 32 * u^2 where: u = (y-1820)/100

Between years -500 and +500, we use the data from Table 1, except that for the year -500 we changed the value 17190 to 17203.7 in order to avoid a discontinuity with the previous formula at that epoch. The value for ΔT is given by a polynomial of the 6th degree, which reproduces the values in Table 1 with an error not larger than 4 seconds:

ΔT = 10583.6 - 1014.41 * u + 33.78311 * u^2 - 5.952053 * u^3 - 0.1798452 * u^4 + 0.022174192 * u^5 + 0.0090316521 * u^6 where: u = y/100

Between years +500 and +1600, we again use the data from Table 1 to derive a polynomial of the 6th degree.

ΔT = 1574.2 - 556.01 * u + 71.23472 * u^2 + 0.319781 * u^3 - 0.8503463 * u^4 - 0.005050998 * u^5 + 0.0083572073 * u^6 where: u = (y-1000)/100

Between years +1600 and +1700, calculate:

ΔT = 120 - 0.9808 * t - 0.01532 * t^2 + t^3 / 7129 where: t = y - 1600

Between years +1700 and +1800, calculate:

ΔT = 8.83 + 0.1603 * t - 0.0059285 * t^2 + 0.00013336 * t^3 - t^4 / 1174000 where: t = y - 1700

Between years +1800 and +1860, calculate:

ΔT = 13.72 - 0.332447 * t + 0.0068612 * t^2 + 0.0041116 * t^3 - 0.00037436 * t^4 + 0.0000121272 * t^5 - 0.0000001699 * t^6 + 0.000000000875 * t^7 where: t = y - 1800

Between years 1860 and 1900, calculate:

ΔT = 7.62 + 0.5737 * t - 0.251754 * t^2 + 0.01680668 * t^3 - 0.0004473624 * t^4 + t^5 / 233174 where: t = y - 1860

Between years 1900 and 1920, calculate:

ΔT = -2.79 + 1.494119 * t - 0.0598939 * t^2 + 0.0061966 * t^3 - 0.000197 * t^4 where: t = y - 1900

Between years 1920 and 1941, calculate:

ΔT = 21.20 + 0.84493 * t - 0.076100 * t^2 + 0.0020936 * t^3 where: t = y - 1920

Between years 1941 and 1961, calculate:

ΔT = 29.07 + 0.407 * t - t^2 / 233 + t^3 / 2547 where: t = y - 1950

Between years 1961 and 1986, calculate:

ΔT = 45.45 + 1.067 * t - t^2 / 260 - t^3 / 718 where: t = y - 1975

Between years 1986 and 2005, calculate:

ΔT = 63.86 + 0.3345 * t - 0.060374 * t^2 + 0.0017275 * t^3 + 0.000651814 * t^4 + 0.00002373599 * t^5 where: t = y - 2000

Between years 2005 and 2050, calculate:

ΔT = 62.92 + 0.32217 * t + 0.005589 * t^2 where: t = y - 2000

This expression is derived from estimated values of ΔT in the years 2010 and 2050. The value for 2010 (66.9 seconds) is based on a linear extrapolation from 2005 using 0.39 seconds/year (the average from 1995 to 2005). The value for 2050 (93 seconds) is linearly extrapolated from 2010 using 0.66 seconds/year (the average rate from 1901 to 2000).

Between years 2050 and 2150, calculate:

ΔT = -20 + 32 * ((y-1820)/100)^2 - 0.5628 * (2150 - y)

The last term is introduced to eliminate the discontinuity at 2050.

After 2150, calculate:

ΔT = -20 + 32 * u^2 where: u = (y-1820)/100

All values of ΔT based on Morrison and Stephenson [2004] assume a value for the Moon's secular acceleration of -26 arcsec/cy^2. However, the ELP-2000/82 lunar ephemeris employed in the Canon uses a slightly different value of -25.858 arcsec/cy^2. Thus, a small correction "c" must be added to the values derived from the polynomial expressions for ΔT before they can be used in the Canon:

c = -0.000012932 * (y - 1955)^2

Since the values of ΔT for the interval 1955 to 2005 were derived independently of any lunar ephemeris, no correction is needed for this period.
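For reference, here is how the piecewise expressions above translate into code. This is an unofficial JavaScript sketch of the published polynomials (the function names are mine); it returns ΔT in seconds for a decimal year and omits the small lunar-acceleration correction "c" discussed above:

```javascript
// Decimal year for the middle of a month (month is 1-12), per y = year + (month - 0.5)/12.
function decimalYear(year, month) {
  return year + (month - 0.5) / 12;
}

// Piecewise polynomial evaluation of deltaT (seconds) for -1999 to +3000.
function deltaT(y) {
  let t, u;
  if (y < -500) {
    u = (y - 1820) / 100;
    return -20 + 32 * u * u;
  } else if (y < 500) {
    u = y / 100;
    return 10583.6 - 1014.41 * u + 33.78311 * u ** 2 - 5.952053 * u ** 3
      - 0.1798452 * u ** 4 + 0.022174192 * u ** 5 + 0.0090316521 * u ** 6;
  } else if (y < 1600) {
    u = (y - 1000) / 100;
    return 1574.2 - 556.01 * u + 71.23472 * u ** 2 + 0.319781 * u ** 3
      - 0.8503463 * u ** 4 - 0.005050998 * u ** 5 + 0.0083572073 * u ** 6;
  } else if (y < 1700) {
    t = y - 1600;
    return 120 - 0.9808 * t - 0.01532 * t ** 2 + t ** 3 / 7129;
  } else if (y < 1800) {
    t = y - 1700;
    return 8.83 + 0.1603 * t - 0.0059285 * t ** 2 + 0.00013336 * t ** 3 - t ** 4 / 1174000;
  } else if (y < 1860) {
    t = y - 1800;
    return 13.72 - 0.332447 * t + 0.0068612 * t ** 2 + 0.0041116 * t ** 3
      - 0.00037436 * t ** 4 + 0.0000121272 * t ** 5 - 0.0000001699 * t ** 6
      + 0.000000000875 * t ** 7;
  } else if (y < 1900) {
    t = y - 1860;
    return 7.62 + 0.5737 * t - 0.251754 * t ** 2 + 0.01680668 * t ** 3
      - 0.0004473624 * t ** 4 + t ** 5 / 233174;
  } else if (y < 1920) {
    t = y - 1900;
    return -2.79 + 1.494119 * t - 0.0598939 * t ** 2 + 0.0061966 * t ** 3 - 0.000197 * t ** 4;
  } else if (y < 1941) {
    t = y - 1920;
    return 21.20 + 0.84493 * t - 0.076100 * t ** 2 + 0.0020936 * t ** 3;
  } else if (y < 1961) {
    t = y - 1950;
    return 29.07 + 0.407 * t - t ** 2 / 233 + t ** 3 / 2547;
  } else if (y < 1986) {
    t = y - 1975;
    return 45.45 + 1.067 * t - t ** 2 / 260 - t ** 3 / 718;
  } else if (y < 2005) {
    t = y - 2000;
    return 63.86 + 0.3345 * t - 0.060374 * t ** 2 + 0.0017275 * t ** 3
      + 0.000651814 * t ** 4 + 0.00002373599 * t ** 5;
  } else if (y < 2050) {
    t = y - 2000;
    return 62.92 + 0.32217 * t + 0.005589 * t ** 2;
  } else if (y < 2150) {
    u = (y - 1820) / 100;
    return -20 + 32 * u * u - 0.5628 * (2150 - y);
  } else {
    u = (y - 1820) / 100;
    return -20 + 32 * u * u;
  }
}
```

For example, `deltaT(2000)` evaluates the 1986-2005 polynomial at t = 0 and returns exactly 63.86 seconds.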

The following software is available for use by department faculty, staff, and students on institution-owned computers designated for your exclusive use.

  • Microsoft Windows 10 Professional (32 and 64 bit)
  • Microsoft Windows 10 Enterprise (32 and 64 bit)
  • Microsoft Windows 10 Education (32 and 64 bit)
  • Windows 8.1 Professional (32 and 64 bit)
  • Windows 8 Enterprise (32 and 64 bit)
  • Windows 8 Professional (32 and 64 bit)
  • Windows 7 Enterprise (32 and 64 bit)
  • Windows Vista Enterprise (32 and 64 bit)
  • Office 2016 Pro Plus (32 and 64 bit)
  • Office 2016 for OS X
  • Office 2013 Pro Plus (32 and 64 bit)
  • Office 2011 for OS X
  • Office 2010 [Warning: No more UCLA MCCA (KMS/MAK) activation -- Office 2010 reached its end of support on October 13, 2020, but other standalone installations should still work.]
  • Office 2008 for OS X
  • Office 2007
  • Visual Studio Pro 2015
  • Visual Studio Pro 2008


  • Fast access to large datasets (millions of rows/hundreds of columns)
  • View/edit table data in a scrollable browser
  • View/edit table and column metadata
  • Re-order and hide/reveal columns
  • Insert 'synthetic' columns defined by algebraic expression
  • Sort rows on the values in a given column
  • Define row subsets in various ways
  • View interactive and configurable plots of column-based quantities against each other, distinguishing different data sets:
    • Plot types are histogram, plane, sky, cube, sphere and time
    • Features include variable transparency, error bars, point labelling, colour axes, all-sky plots, configurable density shading, vectors, ellipses, areas, polygons, lines, contours, density maps, KDEs, analytic functions, and plain text/LaTeX axis annotation
    • Plots can be exported in bitmapped or vector formats, and a command to script the same plot is displayed
  • Read tables in these formats:
    • FITS BINTABLE (binary table) or TABLE (ASCII table) extensions
    • VOTables in any of the format variants (TABLEDATA, FITS, BINARY, BINARY2) or versions
    • ASCII tables in a number of variations
    • CDF files
    • Feather files
    • Comma-Separated Values (CSV)
    • Enhanced Character-Separated Values (ECSV)
    • Results of SQL queries on relational databases
    • IPAC format
    • AAS Machine-Readable Tables
    • Apache Parquet
    • GBIN files
  • Write tables in these formats:
    • FITS BINTABLE (binary table)
    • VOTables in any of the format variants (TABLEDATA, FITS, BINARY, BINARY2) or versions
    • Plain ASCII text
    • Comma-Separated Values (CSV)
    • Enhanced Character-Separated Values (ECSV)
    • New table exported to an SQL-compatible relational database
    • IPAC format
    • Apache Parquet
    • HTML TABLE element
    • LaTeX tabular environment


    AA+ is a C++ implementation of the algorithms presented in the book "Astronomical Algorithms" by Jean Meeus. Source code is provided with the book, but it comes with (IMHO) a restrictive license, and it has not been updated for the 2nd edition of the book, which includes new and interesting chapters on areas such as the moons of Saturn and the Moslem and Jewish calendars. To make the most of my code, you will really need a copy of the book, which can be purchased from Amazon or directly from the publisher, Willmann-Bell.

    Example areas covered include the positions of the planets, comets, minor planets and the Moon; calculation of the times of rising, setting and transit; calculation of the times of equinoxes and solstices; plus calculation of the positions of the moons of Jupiter and Saturn, as well as many other algorithms presented in the book. This is one of the biggest frameworks I have ever developed and includes over 415,000 lines of code!


    Thank you for your interest in the article and library.

    I don't quite understand what you mean by "real world application for close loop control system": actually, ANY functional real-world control system is a closed-loop one. Probably the first well-known such system is the steam engine with the fly-ball (centrifugal) governor, invented by James Watt and Matthew Boulton back in 1788.

    I would like to know how to apply your solution to a simple PID control system.
    For example: how would one regulate an output pressure in a plant using your solution, with one input (setpoint) and one output (PV)?

    Most standard PID control loops are similar to the one shown below. Your solution is very different from standard PID.
    Sorry, I am not really good at high-level math.

    Here is a simple software loop that implements a PID algorithm:

    previous_error := 0
    integral := 0
    loop:
        error := setpoint − measured_value
        integral := integral + error × dt
        derivative := (error − previous_error) / dt
        output := Kp × error + Ki × integral + Kd × derivative
        previous_error := error
        wait(dt)
        goto loop
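    For reference, the loop above can be made runnable in JavaScript. This is my own sketch; the gains and the toy first-order plant model are illustrative assumptions, not values from the article:

```javascript
// Run a PID controller against a toy first-order plant (e.g. tank pressure)
// and return the final process variable (PV). Kp, Ki, Kd are the usual
// proportional, integral and derivative gains; dt is the loop period.
function simulatePid(setpoint, Kp, Ki, Kd, dt, steps) {
  let measured = 0;        // process variable (PV)
  let integral = 0;
  let previousError = 0;
  for (let i = 0; i < steps; i++) {
    const error = setpoint - measured;
    integral += error * dt;
    const derivative = (error - previousError) / dt;
    const output = Kp * error + Ki * integral + Kd * derivative;
    previousError = error;
    // Toy plant: the PV relaxes toward the controller output.
    measured += (output - measured) * dt;
  }
  return measured;
}

// With these illustrative gains the PV settles near the setpoint of 10.
const pv = simulatePid(10, 2.0, 0.5, 0.1, 0.05, 1000);
console.log(pv);
```

    The integral term is what removes the steady-state error here: pure proportional control of this plant would settle below the setpoint.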

    Thank you for your interest in the article. This approach is much more complex than a simple PID, so you should have a reason to use it. If a simple PID satisfies your requirements, there is no need for such complications.
    The main idea is that we control the system with a set of feedbacks on all coordinates (actual or simulated) and inputs. A simple PID can easily be restructured into feedbacks, and feedbacks into a PID (possibly of higher order). In the case of a linear system, a PID with constant parameters can be generated from the feedback coefficients; consider the first numeric example in the article. The system moves a mass 10 m in 9 s without overshoot, and the regulator with a single input and two negative feedbacks (on displacement and velocity) can be converted to one with a single displacement feedback and a PID.
    IMHO the approach described in the article (a variant of iLQR, the iterative Linear-Quadratic Regulator) is especially useful for complex non-linear systems.

    Recursion is a powerful tool: it lets you write fairly concise code with a great deal of capability. The main problem with recursion is that it creates a great deal of extra overhead for the processor compared to a more linear solution. So, in the context of needing optimal code to keep a control system responsive in a dynamically changing environment, I would advise avoiding recursive algorithms. You could prove me wrong with simple recursive algorithms running on powerful processors, but getting into the habit of using such a paradigm in this type of situation is bound to lead to trouble down the road as you progress to more and more complex recursive algorithms.

    What is the square optimal control problem? The term is not used anywhere else in the article.

    Thank you for reading.
    The article discusses just one possible way of generating a control policy; PID is likewise just one control method among many others.
    Actually, PID algorithms may be implemented with a set of feedbacks, as shown in the Closed-Loop Control System picture in the article: this is intuitively clear and can easily be proved for the linear case.
    Best regards.

    Suppose we have a tank that we fill with water through a valve, and a sensor of water volume (or level), say a float connected to the valve actuator. The valve can be in one of two positions: either open or closed. As soon as the float indicates that the water level has reached 100 percent, the valve is closed. This is a trivial control system with no real dynamics, and of course the sophisticated control approach discussed in the article is not needed for such a simple and obvious case. As the Russian proverb has it, using such complications here is like shooting at a sparrow with an artillery gun.

    Now consider a more complex problem. We need to move an object 10 m in a relatively short time, but during this movement its velocity must not exceed a certain value (this is the first numeric example in the article).

    Great article (you've got 5 stars from me).
    Could you please update the mathematical expressions in the numerical example section?
    The IE browser always shows only 'Math expression error'.


    Tips and tricks for Solar imaging and processing with ASI290MM and ASI174 cameras

    Let's say that we have a Solar H-alpha telescope and would like to buy an appropriate camera to make our first image of the Sun in H-alpha light. What should we know?

    Choosing the right ASI camera for Solar H-alpha imaging:

    The best camera for Solar H-alpha imaging should have three main features:

    • a high frame rate, capable of capturing many frames in a short video sequence,
    • a monochrome sensor, and
    • a sensitive CMOS chip with high dynamic range.

    Based on these features I would recommend two of our models: the ASI290MM or the ASI174MM.
    Tips and tricks for Solar imaging and processing

    Solar imaging and processing can be done in any astro-imaging software: FireCapture, GenikaAstro, etc. The most important thing is the right camera settings. Set the camera gain to 0 or the lowest possible value, use an exposure time of a few milliseconds (e.g. 10, 15, or 20 ms), and turn the gamma off. In general, keep the histogram around 80-90%; if it goes over 100%, some parts of the image are over-exposed and will appear in the final image as bright white patches containing no surface detail, and that structure will be lost. Record the Sun when the seeing is at its best: wait patiently and be ready to hit the record button during those moments. You can also use a Solar Scintillation Monitor (SSM) for real-time seeing monitoring to catch the best seeing.

    If the image suffers from Newton's rings, use a simple mechanical tilt adapter to remove them easily and quickly.

    Obtaining a precise focus is difficult but very important. No matter how much post-processing is done, an out-of-focus image will never be as good as an image with good focus. I recommend using the very innovative focusing tool in GenikaAstro, which makes this step easy and precise.

    During the imaging session it is recommended to take images for a flat frame. A proper flat frame can remove dust shadows on the camera chip and the vignetting effect. The easiest way to make the flat is to cover the front of the telescope with a very thin polythene bag; the second option is to simply defocus the image. Take around 200-500 images and create the master flat in the AutoStakkert!3 software. Keep in mind that every camera rotation or change of ROI requires a new flat image.

    Once a series of *.avi or *.ser video files has been recorded, they need to be processed in AutoStakkert!3 to produce a single final image composed of the best frames of the video. Stacking 100-150 frames per image gives the best rendition of the chromosphere's dynamics. A smaller alignment point (AP) size is better for finer details. Do not forget to apply the proper master flat frame during the stacking process!

    Sharpening the stacked image can be achieved with many different techniques and software packages. The simplest and most effective method is the free program ImPPG, which uses Lucy-Richardson deconvolution. Its Sigma slider can be finely tuned to estimate the point spread function of the particular image, and the results can be seen quickly.

    After the stacking process, select the best image file for post-processing and add artificial color to the final image. In this step you can use Adobe Photoshop or any similar image-processing tool.

    A 150-mm Solar telescope equipped with an ASI290MM during an imaging session near the sea. Your solar telescope should be mounted at a location that minimizes turbulence in the first few meters of air; locations surrounded by water show minimal turbulence in the lower layers of the atmosphere. The best seeing is usually in the morning, from 10 AM to noon.

    Our Sun is a fascinating target in H-alpha light even when solar activity is low. If you observe the Sun frequently, you will notice how active and dynamic it is from time to time. During imaging, be creative and experiment with settings and processing techniques; there is no single rule! We will be happy to hear about your results and findings.

    The 3rd AAS Chandra/CIAO Workshop (2 day workshop)

    Thursday, 7 January | 11:00 – 18:00 (ET)
    Friday, 8 January | 11:00 – 18:00 (ET)

    Chandra/CIAO workshops are aimed at helping users, especially graduate students, post-doctoral fellows, and early-career researchers, to work with Chandra data and the Chandra Interactive Analysis of Observations (CIAO) software. Several workshops have previously been organized at the Chandra X-Ray Center (see for more details), and this is the third time a CIAO workshop has been organized in connection with the AAS. The workshop will feature talks on introductory and advanced X-ray data analysis, statistics, and topics in Chandra calibration. It will also include hands-on sessions where students can practice X-ray data analysis following a workbook of CIAO exercises, or perform their own analysis with members of the CIAO team ready to assist. Participants are required to bring their own laptop with CIAO installed (we will help with the installation if needed).

    0.9.1 (2021-06-09)

    • Add support to configure multiple InfluxDB Sink connectors.
    • Add user guide documentation on how to reset the InfluxDB Sink connector consumer group offsets.
    • Update cp-kafka-connect image with new version of the InfluxDB Sink Connector. See #737 for details.

    0.9.0 (2021-05-03)

    • Add create mirrormaker2 command
    • Add create jdbc-sink command
    • Update dependencies

    0.8.3 (2021-03-04)

    • Add upload command
    • Initial support for MirrorMaker 2 and Confluent JDBC Sink connectors
    • Update dependencies

    0.8.2 (2021-01-25)

    • Update cp-kafka-connect image with new version of the InfluxDB Sink connector. This version bumps the influxdb-java dependency from version 2.9 to 2.21. In particular 2.16 introduced a fix to skip fields with NaN and Infinity values when writing to InfluxDB.
    • Reorganize developer and user guides.
    • Add documentation in the user guide on how to run the InfluxDB Sink connector locally.
    • Update dependencies

    0.8.1 (2020-10-18)

    • Fix bug preventing the InfluxDB password from being read from the environment
    • Update cp-kafka-connect image with Confluent Platform 5.5.2
    • Update dependencies

    0.8.0 (2020-08-05)

    • Use data classes for the application and connector configuration.
    • Plugin-like organization: to support a new connector, add a CLI command and a config file.
    • Add support to the Amazon S3 Sink connector

    0.7.2 (2020-03-31)

    • Add support to the InfluxDB Sink Connector.
    • Add --timestamp option to select the timestamp field to use in the InfluxDB Sink connector.
    • Fix Header Converter Class configuration setting.
    • Fix tasks.max configuration setting name.
    • Add connector name configuration setting to support multiple connectors of the same class.
    • Handle empty list of topics properly.

    Copyright 2020 Association of Universities for Research in Astronomy, Inc. (AURA)

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.


    Hyperspectral Imaging Applications

    Hyperspectral imaging is the process of photographically acquiring data from across the electromagnetic spectrum, with the aim of obtaining spectral information in each pixel of a captured image. This detailed technique uses multiple imaging methodologies to photograph a scene across the spectral range in many contiguous spectral bands.

    Hyperspectral imaging spectrometers cover wavelengths beyond the visible spectrum, with a broad spectral range from 0.2 μm up to 2.5 μm and outstanding spectral resolution. This allows hyperspectral instruments to capture multi-dimensional imagery to finely tunable degrees, from an astronomical scale to a microscopic one.

    Delta Optical Thin Film previously explored the process of hyperspectral imaging and using continuously variable bandpass filters to optimize data acquisition, but this article will focus on the applications of hyperspectral imaging in more detail:

    Hyperspectral imaging is used in astronomy to obtain spatially resolved spectra of objects or clusters at great distances from the Earth. By providing per-pixel spectral data across numerous adjacent wavelength bands, it has enabled astronomers to map the distributions of far-off galaxies and to remotely analyze a planet's surface or atmospheric composition.

    Improved optical filters have enabled cheaper and more efficient hyperspectral imaging techniques for astronomy applications, improving telescopic equipment and informing observations and conclusions about structures within and beyond our galaxy.

    Food & Drink

    With extremely narrow adjacent spectral bands, hyperspectral imaging equipment can accurately detect the presence of chemicals or foreign material in products that are meant for human consumption. This process can be integrated with mechanical hardware to distinguish contaminated goods and remove them from the production line, improving inspection procedures in food and drink factory environments.

    Hyperspectral sorting devices require ultra-precise instrumentation capable of detecting trace materials with high accuracy, necessitating accurate optical filters to block out unwanted wavelengths.

    Precision Agriculture

    While hyperspectral imaging spectrometers for astronomy require expensive and robust telescopic equipment, the instrumentation is more versatile when hyperspectral imaging is applied to our own planet. New imaging equipment is being used for crop monitoring by analyzing the light reflected from crops at various stages of growth. Researchers use satellite imagery or drones equipped with hyperspectral cameras to assess a crop's physiological condition and react to perceived nutritional changes or diseases.

    This process is known as precision agriculture. It is designed to optimize the agronomic industry with minimal invasiveness to ensure a consistent and organic food supply, and to acquire data capable of informing future agricultural best practices.


    Military advancements are characterized by measures and countermeasures – for example, military personnel have learned to obscure their heat signatures from sophisticated infrared imaging systems. However, hyperspectral imaging provides such a broad range of spectra that it is difficult to counteract by conventional camouflaging methods. This improves the accuracy of target acquisition, with potential uses in determining an individual’s emotional or physiological state by analyzing their unique signatures.

    Hyperspectral Imaging from Delta Optical Thin Film

    Delta Optical Thin Film provides a range of products suited to established and emerging hyperspectral imaging techniques, including custom continuously variable bandpass filters to suit unique disciplines. Our standardized Bifrost filters are available with central wavelength ranges of 450-880 nm or 800-1088 nm.

    If you would like any more information about performing hyperspectral imaging with Delta Optical Thin Film products, please do not hesitate to contact us.



    "1. Short focal-length optics
    2. Fast focal-ratios
    3. Avoid tiny pixels
    4. Use a color camera instead of mono with filters"
    I definitely agree with 1 & 2.
    Whether pixels are "tiny" is debatable, depending on what one considers tiny, and 1 & 2 can offset "tiny" to a degree.
    Color is nice, but some of the best astro-images I've captured over 30+ years of shooting are monochrome. I think it comes down to personal choice: I like mono; a lot of people really like color.