Aquinus
Back over in the WCG 13th Birthday Challenge (11/16-11/22/2017)- Calling all crunchers!!! thread, @Norton had said:
Have a project for one of you web gurus....
SETI.Germany is offering code to set up a personal stats webpage that will read the database from WCG and allow a cruncher to view their stats in greater detail.
*Note that this is similar to what FreeDC does with their stats pages.
I have no clue how to do this but am hoping that a Team member has experience with PHP and MySQL and is willing to have a look.
See below for more details:
https://www.seti-germany.de/wcg/39_en_Personal WCG-Stats.html
I had poked at the APIs that IBM exposes, and while they're not exactly well documented or consistent, I was able to pull apart the member API. First of all, you can't get older historical data for work you've done in the past. IBM only exposes the last several days' worth of results that have been processed. There comes a point where results become stale and no longer show up in the API. This requires a service that's capable of constantly checking and storing both new results and the differences on existing ones, since the state of a result gets updated over time (the time it was received, whether it validated, its status, etc.)
Once I figured out what was going on, I whipped out my handy go-to dev tools and went to work. I've made a basic schema in PostgreSQL and a small service that is capable of fetching the remote data, parsing it, and storing it in PostgreSQL. For me, the next step would be to turn it into a stream-based service driven by the members stored in the database (which contains the username and verification code, both of which are required to fetch a member's stats). That will be enough for a service that can run long-term and start building a historical database of the information coming out of WCG (IBM). However, it will take time to gather enough data to be useful in any way. So, I would like to ask TPU's crunchers: what would you like to see from the data that gets gathered from all the crunching that you do?
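The fetch/parse side is just the usual HTTP-and-JSON plumbing, but the storage side is worth sketching: since the same result can come back on several polls with updated state, each row effectively gets upserted on its result ID. Roughly something along these lines (a minimal sketch against the "results" table shown below, using the first row from the sample output as placeholder values, not necessarily the exact statement my service runs):
Code:
-- Sketch of the per-result upsert: insert a freshly fetched result, or refresh
-- the mutable fields if this result-id was already stored on an earlier poll.
INSERT INTO wcg.results ("result-id", "member-id", "app-id", "claimed-credit",
    "cpu-time", "elapsed-time", "exit-status", "granted-credit", "device-id",
    "mod-time", "workunit-id", name, outcome, "received-time",
    "report-deadline", "sent-time", "server-state", "validate-state",
    "file-delete-state")
VALUES (1944779575, 2, 10, 0, 0, 0, 0, 0, 4147721, 1511446829, 375352955,
    'ZIKA_000291302_x4mvn_Saur_SplApr_Inhib_chA_A_0398_1', 0, NULL,
    '2017-12-03 09:20:29', '2017-11-23 09:20:29', 4, 0, 0)
ON CONFLICT ("result-id") DO UPDATE SET
    "claimed-credit"    = EXCLUDED."claimed-credit",
    "granted-credit"    = EXCLUDED."granted-credit",
    "cpu-time"          = EXCLUDED."cpu-time",
    "elapsed-time"      = EXCLUDED."elapsed-time",
    outcome             = EXCLUDED.outcome,
    "received-time"     = EXCLUDED."received-time",
    "server-state"      = EXCLUDED."server-state",
    "validate-state"    = EXCLUDED."validate-state",
    "file-delete-state" = EXCLUDED."file-delete-state",
    "mod-time"          = EXCLUDED."mod-time";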
Stats can obviously be broken down by date and time, but the "results" table in my database mimics the API in the sense that I capture everything that gets sent over the wire:
Code:
wcg=> \d wcg.results
Table "wcg.results"
Column | Type | Modifiers
-------------------+-----------------------------+-----------
result-id | bigint | not null
member-id | integer | not null
app-id | smallint | not null
claimed-credit | double precision | not null
cpu-time | double precision | not null
elapsed-time | double precision | not null
exit-status | smallint | not null
granted-credit | double precision | not null
device-id | integer | not null
mod-time | bigint | not null
workunit-id | bigint | not null
name | text | not null
outcome | smallint | not null
received-time | timestamp without time zone |
report-deadline | timestamp without time zone | not null
sent-time | timestamp without time zone | not null
server-state | smallint | not null
validate-state | smallint | not null
file-delete-state | smallint | not null
Indexes:
"results_pkey" PRIMARY KEY, btree ("result-id")
Foreign-key constraints:
"app-fk" FOREIGN KEY ("app-id") REFERENCES apps("app-id")
"device-fk" FOREIGN KEY ("device-id") REFERENCES devices("device-id")
"member-fk" FOREIGN KEY ("member-id") REFERENCES members("member-id")
wcg=> select * from wcg.results limit 20;
result-id | member-id | app-id | claimed-credit | cpu-time | elapsed-time | exit-status | granted-credit | device-id | mod-time | workunit-id | name | outcome | received-time | report-deadline | sent-time | server-state | validate-state | file-delete-state
------------+-----------+--------+------------------+-------------------+-------------------+-------------+------------------+-----------+------------+-------------+-----------------------------------------------------+---------+---------------------+---------------------+---------------------+--------------+----------------+-------------------
1944779575 | 2 | 10 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511446829 | 375352955 | ZIKA_000291302_x4mvn_Saur_SplApr_Inhib_chA_A_0398_1 | 0 | | 2017-12-03 09:20:29 | 2017-11-23 09:20:29 | 4 | 0 | 0
1944622928 | 2 | 5 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511442807 | 373622333 | OET1_0005179_x4GV3p_rig_28905_1 | 0 | | 2017-12-03 08:13:27 | 2017-11-23 08:13:27 | 4 | 0 | 0
1944623069 | 2 | 5 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511442807 | 373621728 | OET1_0005179_x4GV3p_rig_23807_1 | 0 | | 2017-12-03 08:13:27 | 2017-11-23 08:13:27 | 4 | 0 | 0
1940100016 | 2 | 9 | 113.678794959877 | 3.42911111111111 | 3.43268434694444 | 0 | 0 | 4147721 | 1511446829 | 376093930 | MCM1_0138293_1973_1 | 1 | 2017-11-23 09:20:29 | 2017-11-30 00:49:38 | 2017-11-23 00:49:38 | 5 | 0 | 0
1941690661 | 2 | 7 | 168.486486174654 | 6.002775 | 6.00404577805556 | 0 | 168.486486174654 | 4147721 | 1511450842 | 377230844 | FAH2_001911_avx17587-3_000003_000019_005_0 | 1 | 2017-11-23 10:27:16 | 2017-11-23 22:31:56 | 2017-11-22 22:31:56 | 5 | 1 | 0
1940659594 | 2 | 9 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511442128 | 376490337 | MCM1_0138300_5560_1 | 0 | | 2017-11-30 08:02:08 | 2017-11-23 08:02:08 | 4 | 0 | 0
1940474040 | 2 | 9 | 112.753212327416 | 3.39741944444444 | 3.39905427027778 | 0 | 0 | 4147721 | 1511458069 | 376358739 | MCM1_0138297_2767_0 | 1 | 2017-11-23 12:27:49 | 2017-11-30 04:42:42 | 2017-11-23 04:42:42 | 5 | 0 | 0
1944779602 | 2 | 10 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511446829 | 375353038 | ZIKA_000291303_x4mvn_Saur_SplApr_Inhib_chA_A_0196_1 | 0 | | 2017-12-03 09:20:29 | 2017-11-23 09:20:29 | 4 | 0 | 0
1940102407 | 2 | 9 | 112.552704046652 | 3.39271944444444 | 3.39701266 | 0 | 0 | 4147721 | 1511442807 | 376096095 | MCM1_0138293_1581_0 | 1 | 2017-11-23 08:13:27 | 2017-11-30 00:49:38 | 2017-11-23 00:49:38 | 5 | 0 | 0
1944779633 | 2 | 10 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511446829 | 375353024 | ZIKA_000291303_x4mvn_Saur_SplApr_Inhib_chA_A_0124_1 | 0 | | 2017-12-03 09:20:29 | 2017-11-23 09:20:29 | 4 | 0 | 0
1939997280 | 2 | 9 | 114.485224757391 | 3.45490833333333 | 3.45533908916667 | 0 | 114.388920818186 | 4147721 | 1511455369 | 376022608 | MCM1_0138291_7325_1 | 1 | 2017-11-23 08:13:27 | 2017-11-29 22:43:14 | 2017-11-22 22:43:14 | 5 | 1 | 0
1941887450 | 2 | 7 | 31.5086387184113 | 1.19480111111111 | 1.19673545694444 | 0 | 31.5086387184113 | 4147721 | 1511442813 | 377364209 | FAH2_001534_avx38743-1_000009_000085_007_0 | 1 | 2017-11-23 08:13:27 | 2017-11-24 01:13:54 | 2017-11-23 01:13:54 | 5 | 1 | 0
1940376831 | 2 | 9 | 112.894154366055 | 3.40276111111111 | 3.40330310388889 | 0 | 0 | 4147721 | 1511458069 | 376276524 | MCM1_0138296_1826_1 | 1 | 2017-11-23 12:27:49 | 2017-11-30 03:42:09 | 2017-11-23 03:42:09 | 5 | 0 | 0
1939862654 | 2 | 9 | 115.446118926124 | 3.478325 | 3.48295114305556 | 0 | 0 | 4147721 | 1511433982 | 375910450 | MCM1_0138290_2082_0 | 1 | 2017-11-23 05:46:22 | 2017-11-29 21:24:03 | 2017-11-22 21:24:03 | 5 | 0 | 0
1940379864 | 2 | 9 | 111.656539888297 | 3.36397222222222 | 3.3659940225 | 0 | 0 | 4147721 | 1511458069 | 376279210 | MCM1_0138296_0333_1 | 1 | 2017-11-23 12:27:49 | 2017-11-30 03:42:09 | 2017-11-23 03:42:09 | 5 | 0 | 0
1940695147 | 2 | 8 | 0 | 0 | 0 | 0 | 0 | 4147721 | 1511446829 | 376511213 | MIP1_00026328_0590_0 | 0 | | 2017-12-03 09:20:29 | 2017-11-23 09:20:29 | 4 | 0 | 0
1936221814 | 2 | 5 | 49.0047204465838 | 0.882578055555556 | 0.883728396944444 | 0 | 0 | 4147721 | 1511426529 | 373321970 | OET1_0005178_x4GV3p_rig_36946_0 | 1 | 2017-11-23 03:42:09 | 2017-12-02 22:31:56 | 2017-11-22 22:31:56 | 5 | 0 | 0
1939980888 | 2 | 9 | 114.041287003792 | 3.43873611111111 | 3.4415333825 | 0 | 0 | 4147721 | 1511442128 | 376009182 | MCM1_0138291_0595_0 | 1 | 2017-11-23 08:02:08 | 2017-11-29 22:31:56 | 2017-11-22 22:31:56 | 5 | 0 | 0
1940060297 | 2 | 9 | 113.045950913802 | 3.41115833333333 | 3.41189960444444 | 0 | 0 | 4147721 | 1511442807 | 376070391 | MCM1_0138292_6023_1 | 1 | 2017-11-23 08:13:27 | 2017-11-29 23:45:08 | 2017-11-22 23:45:08 | 5 | 0 | 0
1940137074 | 2 | 8 | 34.1818833143194 | 0.806429444444445 | 0.807077143888889 | 0 | 34.1818833143194 | 4147721 | 1511446838 | 376118938 | MIP1_00026200_0646_0 | 1 | 2017-11-23 09:20:29 | 2017-12-03 02:26:13 | 2017-11-23 02:26:13 | 5 | 1 | 0
(20 rows)
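As an example of one obvious cut: granted credit and CPU time per member per day. Something like the query below would do it (this assumes the members table lives in the same wcg schema and carries a username column, and it only counts results that have actually been received back):
Code:
-- Example rollup: granted credit and CPU time per member per day.
-- Assumes wcg.members has a username column alongside "member-id".
SELECT m.username,
       date_trunc('day', r."received-time") AS day,
       count(*)                             AS results,
       sum(r."granted-credit")              AS granted_credit,
       sum(r."cpu-time")                    AS cpu_time
  FROM wcg.results r
  JOIN wcg.members m ON m."member-id" = r."member-id"
 WHERE r."received-time" IS NOT NULL
 GROUP BY m.username, date_trunc('day', r."received-time")
 ORDER BY day DESC, granted_credit DESC;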
How would you like to see this data broken down and represented? What would you like to get out of this information? Once I get the polling of data set up, I can get this running on the 3820 crunching in the attic and expose whatever I'm doing to all of you who are interested. Also, if you would like to donate your crunching statistics to the cause of science, I could use your username and verification code to make the API calls to watch your stat history as well.
Questions, comments, suggestions?