Send a cURL request to http://jsonplaceholder.typicode.com/posts/1. What does the result look like? How does it differ from the XML format known from the introductory computer science course?
$ curl 'http://jsonplaceholder.typicode.com/posts/1'
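For orientation, the service returns a single post as one JSON object (field names as returned by the service at the time of writing; the values are abbreviated here):

```json
{
  "userId": 1,
  "id": 1,
  "title": "sunt aut facere ...",
  "body": "quia et suscipit ..."
}
```

An XML representation of the same record would wrap every field in opening and closing tags (e.g. a title element) instead of using "key": value pairs, which makes the JSON form noticeably more compact.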
Take a look at the API of poznan.pl
http://www.poznan.pl/api/
Using the cURL tool, get information about the current day's events in XML format from the poznan.pl API.
$ curl 'http://www.poznan.pl/mim/public/ws-information/?co=getCurrentDayEvents'
Use the cURL tool to get a list of streets in JSON format from the poznan.pl API.
$ curl 'http://www.poznan.pl/featureserver/featureserver.cgi/ulice/all.json'
Limit the previous call to the first 100 streets
$ curl 'http://www.poznan.pl/featureserver/featureserver.cgi/ulice/all.json?maxFeatures=100'
If you want to see the returned JSON with 100 streets in Poznan in a more readable form, you can pipe it through the jq command:
$ curl 'http://www.poznan.pl/featureserver/featureserver.cgi/ulice/all.json?maxFeatures=100' | jq
The program jq is a powerful tool: you can use it to search and filter data in JSON format. For example, the list of street names can be built in the following way:
$ curl 'http://www.poznan.pl/featureserver/featureserver.cgi/ulice/all.json' | jq '.features[].properties.a3'
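If you want to experiment with such filters without hitting the network, you can pipe a small hand-made sample through jq. The a3 attribute name comes from the task above; the street names below are invented sample data:

```shell
# A minimal stand-in for the featureserver response (invented sample data).
cat <<'EOF' > sample.json
{"features": [
  {"properties": {"a3": "Polna"}},
  {"properties": {"a3": "Ogrodowa"}},
  {"properties": {"a3": "Polna"}}
]}
EOF

# Extract the street names, exactly as in the task above.
jq '.features[].properties.a3' sample.json

# Count the features, and build a de-duplicated, sorted list of names.
jq '.features | length' sample.json
jq '[.features[].properties.a3] | unique' sample.json
```

The same filters work unchanged on the real response; only the input differs.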
Browse the API on the page
https://openweathermap.org/api
What data and in what format can you get on this site?
Register on the site
https://openweathermap.org/api
Obtain your private API key.
Use the cURL tool to get from https://openweathermap.org/ the weather for a city whose name starts with the same letter as your surname.
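A minimal sketch of this step, assuming the current-weather endpoint documented on that page and an API key stored in the environment variable OWM_KEY; the city and the canned response below are made-up examples:

```shell
# Fetch current weather (endpoint and parameters per the OpenWeatherMap docs;
# set OWM_KEY to the key from your account):
#   curl "https://api.openweathermap.org/data/2.5/weather?q=Katowice&units=metric&appid=$OWM_KEY"

# An abbreviated response of that general shape, for experimenting offline:
cat <<'EOF' > weather.json
{"name": "Katowice", "weather": [{"description": "light rain"}], "main": {"temp": 7.3}}
EOF

# jq can then reduce the response to the fields you care about:
jq '{city: .name, temp: .main.temp, sky: .weather[0].description}' weather.json
```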
Basic tasks are checked automatically, so it is important to send your solutions to the correct address and to keep the indicated format.
The results of tasks are available in the text file at http://kino.vm.wmi.amu.edu.pl/dtin/######.txt, where ###### is a six-digit index number.
Send to http://kino.vm.wmi.amu.edu.pl:6080/dtin/z4.1/######, where ###### is a six-digit student index number, a PUT request whose message body contains a JSON document for which the following jq filter returns the sum of the prime factors of your index number (for example, for the number \(60 = 2 \cdot 2 \cdot 3 \cdot 5\) the result should be 12):
[.factors[].value] | add
Do not forget to set the correct Content-Type header.
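One possible shape of such a file, for the example number 60; the factors layout below is just one choice that satisfies the given filter, since any JSON for which the filter returns the right sum is acceptable:

```shell
# Example for index number 60 = 2*2*3*5 (use your own index's prime factors).
cat <<'EOF' > factors.json
{"factors": [{"value": 2}, {"value": 2}, {"value": 3}, {"value": 5}]}
EOF

# Check locally that the grader's filter yields the expected sum:
jq '[.factors[].value] | add' factors.json    # prints 12

# Then send it with the correct Content-Type header (###### stands for
# your index number, as in the task):
#   curl -X PUT -H 'Content-Type: application/json' \
#        --data @factors.json \
#        'http://kino.vm.wmi.amu.edu.pl:6080/dtin/z4.1/######'
```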
jq filters
Send to http://kino.vm.wmi.amu.edu.pl:6080/dtin/z4.2/###### or http://kino.vm.wmi.amu.edu.pl:6080/dtin/z4.3/######, where ###### is a six-digit student index number, a POST request containing a jq filter which, for the specified files, returns the appropriate response.
The input JSON file shows the history of certain events (block) and the history of the value of a certain parameter (hashrate). Its general structure is as follows:
{
"blockHistory": [1519739287, 1519739455, 1519739710 ...],
"hashrateHistory": [
{
"hr": 105355000000,
"time": 1519739200
},
{
"hr":104900000000,
"time":1519739600
}
...
]
}
Both the values in the blockHistory array and the time attribute are given as Unix time, i.e. as the number of seconds since the beginning of 1970.
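As a side note, jq's built-in todate function converts such Unix timestamps into readable UTC dates, which is handy when inspecting the files:

```shell
# Convert a Unix timestamp from the sample above to an ISO 8601 UTC date.
jq -n '1519739287 | todate'
```

The inverse conversion is available as fromdate.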
The task will be checked on the following file.
The expected result of the filter operation:
For the input file, we want to get a common history of hashrate and block values sorted chronologically. The output file is to have the following structure:
[
{
"time": 1519739200,
"type": "hr"
},
{
"time": 1519739287,
"type": "block"
},
{
"time": 1519739455,
"type": "block"
},
{
"time": 1519739600,
"type": "hr"
},
{
"time": 1519739710,
"type": "block"
},
...
]
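A sketch of the jq building blocks such a filter can use (constructing {time, type} objects from both arrays, then ordering with sort_by), shown on a tiny made-up input rather than the graded file:

```shell
# A made-up two-source history, much smaller than the real input file.
cat <<'EOF' > history.json
{"blockHistory": [30, 10], "hashrateHistory": [{"hr": 5, "time": 20}]}
EOF

# Turn each source into a stream of {time, type} objects, collect them
# into one array, and sort it chronologically:
jq '[ (.blockHistory[] | {time: ., type: "block"}),
      (.hashrateHistory[] | {time: .time, type: "hr"}) ]
    | sort_by(.time)' history.json
```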
jq - variables and functions
For the input file, we want to obtain information about the hashrate parameter value recorded immediately before each event, sorted chronologically. The output file is to have the following structure:
[
{
"lastHr": 105355000000,
"time": 1519739287
},
{
"lastHr": 105355000000,
"time": 1519739455
},
{
"lastHr": 104900000000,
"time": 1519739710
},
...
]
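This is where jq variables come in: binding the hashrate history to a name with as lets you refer to it while iterating over the events. A sketch on a tiny made-up input, assuming (as in the sample above) that hashrateHistory is already chronological:

```shell
# A made-up input with two events and two hashrate readings.
cat <<'EOF' > events.json
{"blockHistory": [25, 45],
 "hashrateHistory": [{"hr": 100, "time": 10}, {"hr": 200, "time": 30}]}
EOF

# Bind the hashrate history to $hrs, then for each event time $t pick the
# last reading that precedes it:
jq '.hashrateHistory as $hrs
    | [ .blockHistory[] as $t
        | { lastHr: ([ $hrs[] | select(.time < $t) ] | last | .hr),
            time: $t } ]' events.json
```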
Use jq's -f parameter to save the filter in a separate file. This will make it easier for you to prepare a solution.

Write a simple Internet crawler which, for a given topic page on Wikipedia, will display a list of all topics from its See also section. The search should be performed recursively.
For example, for the topic https://en.wikipedia.org/wiki/Online_chat, in addition to the topics linked directly in its See also section, such as https://en.wikipedia.org/wiki/Chat_room or https://en.wikipedia.org/wiki/Instant_messaging, the results should also include their See also topics (and so on), for example https://en.wikipedia.org/wiki/Social_media or https://en.wikipedia.org/wiki/Media_psychology.
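A rough starting point for the extraction step, demonstrated on a canned HTML snippet instead of a live page. Real Wikipedia markup is messier, and the id values here only mirror its usual heading anchors, so treat this pipeline as an approximation:

```shell
# A canned snippet imitating the "See also" region of a Wikipedia page.
cat <<'EOF' > page.html
<h2 id="See_also">See also</h2>
<ul>
<li><a href="/wiki/Chat_room">Chat room</a></li>
<li><a href="/wiki/Instant_messaging">Instant messaging</a></li>
</ul>
<h2 id="References">References</h2>
EOF

# Keep only the lines between the "See also" heading and the next section,
# then pull out the /wiki/ hrefs:
sed -n '/id="See_also"/,/id="References"/p' page.html |
  grep -o 'href="/wiki/[^"]*"' | sed 's|href="||; s|"$||'

# For the real task you would fetch each extracted page with
#   curl -s "https://en.wikipedia.org$link"
# and repeat the extraction recursively, remembering which links were
# already visited to avoid loops.
```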