
Commit 701b56b

Merge pull request #6010 from IQSS/develop
v4.15.1
2 parents: 9a0b627 + 17ad1ee

42 files changed: 1505 additions & 133 deletions

conf/docker-aio/run-test-suite.sh

Lines changed: 1 addition & 1 deletion
@@ -8,4 +8,4 @@ fi
 
 # Please note the "dataverse.test.baseurl" is set to run for "all-in-one" Docker environment.
 # TODO: Rather than hard-coding the list of "IT" classes here, add a profile to pom.xml.
-mvn test -Dtest=DataversesIT,DatasetsIT,SwordIT,AdminIT,BuiltinUsersIT,UsersIT,UtilIT,ConfirmEmailIT,FileMetadataIT,FilesIT,SearchIT,InReviewWorkflowIT,HarvestingServerIT,MoveIT,MakeDataCountApiIT,FileTypeDetectionIT -Ddataverse.test.baseurl=$dvurl
+mvn test -Dtest=DataversesIT,DatasetsIT,SwordIT,AdminIT,BuiltinUsersIT,UsersIT,UtilIT,ConfirmEmailIT,FileMetadataIT,FilesIT,SearchIT,InReviewWorkflowIT,HarvestingServerIT,MoveIT,MakeDataCountApiIT,FileTypeDetectionIT,EditDDIIT -Ddataverse.test.baseurl=$dvurl
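
For reference, the newly added EditDDIIT suite can also be run on its own via the standard surefire test selection; a minimal sketch, assuming a reachable test deployment (the baseurl below is a placeholder):

    # run only the new EditDDI integration tests against a test server
    mvn test -Dtest=EditDDIIT -Ddataverse.test.baseurl=http://localhost:8084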

doc/sphinx-guides/source/_static/installation/files/etc/systemd/solr.service

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ WorkingDirectory = /usr/local/solr/solr-7.3.1
 ExecStart = /usr/local/solr/solr-7.3.1/bin/solr start -m 1g
 ExecStop = /usr/local/solr/solr-7.3.1/bin/solr stop
 LimitNOFILE=65000
+LimitNPROC=65000
 Restart=on-failure
 
 [Install]
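
After installing the updated unit file and reloading systemd, the new process limit can be confirmed; a quick check, assuming a systemd host:

    # show the resource limits systemd applies to the Solr unit
    systemctl show solr.service -p LimitNOFILE -p LimitNPROC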

doc/sphinx-guides/source/api/native-api.rst

Lines changed: 15 additions & 0 deletions
@@ -812,6 +812,19 @@ Example::
 
 Also note that dataFileTags are not versioned and changes to these will update the published version of the file.
 
+.. _EditingVariableMetadata:
+
+Editing Variable Level Metadata
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Updates variable level metadata using the DDI XML ``$file``, where ``$id`` is the file id::
+
+  PUT https://$SERVER/api/edit/$id --upload-file $file
+
+Example: ``curl -H "X-Dataverse-key:$API_TOKEN" -X PUT http://localhost:8080/api/edit/95 --upload-file dct.xml``
+
+You can download :download:`dct.xml <../../../../src/test/resources/xml/dct.xml>` from the example above to see what the XML looks like.
+
 Provenance
 ~~~~~~~~~~
 Get Provenance JSON for an uploaded file::

@@ -1472,3 +1485,5 @@ Recursively applies the role assignments of the specified dataverse, for the rol
 GET http://$SERVER/api/admin/dataverse/{dataverse alias}/addRoleAssignmentsToChildren
 
 Note: setting ``:InheritParentRoleAssignments`` will automatically trigger inheritance of the parent dataverse's role assignments for a newly created dataverse. Hence this API call is intended as a way to update existing child dataverses or to update children after a change in role assignments has been made on a parent dataverse.
+
+
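
The new edit endpoint documented above can be exercised end to end; a minimal sketch, reusing the doc's example file id (95) and assuming the standard access API endpoint for retrieving a tabular file's current DDI (an assumption, since that endpoint is not part of this diff):

    export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    export SERVER_URL=http://localhost:8080
    export ID=95

    # fetch the file's current DDI to use as a starting point (assumed endpoint)
    curl "$SERVER_URL/api/access/datafile/$ID/metadata/ddi" > ddi.xml

    # edit ddi.xml (variable labels, categories, etc.), then upload the result
    curl -H "X-Dataverse-key:$API_TOKEN" -X PUT "$SERVER_URL/api/edit/$ID" --upload-file ddi.xml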

doc/sphinx-guides/source/conf.py

Lines changed: 2 additions & 2 deletions
@@ -65,9 +65,9 @@
 # built documents.
 #
 # The short X.Y version.
-version = '4.15'
+version = '4.15.1'
 # The full version, including alpha/beta/rc tags.
-release = '4.15'
+release = '4.15.1'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

doc/sphinx-guides/source/installation/config.rst

Lines changed: 33 additions & 9 deletions
@@ -226,7 +226,7 @@ Then create a password alias by running (without changes):
 
 .. code-block:: none
 
-    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.swift.password.endpoint1='${ALIAS=swiftpassword-alias}'"
+    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.swift.password.endpoint1='\${ALIAS=swiftpassword-alias}'"
     ./asadmin $ASADMIN_OPTS create-password-alias swiftpassword-alias
 
 The second command will trigger an interactive prompt asking you to input your Swift password.
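
The escaping change above matters because these commands are run from a shell: without the backslash, bash performs the ${NAME=default} expansion itself instead of passing the placeholder through to asadmin. A minimal illustration of the difference (plain bash, not Dataverse-specific):

    # unescaped: the shell substitutes (and sets) ALIAS before asadmin sees it
    echo "'${ALIAS=swiftpassword-alias}'"    # prints 'swiftpassword-alias'
    # escaped: the literal placeholder survives for asadmin to record
    echo "'\${ALIAS=swiftpassword-alias}'"   # prints '${ALIAS=swiftpassword-alias}'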
@@ -658,16 +658,19 @@ For Google Analytics, the example script at :download:`analytics-code.html </_st
 
 Once this script is running, you can look in the Google Analytics console (Realtime/Events or Behavior/Events) and view events by type and/or the Dataset or File the event involves.
 
-DuraCloud/Chronopolis Integration
----------------------------------
+BagIt Export
+------------
 
-It's completely optional to integrate your installation of Dataverse with DuraCloud/Chronopolis but the details are listed here to keep the :doc:`/admin/integrations` section of the Admin Guide shorter.
+Dataverse may be configured to submit a copy of published Datasets, packaged as `Research Data Alliance conformant <https://www.rd-alliance.org/system/files/Research%20Data%20Repository%20Interoperability%20WG%20-%20Final%20Recommendations_reviewed_0.pdf>`_ zipped `BagIt <https://tools.ietf.org/html/draft-kunze-bagit-17>`_ bags to `Chronopolis <https://libraries.ucsd.edu/chronopolis/>`_ via `DuraCloud <https://duraspace.org/duracloud/>`_, or alternately to any folder on the local filesystem.
 
-Dataverse can be configured to submit a copy of published Datasets, packaged as `Research Data Alliance conformant <https://www.rd-alliance.org/system/files/Research%20Data%20Repository%20Interoperability%20WG%20-%20Final%20Recommendations_reviewed_0.pdf>`_ zipped `BagIt <https://tools.ietf.org/html/draft-kunze-bagit-17>`_ bags to the `Chronopolis <https://libraries.ucsd.edu/chronopolis/>`_ via `DuraCloud <https://duraspace.org/duracloud/>`_
+Dataverse offers an internal archive workflow which may be configured as a PostPublication workflow via an admin API call to manually submit previously published Datasets and prior versions to a configured archive such as Chronopolis. The workflow creates a `JSON-LD <http://www.openarchives.org/ore/0.9/jsonld>`_ serialized `OAI-ORE <https://www.openarchives.org/ore/>`_ map file, which is also available as a metadata export format in the Dataverse web interface.
 
-This integration is occurs through customization of an internal Dataverse archiver workflow that can be configured as a PostPublication workflow to submit the bag to Chronopolis' Duracloud interface using your organization's credentials. An admin API call exists that can manually submit previously published Datasets, and prior versions, to a configured archive such as Chronopolis. The workflow leverages new functionality in Dataverse to create a `JSON-LD <http://www.openarchives.org/ore/0.9/jsonld>`_ serialized `OAI-ORE <https://www.openarchives.org/ore/>`_ map file, which is also available as a metadata export format in the Dataverse web interface.
+At present, the DPNSubmitToArchiveCommand and LocalSubmitToArchiveCommand are the only implementations extending the AbstractSubmitToArchiveCommand and using the configurable mechanisms discussed below.
 
-At present, the DPNSubmitToArchiveCommand is the only implementation extending the AbstractSubmitToArchiveCommand and using the configurable mechanisms discussed below.
+.. _Duracloud Configuration:
+
+Duracloud Configuration
++++++++++++++++++++++++
 
 Also note that while the current Chronopolis implementation generates the bag and submits it to the archive's DuraCloud interface, the step to make a 'snapshot' of the space containing the Bag (and verify its successful submission) are actions a curator must take in the DuraCloud interface.
 
@@ -695,7 +698,27 @@ Archivers may require glassfish settings as well. For the Chronopolis archiver,
 
 ``./asadmin create-jvm-options '-Dduracloud.password=YOUR_PASSWORD_HERE'``
 
-**API Call**
+.. _Local Path Configuration:
+
+Local Path Configuration
+++++++++++++++++++++++++
+
+\:ArchiverClassName - the fully qualified class to be used for archiving. For example\:
+
+``curl -X PUT -d "edu.harvard.iq.dataverse.engine.command.impl.LocalSubmitToArchiveCommand" http://localhost:8080/api/admin/settings/:ArchiverClassName``
+
+\:BagItLocalPath - the path to where you want to store the BagIt bags. For example\:
+
+``curl -X PUT -d /home/path/to/storage http://localhost:8080/api/admin/settings/:BagItLocalPath``
+
+\:ArchiverSettings - the archiver class can access required settings including existing Dataverse settings and dynamically defined ones specific to the class. This setting is a comma-separated list of those settings. For example\:
+
+``curl http://localhost:8080/api/admin/settings/:ArchiverSettings -X PUT -d ":BagItLocalPath"``
+
+\:BagItLocalPath is the file path that you've set in \:ArchiverSettings.
+
+API Call
+++++++++
 
 Once this configuration is complete, you, as a user with the *PublishDataset* permission, should be able to use the API call to manually submit a DatasetVersion for processing:
 
@@ -711,7 +734,8 @@ The submitDataVersionToArchive API (and the workflow discussed below) attempt to
 
 In the Chronopolis case, since the transfer from the DuraCloud front-end to archival storage in Chronopolis can take significant time, it is currently up to the admin/curator to submit a 'snap-shot' of the space within DuraCloud and to monitor its successful transfer. Once transfer is complete the space should be deleted, at which point the Dataverse API call can be used to submit a Bag for other versions of the same Dataset. (The space is reused, so that archival copies of different Dataset versions correspond to different snapshots of the same DuraCloud space.)
 
-**PostPublication Workflow**
+PostPublication Workflow
+++++++++++++++++++++++++
 
 To automate the submission of archival copies to an archive as part of publication, one can set up a Dataverse Workflow using the "archiver" workflow step - see the :doc:`/developers/workflows` guide. The archiver step uses the configuration information discussed above including the :ArchiverClassName setting. The workflow step definition should include the set of properties defined in \:ArchiverSettings in the workflow definition.
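
Taken together, the new local-filesystem option can be exercised end to end. A minimal sketch, assuming an admin on localhost; the storage path, dataset id (100), and version (1.0) are placeholders, and the exact submitDataVersionToArchive path is inferred from the API name mentioned above rather than shown in this diff:

    # point the archiver at the local-filesystem implementation
    curl -X PUT -d "edu.harvard.iq.dataverse.engine.command.impl.LocalSubmitToArchiveCommand" http://localhost:8080/api/admin/settings/:ArchiverClassName
    # where the zipped bags should be written
    curl -X PUT -d /home/path/to/storage http://localhost:8080/api/admin/settings/:BagItLocalPath
    # expose that setting to the archiver class
    curl -X PUT -d ":BagItLocalPath" http://localhost:8080/api/admin/settings/:ArchiverSettings
    # manually submit one published DatasetVersion for archiving (assumed path)
    curl -X POST -H "X-Dataverse-key: $API_TOKEN" http://localhost:8080/api/admin/submitDataVersionToArchive/100/1.0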

doc/sphinx-guides/source/installation/external-tools.rst

Lines changed: 2 additions & 0 deletions
@@ -19,6 +19,8 @@ Support for external tools is just getting off the ground but the following tool
 
 - `File Previewers <https://github.com/QualitativeDataRepository/dataverse-previewers>`_: A set of tools that display the content of files - including audio, html, `Hypothes.is <https://hypothes.is/>`_ annotations, images, PDF, text, video - allowing them to be viewed without downloading. The previewers can be run directly from github.io, so the only required step is using the Dataverse API to register the ones you want to use. Documentation, including how to optionally brand the previewers, and an invitation to contribute through github are in the README.md file.
 
+- Data Curation Tool: a GUI for curating data by adding labels, groups, weights and other details to assist with informed reuse. See the README.md file at https://github.com/scholarsportal/Dataverse-Data-Curation-Tool for the installation instructions.
+
 - [Your tool here! Please get in touch! :) ]
 
 
doc/sphinx-guides/source/installation/prerequisites.rst

Lines changed: 10 additions & 9 deletions
@@ -228,26 +228,27 @@ Solr launches asynchronously and attempts to use the ``lsof`` binary to watch fo
 
     # yum install lsof
 
-Finally, you may start Solr and create the core that will be used to manage search information::
+Finally, you need to tell Solr to create the core "collection1" on startup::
 
-    cd /usr/local/solr/solr-7.3.1
-    bin/solr start
-    bin/solr create_core -c collection1 -d server/solr/collection1/conf/
-
+    echo "name=collection1" > /usr/local/solr/solr-7.3.1/server/solr/collection1/core.properties
 
 Solr Init Script
 ================
 
-For systems running systemd, as root, download :download:`solr.service<../_static/installation/files/etc/systemd/solr.service>` and place it in ``/tmp``. Then start Solr and configure it to start at boot with the following commands::
+Please choose the right option for your underlying Linux operating system.
+It will not be necessary to execute both!
+
+For systems running systemd (like CentOS/RedHat since 7, Debian since 9, Ubuntu since 15.04), as root, download :download:`solr.service<../_static/installation/files/etc/systemd/solr.service>` and place it in ``/tmp``. Then start Solr and configure it to start at boot with the following commands::
 
-    cp /tmp/solr.service /usr/lib/systemd/system
+    cp /tmp/solr.service /etc/systemd/system
+    systemctl daemon-reload
     systemctl start solr.service
    systemctl enable solr.service
 
-For systems using init.d, download this :download:`Solr init script <../_static/installation/files/etc/init.d/solr>` and place it in ``/tmp``. Then start Solr and configure it to start at boot with the following commands::
+For systems using init.d (like CentOS 6), download this :download:`Solr init script <../_static/installation/files/etc/init.d/solr>` and place it in ``/tmp``. Then start Solr and configure it to start at boot with the following commands::
 
     cp /tmp/solr /etc/init.d
-    systemctl restart solr.service
+    service solr start
     chkconfig solr on
 
 Securing Solr
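
Once Solr is running under either init system, the new "collection1" core can be verified; a quick check, assuming Solr's default port and its standard ping handler:

    # a healthy core answers with status "OK"
    curl "http://localhost:8983/solr/collection1/admin/ping"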

doc/sphinx-guides/source/user/account.rst

Lines changed: 5 additions & 1 deletion
@@ -142,7 +142,11 @@ You can also convert your Dataverse account to use authentication provided by Git
 My Data
 -------
 
-The My Data section of your account page displays a listing of all the dataverses, datasets, and files you have either created, uploaded or that you have access to edit. You are able to filter through all the dataverses, datasets, and files listed there using the filter box. You may also use the facets on the left side to only view a specific Publication Status or Role.
+The My Data section of your account page displays a listing of all the dataverses, datasets, and files you have either created, uploaded, or have a role assigned on. If you see unexpected dataverses or datasets in your My Data page, it might be because someone has assigned your account a role on them. For example, some institutions automatically assign the "File Downloader" role on their datasets to all accounts using their institutional login.
+
+
+You are able to filter through all the dataverses, datasets, and files listed on your My Data page using the filter box. You may also use the facets on the left side to only view a specific Publication Status or Role.
+
 
 Notifications
 -------------

doc/sphinx-guides/source/user/dataset-management.rst

Lines changed: 6 additions & 1 deletion
@@ -66,7 +66,7 @@ If there are multiple upload options available, then you must choose which one t
 
 You can upload files to a dataset while first creating that dataset. You can also upload files after creating a dataset by clicking the "Edit" button at the top of the dataset page and from the dropdown list selecting "Files (Upload)" or clicking the "Upload Files" button above the files table in the Files tab. From either option you will be brought to the Upload Files page for that dataset.
 
-Certain file types in Dataverse are supported by additional functionality, which can include downloading in different formats, subsets, file-level metadata preservation, file-level data citation; and exploration through data visualization and analysis. See the File Handling section of this page for more information.
+Certain file types in Dataverse are supported by additional functionality, which can include downloading in different formats, subsets, file-level metadata preservation, file-level data citation with UNFs, and exploration through data visualization and analysis. See the File Handling section of this page for more information.
 
 
 HTTP Upload

@@ -229,6 +229,11 @@ You will not have to leave the dataset page to complete these action, except for
 
 If you restrict files, you will also be prompted with a popup asking you to fill out the Terms of Access for the files. If Terms of Access already exist, you will be asked to confirm them. Note that some Dataverse installations do not allow for file restrictions.
 
+Edit File Variable Metadata
+---------------------------
+
+Variable Metadata can be edited directly through an API call (:ref:`API Guide: Editing Variable Level Metadata <EditingVariableMetadata>`) or by using the `Dataverse Data Curation Tool <https://github.com/scholarsportal/Dataverse-Data-Curation-Tool>`_.
+
 File Path
 ---------
 
doc/sphinx-guides/source/user/dataverse-management.rst

Lines changed: 6 additions & 6 deletions
@@ -20,13 +20,13 @@ Creating a dataverse is easy but first you must be a registered user (see :doc:`
 #. Once you are logged in click on the "Add Data" button and in the dropdown menu select "New Dataverse".
 #. Once on the "New Dataverse" page fill in the following fields:
    * **Name**: Enter the name of your dataverse.
-   * **Identifier**: This is an abbreviation, usually lower-case, that becomes part of the URL for the new dataverse. Special characters (~,\`, !, @, #, $, %, ^, &, and \*) and spaces are not allowed. **Note**: if you change the Dataverse URL field, the URL for your Dataverse changes (http//.../'url'), which affects links to this page.
+   * **Identifier**: This is an abbreviation, usually lower-case, that becomes part of the URL for the new dataverse. Special characters (~,\`, !, @, #, $, %, ^, &, and \*) and spaces are not allowed. **Note**: if you change this field in the future, the URL for your Dataverse will change (http://.../'identifier'), which will break older links to the page.
    * **Email**: This is the email address that will be used as the contact for this particular dataverse. You can have more than one contact email address for your dataverse.
-   * **Affiliation**: Add any Affiliation that can be associated to this particular dataverse (e.g., project name, institute name, department name, journal name, etc). This is automatically filled out if you have added an affiliation for your user account.
-   * **Description**: Provide a description of this dataverse. This will display on the landing page of your dataverse and in the search result list. The description field supports certain HTML tags (<a>, <b>, <blockquote>, <br>, <code>, <del>, <dd>, <dl>, <dt>, <em>, <hr>, <h1>-<h3>, <i>, <img>, <kbd>, <li>, <ol>, <p>, <pre>, <s>, <sup>, <sub>, <strong>, <strike>, <ul>).
-   * **Category**: Select a category that best describes the type of dataverse this will be. For example, if this is a dataverse for an individual researcher's datasets, select Researcher. If this is a dataverse for an institution, select Organization & Institution.
-   * **Choose the sets of Metadata Elements for datasets in this dataverse**: By default the metadata elements will be from the host dataverse that this new dataverse is created in. Dataverse offers metadata standards for multiple domains. To learn more about the metadata standards in Dataverse please check out the :doc:`/user/appendix`.
-   * **Select facets for this dataverse**: by default the facets that will appear on your dataverse landing page will be from the host dataverse that this new dataverse was created in. The facets are simply metadata fields that can be used to help others easily find dataverses and datasets within this dataverse. You can select as many facets as you would like.
+   * **Affiliation**: Add any Affiliation that can be associated with this particular dataverse (e.g., project name, institute name, department name, journal name, etc). This is automatically filled out if you have added an affiliation for your user account.
+   * **Description**: Provide a description of this dataverse. This will display on the landing page of your dataverse and in the search result list. The description field supports certain HTML tags, if you'd like to format your text (<a>, <b>, <blockquote>, <br>, <code>, <del>, <dd>, <dl>, <dt>, <em>, <hr>, <h1>-<h3>, <i>, <img>, <kbd>, <li>, <ol>, <p>, <pre>, <s>, <sup>, <sub>, <strong>, <strike>, <ul>).
+   * **Category**: Select a category that best describes the type of dataverse this will be. For example, if this is a dataverse for an individual researcher's datasets, select *Researcher*. If this is a dataverse for an institution, select *Organization or Institution*.
+   * **Choose the sets of Metadata Fields for datasets in this dataverse**: By default the metadata elements will be from the host dataverse that this new dataverse is created in. Dataverse offers metadata standards for multiple domains. To learn more about the metadata standards in Dataverse please check out the :doc:`/user/appendix`.
+   * **Select facets for this dataverse**: Choose which metadata fields will be used as facets on your dataverse. These facets will allow users browsing or searching your dataverse to filter its contents according to the fields you've selected. For example, if you select "Subject" as a facet, users will be able to filter your dataverse's contents by subject area. By default, the facets that will appear on your dataverse landing page will be from the host dataverse that this new dataverse was created in, but you can add or remove facets from this default.
 #. Selected metadata elements are also used to pick which metadata fields you would like to use for creating templates for your datasets. Metadata fields can be hidden, or selected as required or optional. Once you have selected all the fields you would like to use, you can create your template(s) after you finish creating your dataverse.
 #. Click "Create Dataverse" button and you're done!
 
