Should I try multiple optimizers when fine-tuning a pre-trained Transformer for NLP tasks? Should I tune their hyperparameters?
Resource URI: https://dblp.l3s.de/d2r/resource/publications/conf/eacl/GkoutiMTA24
Property                        Value
dcterms:bibliographicCitation   <http://dblp.uni-trier.de/rec/bibtex/conf/eacl/GkoutiMTA24>
dc:creator                      <https://dblp.l3s.de/d2r/resource/authors/Ion_Androutsopoulos>
dc:creator                      <https://dblp.l3s.de/d2r/resource/authors/Nefeli_Gkouti>
dc:creator                      <https://dblp.l3s.de/d2r/resource/authors/Prodromos_Malakasiotis>
dc:creator                      <https://dblp.l3s.de/d2r/resource/authors/Stavros_Toumpis>
foaf:homepage                   <https://aclanthology.org/2024.eacl-long.157>
dc:identifier                   DBLP conf/eacl/GkoutiMTA24 (xsd:string)
dcterms:issued                  2024 (xsd:gYear)
rdfs:label                      Should I try multiple optimizers when fine-tuning a pre-trained Transformer for NLP tasks? Should I tune their hyperparameters? (xsd:string)
foaf:maker                      <https://dblp.l3s.de/d2r/resource/authors/Ion_Androutsopoulos>
foaf:maker                      <https://dblp.l3s.de/d2r/resource/authors/Nefeli_Gkouti>
foaf:maker                      <https://dblp.l3s.de/d2r/resource/authors/Prodromos_Malakasiotis>
foaf:maker                      <https://dblp.l3s.de/d2r/resource/authors/Stavros_Toumpis>
swrc:pages                      2555-2574 (xsd:string)
dcterms:partOf                  <https://dblp.l3s.de/d2r/resource/publications/conf/eacl/2024-1>
owl:sameAs                      <http://bibsonomy.org/uri/bibtexkey/conf/eacl/GkoutiMTA24/dblp>
owl:sameAs                      <http://dblp.rkbexplorer.com/id/conf/eacl/GkoutiMTA24>
rdfs:seeAlso                    <http://dblp.uni-trier.de/db/conf/eacl/eacl2024-1.html#GkoutiMTA24>
rdfs:seeAlso                    <https://aclanthology.org/2024.eacl-long.157>
swrc:series                     <https://dblp.l3s.de/d2r/resource/conferences/eacl>
dc:title                        Should I try multiple optimizers when fine-tuning a pre-trained Transformer for NLP tasks? Should I tune their hyperparameters? (xsd:string)
dc:type                         <http://purl.org/dc/dcmitype/Text>
rdf:type                        swrc:InProceedings
rdf:type                        foaf:Document
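
Because this is a D2R linked-data record, the same triples can be fetched programmatically instead of scraped from the HTML view. Below is a minimal Python sketch using rdflib, assuming the dblp.l3s.de D2R server still content-negotiates an RDF serialization for this resource URI; the pairs it prints should mirror the Property/Value table above.

    from rdflib import Graph, URIRef
    from rdflib.namespace import DC

    # Resource URI from the record above. Assumption: the D2R server
    # 303-redirects to an RDF serialization (e.g. RDF/XML) when asked for one.
    uri = URIRef("https://dblp.l3s.de/d2r/resource/publications/conf/eacl/GkoutiMTA24")

    g = Graph()
    g.parse(uri)  # rdflib requests RDF formats via its Accept headers

    # Print every property/value pair for this publication,
    # mirroring the Property/Value table above.
    for p, o in g.predicate_objects(subject=uri):
        print(p, o)

    # Pull out a single literal, e.g. the dc:title of the paper.
    print(g.value(subject=uri, predicate=DC.title))

Note that properties such as dc:creator and owl:sameAs repeat with different objects; RDF records multi-valued properties as separate triples, which is why the loop above, like the table, lists dc:creator once per author.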