# FAQs

### About the project

<details>

<summary>What is Evalverse?</summary>

**Evalverse** is a freely accessible, open-source project designed to support your LLM (Large Language Model) evaluations. We provide a simple, standardized, and user-friendly solution for the processing and management of LLM evaluations, catering to the needs of AI research engineers and scientists. Even if you are not very familiar with LLMs, you can easily use Evalverse.

</details>

<details>

<summary>Why should I use Evalverse?</summary>

* **Unified evaluation with submodules:** For unified and expandable evaluation, Evalverse uses Git submodules to integrate external evaluation frameworks such as [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [FastChat](https://github.com/lm-sys/FastChat). One can easily add new submodules to support more external evaluation frameworks, and can always fetch upstream changes from the submodules to stay up to date with evaluation practices in the fast-paced LLM field.
* **No-code evaluation request:** Evalverse supports no-code evaluation via Slack requests. The user types `Request!` in a direct message or Slack channel where an Evalverse Slack bot is active. The Slack bot asks the user to enter a model name from the Hugging Face Hub or a local model directory path, then executes the evaluation process.
* **LLM evaluation report:** Evalverse can also provide reports on finished evaluations in a no-code manner. To receive an evaluation report, the user first types `Report!`. Once the user selects a model and evaluation criteria, Evalverse calculates the average scores and rankings from the evaluation results stored in the database and provides a report with a performance table and a visualized graph.

</details>

<details>

<summary>How to use Evalverse?</summary>

We suggest kicking off your journey by exploring [Quickstart](https://evalverse.gitbook.io/evalverse-docs/lets-start/quickstart). If you have any questions along the way, feel free to ask on Discord.

</details>

### Support

<details>

<summary>How to cite Evalverse project?</summary>

If you want to cite the Evalverse project, feel free to use the following BibTeX entry.

```bibtex
@misc{evalverse,
  title = {Evalverse},
  author = {Jihoo Kim and Wonho Song and Dahyun Kim and Yoonsoo Kim and Yungi Kim and Chanjun Park},
  year = {2024},
  eprint = {2404.00943},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```

</details>

<details>

<summary>I have a question or something to share.</summary>

Head to the Discord channel for general inquiries or assistance. For bugs, please report them directly on [GitHub Issues](https://github.com/UpstageAI/evalverse/issues).

Typically, you can anticipate a response within 1 to 2 business days.

</details>

<details>

<summary>I found a bug.</summary>

Please report it on the [GitHub Issues](https://github.com/UpstageAI/evalverse/issues).

Typically, you can anticipate a response within 1 to 2 business days.

</details>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://evalverse.gitbook.io/evalverse-docs/documents/faqs.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
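As a sketch, the request URL above can be constructed programmatically. The helper below is hypothetical (not part of Evalverse or GitBook); it simply URL-encodes a natural-language question and appends it as the `ask` query parameter to the page URL shown above.

```python
from urllib.parse import urlencode

# Page URL taken from the example above.
BASE_URL = "https://evalverse.gitbook.io/evalverse-docs/documents/faqs.md"

def build_ask_url(question: str) -> str:
    """Hypothetical helper: build the GET URL that asks the docs a question.

    urlencode handles percent-encoding, so spaces and punctuation in the
    question are safe to include verbatim.
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

print(build_ask_url("How do I add a new evaluation submodule?"))
# → https://evalverse.gitbook.io/evalverse-docs/documents/faqs.md?ask=How+do+I+add+a+new+evaluation+submodule%3F
```

Any HTTP client (for example `urllib.request.urlopen` or `requests.get`) can then perform the GET request on the resulting URL and read the answer from the response body.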
