Using Spring AI With LLMs To Generate Java Tests

The AIDocumentLibraryChat project has been extended to generate test code (Java code has been tested). The project can generate test code for publicly available GitHub projects. The URL of the class to test can be provided; the class is then loaded, the imports are analyzed, and the dependent classes in the project are also loaded. That gives the LLM the opportunity to consider the imported source classes while generating mocks for the tests. A testUrl can be provided to give the LLM an example test to base the generated test on. The granite-code and deepseek-coder-v2 models have been tested with Ollama.

The goal is to test how well LLMs can help developers create tests.

Implementation

Configuration

To select the LLM model, the application-ollama.properties file needs to be updated:

spring.ai.ollama.base-url=${OLLAMA-BASE-URL:http://localhost:11434}
spring.ai.ollama.embedding.enabled=false
spring.ai.embedding.transformer.enabled=true
document-token-limit=150
embedding-token-limit=500
spring.liquibase.change-log=classpath:/dbchangelog/db.changelog-master-ollama.xml

...

# generate code
#spring.ai.ollama.chat.model=granite-code:20b
#spring.ai.ollama.chat.options.num-ctx=8192

spring.ai.ollama.chat.options.num-thread=8
spring.ai.ollama.chat.options.keep_alive=1s

spring.ai.ollama.chat.model=deepseek-coder-v2:16b
spring.ai.ollama.chat.options.num-ctx=65536

The spring.ai.ollama.chat.model property selects the LLM code model to use.

The spring.ai.ollama.chat.options.num-ctx property sets the number of tokens in the context window. The context window has to hold both the tokens of the request and the tokens of the response.

The spring.ai.ollama.chat.options.num-thread property can be used if Ollama does not choose the right number of cores to use. The spring.ai.ollama.chat.options.keep_alive property sets the number of seconds the context window is kept alive.

Controller

The interface to get the sources and to generate the tests is the controller:

@RestController
@RequestMapping("rest/code-generation")
public class CodeGenerationController {
  private final CodeGenerationService codeGenerationService;

  public CodeGenerationController(CodeGenerationService
    codeGenerationService) {
    this.codeGenerationService = codeGenerationService;
  }

  @GetMapping("/test")
  public String getGenerateTests(@RequestParam("url") String url,
    @RequestParam(name = "testUrl", required = false) String testUrl) {
    return this.codeGenerationService.generateTest(URLDecoder.decode(url,
      StandardCharsets.UTF_8),
      Optional.ofNullable(testUrl).map(myValue -> URLDecoder.decode(myValue,
        StandardCharsets.UTF_8)));
  }

  @GetMapping("/sources")
  public GithubSources getSources(@RequestParam("url") String url,
    @RequestParam(name = "testUrl", required = false) String testUrl) {
    var sources = this.codeGenerationService.createTestSources(
      URLDecoder.decode(url, StandardCharsets.UTF_8), true);
    var test = Optional.ofNullable(testUrl).map(myTestUrl ->
      this.codeGenerationService.createTestSources(
        URLDecoder.decode(myTestUrl, StandardCharsets.UTF_8), false))
          .orElse(new GithubSource("none", "none", List.of(), List.of()));
    return new GithubSources(sources, test);
  }
}

The CodeGenerationController has the method getSources(...). It gets the URL of the class to generate tests for and optionally the testUrl of the example test. It decodes the request parameters and calls the createTestSources(...) method with them. The method returns the GithubSources with the sources of the class to test, its dependencies in the project, and the test example.

The method getGenerateTests(...) gets the url of the class under test and the optional testUrl, URL-decodes both, and calls the generateTest(...) method of the CodeGenerationService.
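
The GithubSource and GithubSources records are not shown in this article. Based on how they are used in the controller and the service, they could look roughly like this (the field names are assumptions derived from the calls above):

// Assumed record shapes, derived from their usage in the controller and the service.
public record GithubSource(String sourceName, String sourcePackage,
  List<String> lines, List<GithubSource> dependencies) {
}

public record GithubSources(GithubSource sources, GithubSource test) {
}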

Service

The CodeGenerationService collects the classes from GitHub and generates the test code for the class under test.

The service with the prompts looks like this:

@Service
public class CodeGenerationService {
  private static final Logger LOGGER = LoggerFactory
    .getLogger(CodeGenerationService.class);
  private final GithubClient githubClient;
  private final ChatClient chatClient;
  private final String ollamaPrompt = """
    You are an assistant to generate spring tests for the class under test.
    Analyse the classes provided and generate tests for all methods. Base
    your tests on the example.
    Generate and implement the test methods. Generate and implement complete
    test methods.
    Generate the complete source of the test class.

    Generate tests for this class:
    {classToTest}

    Use these classes as context for the tests:
    {contextClasses}

    {testExample}
    """;
  private final String ollamaPrompt1 = """
    You are an assistant to generate a spring test class for the source
    class.
    1. Analyse the source class
    2. Analyse the context classes for the classes used by the source class
    3. Analyse the class in test example to base the code of the generated
    test class on it.
    4. Generate a test class for the source class, use the context classes as
    sources for it and base the code of the test class on the test example.
    Generate the complete source code of the test class implementing the
    tests.

    {testExample}

    Use these context classes as extension for the source class:
    {contextClasses}

    Generate the complete source code of the test class implementing the
    tests.
    Generate tests for this source class:
    {classToTest}
    """;
  @Value("${spring.ai.ollama.chat.options.num-ctx:0}")
  private Long contextWindowSize;

  public CodeGenerationService(GithubClient githubClient, ChatClient
    chatClient) {
    this.githubClient = githubClient;
    this.chatClient = chatClient;
  }

This is the CodeGenerationService with the GithubClient and the ChatClient. The GithubClient is used to load the sources from a publicly available repository, and the ChatClient is the Spring AI interface to access the AI/LLM.
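
The GithubClient itself is not shown here. Judging from its usage in createTestSources(...) below, its public surface could look roughly like this sketch (the result record name and the base URL value are assumptions):

// Assumed shape of the GithubClient, derived from its usage in the service.
public interface GithubClient {
  // Base URL for raw file content; the exact value is an assumption.
  String GITHUB_BASE_URL = "https://raw.githubusercontent.com";

  // The result carries the class name, the package, and the raw source lines.
  record SourceFile(String sourceName, String sourcePackage, List<String> lines) {}

  SourceFile readSourceFile(String url);
}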

The ollamaPrompt is the prompt for the IBM Granite LLM with a context window of 8k tokens. The {classToTest} placeholder is replaced with the source code of the class under test. The {contextClasses} placeholder can be replaced with the dependent classes of the class under test, and the {testExample} placeholder is optional and can be replaced with a test class that serves as an example for the code generation.

The ollamaPrompt1 is the prompt for the Deepseek Coder V2 LLM. This LLM can "understand" or work with a chain-of-thought prompt and has a context window of more than 64k tokens. The {...} placeholders work the same as in the ollamaPrompt. The long context window enables the addition of context classes to the prompt for code generation.
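
The placeholders are filled in with Spring AI's PromptTemplate, the same class the service uses later in generateTest(...). A minimal sketch of the substitution (the template and the values here are made up for illustration):

// Illustration only: render a prompt template with placeholder values.
var template = """
  Generate tests for this class:
  {classToTest}

  Use these classes as context for the tests:
  {contextClasses}
  """;
var message = new PromptTemplate(template,
  Map.of("classToTest", "public class ActorService { }",
    "contextClasses", "public record ActorDto(String name) { }"))
  .createMessage();
LOGGER.debug(message.getContent());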

The contextWindowSize property is injected by Spring to check whether the context window of the LLM is big enough to add the {contextClasses} to the prompt.

The method createTestSources(...) collects and returns the sources for the AI/LLM prompts:

public GithubSource createTestSources(String url, final boolean
  referencedSources) {
  final var myUrl = url.replace("https://github.com",
    GithubClient.GITHUB_BASE_URL).replace("/blob", "");
  var result = this.githubClient.readSourceFile(myUrl);
  final var isComment = new AtomicBoolean(false);
  final var sourceLines = result.lines().stream().map(myLine ->
      myLine.replaceAll("[\\t]", "").trim())
    .filter(myLine -> !myLine.isBlank()).filter(myLine ->
      filterComments(isComment, myLine)).toList();
  final var basePackage = List.of(result.sourcePackage()
    .split("\\.")).stream().limit(2)
    .collect(Collectors.joining("."));
  final var dependencies = this.createDependencies(referencedSources, myUrl,
    sourceLines, basePackage);
  return new GithubSource(result.sourceName(), result.sourcePackage(),
    sourceLines, dependencies);
}

private List<GithubSource> createDependencies(final boolean
  referencedSources, final String myUrl, final List<String> sourceLines,
  final String basePackage) {
  return sourceLines.stream().filter(x -> referencedSources)
    .filter(myLine -> myLine.contains("import"))
    .filter(myLine -> myLine.contains(basePackage))
    .map(myLine -> String.format("%s%s%s",
      myUrl.split(basePackage.replace(".", "/"))[0].trim(),
      myLine.split("import")[1].split(";")[0].replaceAll("\\.",
        "/").trim(), myUrl.substring(myUrl.lastIndexOf('.'))))
    .map(myLine -> this.createTestSources(myLine, false)).toList();
}

private boolean filterComments(AtomicBoolean isComment, String myLine) {
  var result1 = true;
  if (myLine.contains("/*") || isComment.get()) {
    isComment.set(true);
    result1 = false;
  }
  if (myLine.contains("*/")) {
    isComment.set(false);
    result1 = false;
  }
  result1 = result1 && !myLine.trim().startsWith("//");
  return result1;
}

The method createTestSources(...) provides the GithubSource records with the source code of the GitHub source url and, depending on the value of referencedSources, the sources of the dependent classes in the project.

To do that, the myUrl is created to get the raw source code of the class. Then the githubClient is used to read the source file as a string. The source string is then turned into source lines without formatting and comments with the method filterComments(...).

To read the dependent classes in the project, the base package is used. For example, in a package ch.xxx.aidoclibchat.usecase.service the base package is ch.xxx. The method createDependencies(...) is used to create the GithubSource records for the dependent classes in the base package. The basePackage parameter is used to filter the imports, and then the method createTestSources(...) is called recursively with the parameter referencedSources set to false to stop the recursion. That is how the GithubSource records of the dependent classes are created.
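
The string manipulation in createDependencies(...) turns an import statement into the raw GitHub URL of the imported class. A small illustration with a hypothetical URL and import line (the values are made up, the logic mirrors the method above):

// Illustration of the URL construction in createDependencies(...), with made-up values.
var myUrl = "https://raw.githubusercontent.com/Angular2Guy/MovieManager/master/backend/"
  + "src/main/java/ch/xxx/moviemanager/usecase/service/ActorService.java";
var importLine = "import ch.xxx.moviemanager.domain.model.dto.ActorDto;";
var basePackage = "ch.xxx";

var dependencyUrl = String.format("%s%s%s",
  // everything before the base package path: ".../src/main/java/"
  myUrl.split(basePackage.replace(".", "/"))[0].trim(),
  // the imported class as a path: "ch/xxx/moviemanager/domain/model/dto/ActorDto"
  importLine.split("import")[1].split(";")[0].replaceAll("\\.", "/").trim(),
  // the file extension of the class under test: ".java"
  myUrl.substring(myUrl.lastIndexOf('.')));
// dependencyUrl points at the raw source of the imported class:
// .../master/backend/src/main/java/ch/xxx/moviemanager/domain/model/dto/ActorDto.java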

The method generateTest(...) is used to create the test sources for the class under test with the AI/LLM:

public String generateTest(String url, Optional<String> testUrlOpt) {
  var start = Instant.now();
  var githubSource = this.createTestSources(url, true);
  var githubTestSource = testUrlOpt.map(testUrl ->
    this.createTestSources(testUrl, false))
      .orElse(new GithubSource(null, null, List.of(), List.of()));
  String contextClasses = githubSource.dependencies().stream()
    .filter(x -> this.contextWindowSize >= 16 * 1024)
    .map(myGithubSource -> myGithubSource.sourceName() + ":" +
      System.getProperty("line.separator")
      + myGithubSource.lines().stream()
        .collect(Collectors.joining(System.getProperty("line.separator"))))
    .collect(Collectors.joining(System.getProperty("line.separator")));
  String testExample = Optional.ofNullable(githubTestSource.sourceName())
    .map(x -> "Use this as test example class:" +
      System.getProperty("line.separator") +
      githubTestSource.lines().stream()
        .collect(Collectors.joining(System.getProperty("line.separator"))))
    .orElse("");
  String classToTest = githubSource.lines().stream()
    .collect(Collectors.joining(System.getProperty("line.separator")));
  LOGGER.debug(new PromptTemplate(this.contextWindowSize >= 16 * 1024 ?
    this.ollamaPrompt1 : this.ollamaPrompt, Map.of("classToTest",
      classToTest, "contextClasses", contextClasses, "testExample",
      testExample)).createMessage().getContent());
  LOGGER.info("Generation started with context window: {}",
    this.contextWindowSize);
  var response = chatClient.call(new PromptTemplate(
    this.contextWindowSize >= 16 * 1024 ? this.ollamaPrompt1 :
      this.ollamaPrompt, Map.of("classToTest", classToTest, "contextClasses",
      contextClasses, "testExample", testExample)).create());
  if ((Instant.now().getEpochSecond() - start.getEpochSecond()) >= 300) {
    LOGGER.info(response.getResult().getOutput().getContent());
  }
  LOGGER.info("Prompt tokens: " +
    response.getMetadata().getUsage().getPromptTokens());
  LOGGER.info("Generation tokens: " +
    response.getMetadata().getUsage().getGenerationTokens());
  LOGGER.info("Total tokens: " +
    response.getMetadata().getUsage().getTotalTokens());
  LOGGER.info("Time in seconds: {}", (Instant.now().toEpochMilli() -
    start.toEpochMilli()) / 1000.0);
  return response.getResult().getOutput().getContent();
}

To do that, the createTestSources(...) method is used to create the records with the source lines. Then the string contextClasses is created to replace the {contextClasses} placeholder in the prompt. If the context window is smaller than 16k tokens, the string is left empty to have enough tokens for the class under test and the test example class. Then the optional testExample string is created to replace the {testExample} placeholder in the prompt. If no testUrl is provided, the string is empty. Then the classToTest string is created to replace the {classToTest} placeholder in the prompt.

The chatClient is called to send the prompt to the AI/LLM. The prompt is selected based on the size of the context window in the contextWindowSize property. The PromptTemplate replaces the placeholders with the prepared strings.

The response is used to log the number of prompt tokens, generation tokens, and total tokens to be able to check whether the context window boundary was honored. Then the time to generate the test source is logged and the test source is returned. If the generation of the test source took more than 5 minutes, the test source is logged as a safeguard against browser timeouts.

Conclusion

Both models have been tested to generate Spring Controller tests and Spring service tests. The test URLs were:

http://localhost:8080/rest/code-generation/test?url=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/main/java/ch/xxx/moviemanager/adapter/controller/ActorController.java&testUrl=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/test/java/ch/xxx/moviemanager/adapter/controller/MovieControllerTest.java
http://localhost:8080/rest/code-generation/test?url=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/main/java/ch/xxx/moviemanager/usecase/service/ActorService.java&testUrl=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/test/java/ch/xxx/moviemanager/usecase/service/MovieServiceTest.java

The granite-code:20b LLM on Ollama has a context window of 8k tokens. That is too small to provide the contextClasses and still have enough tokens left for a response. That means the LLM just had the class under test and the test example to work with.

The deepseek-coder-v2:16b LLM on Ollama has a context window of more than 64k tokens. That enabled the addition of the contextClasses to the prompt, and it is able to work with a chain-of-thought prompt.

Results

The Granite-Code LLM was able to generate a buggy but useful basis for a Spring service test. None of the tests worked, but the missing parts could be explained by the missing context classes. The Spring Controller test was not as good. It missed too much code to be useful as a basis. The test generation took more than 10 minutes on a medium-power laptop CPU.

The Deepseek-Coder-V2 LLM was able to create a Spring service test with the majority of the tests working. That was a good basis to work with, and the missing parts were easy to fix. The Spring Controller test had more bugs but was a useful basis to start from. The test generation took less than ten minutes on a medium-power laptop CPU.

Opinion

The Deepseek-Coder-V2 LLM can help with writing tests for Spring applications. For productive use, GPU acceleration is required. The LLM is not able to create non-trivial code correctly, even with context classes available. The help an LLM can provide is very limited because LLMs do not understand the code. Code is just characters to an LLM, and without an understanding of the language syntax, the results are not impressive. The developer has to be able to fix all the bugs in the tests. That means it just saves some of the time spent typing the tests.

The experience with GitHub Copilot is similar to the Granite-Code LLM. As of September 2024, the context window is too small to do good code generation, and the code completion suggestions have to be ignored too often.

Is an LLM a help -> yes.

Is the LLM a big timesaver -> no.
