Posts Tagged ‘Java’
A Decade of Devoxx FR and Java Evolution: A Detailed Retrospective and Forward-Looking Analysis
Introduction:
The Devoxx FR conference has served as a key barometer of the Java platform’s dynamic evolution over the past ten years. This period has been marked by numerous releases, including major advancements that have significantly reshaped how we architect, develop, and deploy Java applications. This presentation offers a detailed retrospective analysis of significant announcements and the substantial changes within Java, emphasizing the critical importance of embracing these enhancements to optimize our applications for performance, maintainability, and security. Beyond a surface-level examination of syntax and API modifications, this session provides a comprehensive rationale for migrating to newer Java versions, addressing the common concerns and challenges that often accompany such transitions with practical insights and actionable strategies.
1. A Detailed Look Back: Java’s Evolution Over the Past Decade
Jean-Michel “JM” Doudoux begins the session by establishing a parallel timeline of the ten-year history of the Devoxx FR conference and Java’s continuous development. He emphasizes the importance of understanding the reception and adoption rates of different Java versions to contextualize the current state of the Java ecosystem.
Java 8:
JM highlights Java 8 as a watershed release, noting its widespread adoption and the introduction of transformative features that fundamentally changed Java development. Key features include:
- Lambda Expressions: Revolutionized functional programming in Java, enabling more concise and expressive code.
- Stream API: Introduced a powerful and efficient way to process collections of data.
- Method References: Simplified the syntax for referring to methods, further enhancing code readability.
- New Date/Time API (java.time): Addressed the shortcomings of the old java.util.Date and java.util.Calendar APIs, providing a more robust and intuitive way to handle date and time.
- Default Methods in Interfaces: Allowed adding new methods to interfaces without breaking backward compatibility. (The sketch below combines several of these features.)
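To see how these features fit together, here is a minimal, self-contained sketch (class and variable names are invented for the example) combining lambdas, streams, method references, and java.time:
[java]import java.time.LocalDate;
import java.time.Month;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Sampler {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Devoxx", "Java", "JVM");

        // Lambda + Stream API: filter and transform a collection
        List<String> shortNames = names.stream()
                .filter(n -> n.length() <= 4)       // lambda expression
                .map(String::toUpperCase)           // method reference
                .collect(Collectors.toList());
        shortNames.forEach(System.out::println);    // prints JAVA, JVM

        // java.time: a readable replacement for java.util.Date/Calendar
        LocalDate someDate = LocalDate.of(2012, Month.APRIL, 18);
        System.out.println(someDate.plusYears(10)); // 2022-04-18
    }
}
[/java]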
Java 11:
JM points out the slower adoption rate of Java 11, despite being a Long-Term Support (LTS) release, which typically encourages enterprise adoption due to extended support guarantees. Notable features include:
- HTTP Client API: Introduced a new and improved HTTP client API, supporting HTTP/2 and WebSocket (sketched below).
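A minimal synchronous GET with the new client might look like this (the URL is a placeholder):
[java]import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Demo {
    public static void main(String[] args) throws Exception {
        // The builder lets you request HTTP/2; the client falls back to HTTP/1.1 if needed
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
[/java]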
Java 17:
Characterized as a release that has garnered significant developer enthusiasm, building upon the foundation laid by previous versions and further refining the language.
Java 9:
Acknowledged as a disruptive release, primarily due to the introduction of the Java Platform Module System (JPMS), which brought modularity to Java. Doudoux discusses the profound impact of modularity on the Java ecosystem, affecting code organization, accessibility, and deployment.
Java 10, 12-16:
These releases are characterized as shorter-lived feature releases, with less widespread adoption than the LTS versions. However, they introduced valuable features such as:
- Local Variable Type Inference (var): Simplified variable declaration (introduced in Java 10).
- Switch Expressions: Made the switch construct more expressive and usable as an expression (standard since Java 14). Both appear in the sketch below.
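A short sketch showing both features together (requires Java 14 or later, where switch expressions are standard):
[java]import java.util.List;

public class ModernSyntax {
    public static void main(String[] args) {
        // Java 10: local variable type inference
        var numbers = List.of(1, 2, 3, 4); // inferred as List<Integer>

        // Java 14: switch as an expression with arrow labels
        for (var n : numbers) {
            String parity = switch (n % 2) {
                case 0 -> "even";
                default -> "odd";
            };
            System.out.println(n + " is " + parity);
        }
    }
}
[/java]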
2. Navigating Migration: Java 17 and Strategic Considerations
The presentation transitions to a practical discussion on the complexities of migrating to newer Java versions, with a strong emphasis on the benefits and challenges of migrating to Java 17. Doudoux addresses the common obstacles developers encounter when advocating for migration within their organizations, particularly the challenge of securing buy-in from operations teams and management.
Strategies for Persuasion:
The speaker offers valuable strategies to help developers build a compelling case for migration, focusing on:
- Highlighting Performance Improvements: Emphasizing the performance gains offered by newer Java versions.
- Improved Security: Stressing the importance of security updates and enhancements.
- Increased Developer Productivity: Showcasing how new language features can streamline development workflows.
- Long-Term Maintainability: Arguing that staying on older versions increases technical debt and maintenance costs in the long run.
Migration Considerations:
While a detailed, step-by-step migration guide is beyond the scope of the session, Doudoux outlines the essential high-level considerations and key steps involved in the migration process, such as:
- Dependency Analysis: Assessing compatibility with updated libraries and frameworks.
- Testing: Thoroughly testing the application after migration.
- Gradual Rollouts: Considering phased deployments to minimize risk.
3. The Future of Java: Trends and Directions
The session concludes with a concise yet insightful look at the future trajectory of the Java platform. This segment provides a glimpse into upcoming features, emerging trends, and the ongoing evolution of Java, ensuring the audience is aware of the continuous innovation within the Java ecosystem.
Summary:
This presentation provides a detailed and comprehensive overview of Java’s journey over the past decade, carefully contextualized within the parallel evolution of the Devoxx FR conference. It goes beyond a simple recitation of features, offering in-depth analysis of the impact of key advancements, practical guidance on navigating the complexities of Java migration, and a valuable perspective on the future of the platform.
[DevoxxFR 2022] Easily Calling Native Functions from Java with Project Panama
At Devoxx France 2022, Brice Dutheil gave a 28-minute talk on Project Panama, an initiative that aims to simplify calling native functions from Java without the complexity of JNI or third-party libraries. Brice, an active contributor to the Java ecosystem, introduced the Foreign Function & Memory API (JEP-419), showing how it bridges Java's managed world and native code written in C, Swift, or Rust. Through live-coding demonstrations, Brice illustrated Panama's potential for seamless native integrations. Follow Brice on Twitter at twitter.com/Brice_Dutheil for more Java insights.
Simplifying Native Code Integration
Brice began by explaining Project Panama's mission: connecting Java's managed environment, with its garbage collector, to the native world of C, Swift, or Rust, which sits closer to the machine. Traditionally, JNI imposed laborious steps: writing wrapper classes, loading libraries, and generating headers during builds. These processes were error-prone and time-consuming. Alternatives such as JNA and JNR improved the developer experience by generating bindings at runtime, but they were slower and less safe.
Launched in 2014, Project Panama addresses these challenges with three components: the vector APIs (not covered here), foreign function calls, and memory management. Brice focused on the Foreign Function & Memory API (JEP-419), available as an incubator module in JDK 18. Unlike JNI, Panama eliminates build-time complexity and offers near-native performance on every platform. It also introduces a robust security model that restricts dangerous operations, and future Java releases may constrain JNI itself (for example, Java 25 could require a flag to enable JNI). Brice highlighted the use of method handles and invokedynamic instructions, inspired by earlier JVM bytecode advances, to generate efficient assembly instructions for native calls.
Hands-on Demonstrations with Panama
Brice demonstrated Panama's capabilities through live coding, starting with a simple example that calls the getpid function from the C standard library. Using the SystemLinker, he performed a symbol lookup to locate getpid, created a method handle with a function descriptor defining the signature (returning a Java long), and invoked it to retrieve the process ID. This bypassed JNI's heavy machinery, requiring only a few lines of Java code. Brice stressed enabling native access with the --enable-native-access flag in JDK 18, which reinforces Panama's security model by limiting native access to specific modules.
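The incubator names used in the talk (SystemLinker, the jdk.incubator.foreign package) were renamed as the API matured. As a point of comparison, here is a sketch of the same getpid downcall against the finalized java.lang.foreign API (Java 22 and later), not the exact code shown on stage; run it with --enable-native-access=ALL-UNNAMED:
[java]import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class GetPid {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Symbol lookup in the C standard library, then a downcall method handle
        MethodHandle getpid = linker.downcallHandle(
                linker.defaultLookup().find("getpid").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_INT)); // C pid_t maps to a Java int here
        int pid = (int) getpid.invokeExact();
        System.out.println("pid = " + pid);
    }
}
[/java]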
He then presented a more advanced example using the crypto_box function from the Libsodium cryptographic library, which is portable to platforms such as Android. Brice allocated memory segments with a ResourceScope and a NativeAllocator, guaranteeing memory safety by releasing resources automatically after use, unlike JNI, which depends on the garbage collector. The ResourceScope prevents memory leaks, a significant improvement over traditional native buffers. Brice also covered calling Swift code through C-compatible interfaces, demonstrating Panama's versatility.
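In the finalized API, ResourceScope and NativeAllocator became Arena. A minimal sketch of the deterministic release Brice described, assuming Java 22 or later:
[java]import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class NativeMemoryDemo {
    public static void main(String[] args) {
        // A confined arena frees every segment it allocated when closed,
        // deterministically, instead of waiting for the garbage collector
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment buffer = arena.allocate(32); // 32 bytes of native memory
            buffer.set(ValueLayout.JAVA_FLOAT, 0, 3.14f);
            System.out.println(buffer.get(ValueLayout.JAVA_FLOAT, 0));
        } // native memory released here
    }
}
[/java]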
Tooling and Future Potential
Brice introduced jextract, a Panama tool that generates Java mappings from C/C++ headers, simplifying the integration of libraries such as Blake3, a high-performance hash function written in Rust. In a demo, he showed how jextract produced Panama-compatible bindings for Blake3's data structures and functions, letting Java developers tap native performance without hand-written bindings. Despite a few hiccups, the demo underscored Panama's potential for transparent native integrations.
Brice concluded by highlighting Panama's advantages: simplicity, speed, cross-platform compatibility, and stronger memory safety. He noted its continuing evolution, with JEP-419 incubating in JDK 18 and a second preview planned for JDK 19. For developers of desktop applications or critical systems, Panama offers a powerful way to exploit OS-specific functions or optimized libraries such as Libsodium. Brice encouraged the audience to experiment with Panama and to reach out with questions on Twitter.
[DevoxxFR 2018] Java in Docker: Best Practices for Production
The practice of running Java applications within Docker containers has become widely adopted in modern software deployment, yet it is not devoid of potential challenges, particularly when transitioning to production environments. Charles Sabourdin, a freelance architect, and Jean-Christophe Sirot, an engineer at Docker, collaborated at DevoxxFR2018 to share their valuable experiences and disseminate best practices for optimizing Java applications inside Docker containers. Their insightful talk directly addressed common and often frustrating issues, such as containers crashing unexpectedly, applications consuming excessive RAM leading to node instability, and encountering CPU throttling. They offered practical solutions and configurations aimed at ensuring smoother and more reliable production deployments for Java workloads.
Navigating Common Pitfalls: Why Operations Teams May Approach Java Containers with Caution
The presenters initiated their session with a touch of humor, explaining why operations teams might exhibit a degree of apprehension when tasked with deploying a containerized Java application into a production setting. It’s a common scenario: containers that perform flawlessly on a developer’s local machine can begin to behave erratically or fail outright in production. This discrepancy often stems from a fundamental misunderstanding of how the Java Virtual Machine (JVM) interacts with the resource limits imposed by the container’s control groups (cgroups). Several key problems frequently surface in this context. Perhaps the most common is memory mismanagement; the JVM, particularly older versions, might not be inherently aware of the memory limits defined for its container by the cgroup. This lack of awareness can lead the JVM to attempt to allocate and use more memory than has been allocated to the container by the orchestrator or runtime. Such overconsumption inevitably results in the container being abruptly terminated by the operating system’s Out-Of-Memory (OOM) killer, a situation that can be difficult to diagnose without understanding this interaction.
Similarly, CPU resource allocation can present challenges. The JVM might not accurately perceive the CPU resources available to it within the container, such as CPU shares or quotas defined by cgroups. This can lead to suboptimal decisions in sizing internal thread pools (like the common ForkJoinPool or garbage collection threads) or can cause the application to experience unexpected CPU throttling, impacting performance. Another frequent issue is Docker image bloat. Overly large Docker images not only increase deployment times across the infrastructure but also expand the potential attack surface by including unnecessary libraries or tools, thereby posing security vulnerabilities. The talk aimed to equip developers and operations personnel with the knowledge to anticipate and mitigate these common pitfalls. During the presentation, a demonstration application, humorously named “ressources-munger,” was used to simulate these problems, clearly showing how an application could consume excessive memory leading to an OOM kill by Docker, or how it might trigger excessive swapping if not configured correctly, severely degrading performance.
JVM Memory Management and CPU Considerations within Containers
A significant portion of the discussion was dedicated to the intricacies of JVM memory management within the containerized environment. Charles and Jean-Christophe elaborated that older JVM versions, specifically those prior to Java 8 update 131 and Java 9, were not inherently “cgroup-aware”. This lack of awareness meant that the JVM’s default heap sizing heuristics—for example, typically allocating up to one-quarter of the physical host’s memory for the heap—would be based on the total resources of the host machine rather than the specific limits imposed on the container by its cgroup. This behavior is a primary contributor to unexpected OOM kills when the container’s actual memory limit is much lower than what the JVM assumes based on the host.
Several best practices were shared to address these memory-related issues effectively. The foremost recommendation is to use cgroup-aware JVM versions. Modern Java releases, particularly Java 8 update 191 and later, and Java 10 and newer, incorporate significantly improved cgroup awareness. For older Java 8 updates (specifically 8u131 to 8u190), experimental flags such as -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap can be employed to enable the JVM to better respect container memory limits. In Java 10 and subsequent versions, this behavior became standard and often requires no special flags.
However, even with cgroup-aware JVMs, explicitly setting the heap size using parameters like -Xms for the initial heap size and -Xmx for the maximum heap size is frequently a recommended practice for predictability and control. Newer JVMs also offer options like -XX:MaxRAMPercentage, allowing for more dynamic heap sizing relative to the container's allocated memory.
It's crucial to understand that the JVM's total memory footprint extends beyond just the heap; it also requires memory for metaspace (which replaced PermGen in Java 8+), thread stacks, native libraries, and direct memory buffers. Therefore, when allocating memory to a container, it is essential to account for this total footprint, not merely the -Xmx value. A common guideline suggests that the Java heap might constitute around 50-75% of the total memory allocated to the container, with the remainder reserved for these other essential JVM components and any other processes running within the container. Tuning metaspace parameters, such as -XX:MetaspaceSize and -XX:MaxMetaspaceSize, can also prevent excessive native memory consumption, particularly in applications that dynamically load many classes.
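A tiny diagnostic class makes the JVM's perception of its container visible; this is a generic sketch, not code from the talk:
[java]public class ContainerFacts {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // With a cgroup-aware JVM these reflect the container limits,
        // not the host's physical resources
        System.out.println("CPUs visible to the JVM : " + rt.availableProcessors());
        System.out.println("Max heap (bytes)        : " + rt.maxMemory());
    }
}
[/java]
Running it under something like docker run -m 256m --cpus 1 quickly reveals whether the JVM in a given image respects the cgroup limits.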
Regarding CPU resources, the presenters noted that the JVM's perception of available processors is also influenced by its cgroup awareness. In environments where CPU resources are constrained, using flags like -XX:ActiveProcessorCount can be beneficial to explicitly inform the JVM about the number of CPUs it should consider for sizing its internal thread pools, such as the common ForkJoinPool or the threads used for garbage collection.
Optimizing the Docker image itself is another critical aspect of preparing Java applications for production. This involves choosing a minimal base image, such as alpine-jre, distroless, or official “slim” JRE images, instead of a full operating system distribution, to reduce the image size and potential attack surface. Utilizing multi-stage builds in the Dockerfile is a highly recommended technique; this allows developers to use a larger image containing build tools like Maven or Gradle and a full JDK in an initial stage, and then copy only the necessary application artifacts (like the JAR file) and a minimal JRE into a final, much smaller runtime image. Furthermore, being mindful of Docker image layering by combining related commands in the Dockerfile where possible can help reduce the number of layers and optimize image size. For applications on Java 9 and later, tools like jlink can be used to create custom, minimal JVM runtimes that include only the Java modules specifically required by the application, further reducing the image footprint.
The session strongly emphasized that a collaborative approach between development and operations teams, combined with a thorough understanding of both JVM internals and Docker containerization principles, is paramount for successfully and reliably running Java applications in production environments.
Links:
- Docker Official Website
- OpenJDK Docker Hub Official Images
- Understanding JVM Memory Management (Baeldung)
Hashtags: #Java #Docker #JVM #Containerization #DevOps #Performance #MemoryManagement #DevoxxFR2018 #CharlesSabourdin #JeanChristopheSirot #BestPractices #ProductionReady #CloudNative
[DevoxxUS2017] 55 New Features in JDK 9: A Comprehensive Overview
At DevoxxUS2017, Simon Ritter, Deputy CTO at Azul Systems, delivered a detailed exploration of the 55 new features in JDK 9, with a particular focus on modularity through Project Jigsaw. Simon, a veteran Java evangelist, provided a whirlwind tour of the enhancements, categorizing them into features, standards, JVM internals, specialized updates, and housekeeping changes. His presentation equipped developers with the knowledge to leverage JDK 9’s advancements effectively. This post examines the key themes of Simon’s talk, highlighting how these features enhance Java’s flexibility, performance, and maintainability.
Modularity and Project Jigsaw
The cornerstone of JDK 9 is Project Jigsaw, which introduces modularity to the Java platform. Simon explained that the traditional rt.jar file, containing over 4,500 classes, has been replaced with 94 modular components in the jmods directory. This restructuring encapsulates private APIs, such as sun.misc.Unsafe, to improve security and maintainability, though it poses compatibility challenges for libraries relying on these APIs. To mitigate this, Simon highlighted options like the --add-exports and --add-opens flags, as well as a “big kill switch” (--permit-illegal-access) to disable modularity for legacy applications. The jlink tool further enhances modularity by creating custom runtimes with only the necessary modules, optimizing deployment for specific applications.
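For illustration, a minimal module declaration might look like this (module and package names are invented):
[java]// module-info.java, at the root of the source tree
module com.example.app {
    requires java.sql;           // read another module's exported packages
    exports com.example.app.api; // expose only this package to consumers
}
[/java]
jlink can then assemble a runtime containing only these modules and their transitive dependencies.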
Enhanced APIs and Developer Productivity
JDK 9 introduces several API improvements to streamline development. Simon showcased factory methods for collections, allowing developers to create immutable collections with concise syntax, such as List.of() or Set.of(). The Streams API has been enhanced with methods like takeWhile, dropWhile, and ofNullable, improving expressiveness in data processing. Additionally, the introduction of jshell, an interactive REPL, enables rapid prototyping and experimentation. These enhancements reduce boilerplate code and enhance developer productivity, making Java more intuitive and efficient for modern application development.
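A small sketch of these additions (expected output noted in comments):
[java]import java.util.List;
import java.util.Set;
import java.util.stream.Stream;

public class Jdk9Apis {
    public static void main(String[] args) {
        // Factory methods produce compact immutable collections
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        Set<String> tags = Set.of("java", "jdk9");
        System.out.println(tags);

        // takeWhile/dropWhile slice an ordered stream by predicate
        numbers.stream().takeWhile(n -> n < 4).forEach(System.out::println); // 1 2 3
        numbers.stream().dropWhile(n -> n < 4).forEach(System.out::println); // 4 5

        // ofNullable yields an empty stream for null, a one-element stream otherwise
        System.out.println(Stream.ofNullable(null).count()); // 0
    }
}
[/java]
The same snippets can be typed line by line into jshell for instant feedback.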
JVM Internals and Performance
Simon delved into JVM enhancements, including improvements to the G1 garbage collector, which is now the default in JDK 9. The G1 collector offers better performance for large heaps, addressing limitations of the Concurrent Mark Sweep collector. Other internal improvements include a new process API for accessing operating system process details and a directive file for controlling JIT compiler behavior. These changes enhance runtime efficiency and provide developers with greater control over JVM performance, ensuring Java remains competitive for high-performance applications.
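For example, the new process API makes process introspection a one-liner; a minimal sketch:
[java]public class ProcessInfo {
    public static void main(String[] args) {
        // JDK 9 process API: no more shelling out to platform-specific tools
        ProcessHandle self = ProcessHandle.current();
        System.out.println("pid     : " + self.pid());
        self.info().command().ifPresent(cmd -> System.out.println("command : " + cmd));
    }
}
[/java]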
Housekeeping and Deprecations
JDK 9 includes significant housekeeping changes to streamline the platform. Simon highlighted the new version string format, adopting semantic versioning (major.minor.security.patch) for clearer identification. The directory structure has been flattened, eliminating the JRE subdirectory and tools.jar, with configuration files centralized in the conf directory. Deprecated APIs, such as the applet API and certain garbage collection options, have been removed to reduce maintenance overhead. These changes simplify the JDK’s structure, improving maintainability while requiring developers to test applications for compatibility.
Standards and Specialized Features
Simon also covered updates to standards and specialized features. The HTTP/2 client, introduced as an incubator module, allows developers to test and provide feedback before it becomes standard. Other standards updates include support for Unicode 8.0 and the deprecation of SHA-1 certificates for enhanced security. Specialized features, such as the annotations pipeline and parser API, improve the handling of complex annotations and programmatic interactions with the compiler. These updates ensure Java aligns with modern standards while offering flexibility for specialized use cases.
[DevoxxBE2013] Architecting Android Applications with Dagger
Jake Wharton, an Android engineering luminary at Square, champions Dagger, a compile-time dependency injector revolutionizing Java and Android modularity. Creator of Retrofit and Butter Knife, Jake elucidates Dagger’s divergence from reflection-heavy alternatives like Guice, emphasizing its speed and testability. His session overviews injection principles, Android-specific scoping, and advanced utilities like Lazy and Assisted Injection, arming developers with patterns for clean, verifiable code.
Dagger, Jake stresses, decouples class behaviors from dependencies, fostering reusable, injectable components. Through live examples, he builds a Twitter client, showcasing modules for API wrappers and HTTP clients, ensuring seamless integration.
Dependency Injection Fundamentals
Jake defines injection as externalizing object wiring, promoting loose coupling. He contrasts manual factories with Dagger’s annotation-driven graphs, where @Inject fields auto-resolve dependencies.
This pattern, Jake demonstrates, simplifies testing—mock modules swap implementations effortlessly, isolating units.
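To make the pattern concrete, here is a hypothetical sketch in the Dagger 1 style of the era (all class, interface, and module names are invented):
[java]import javax.inject.Inject;
import javax.inject.Singleton;
import dagger.Module;
import dagger.ObjectGraph;
import dagger.Provides;

interface HttpApi { void fetchLatest(); }

// The class declares what it needs; Dagger wires it from the graph
class TweetTimeline {
    @Inject HttpApi api; // field injection, resolved at graph creation

    void refresh() { api.fetchLatest(); }
}

@Module(injects = TweetTimeline.class)
class NetworkModule {
    @Provides @Singleton HttpApi provideApi() {
        return () -> System.out.println("fetching tweets...");
    }
}

class App {
    public static void main(String[] args) {
        TweetTimeline timeline = new TweetTimeline();
        ObjectGraph.create(new NetworkModule()).inject(timeline);
        timeline.refresh();
        // In a test, creating the graph with a mock module swaps the binding
    }
}
[/java]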
Dagger in Android Contexts
Android’s lifecycle demands scoping, Jake explains: @Singleton for app-wide instances, activity-bound for UI components. He constructs an app graph, injecting Twitter services into activities.
Fragments and services, he notes, inherit parent scopes, minimizing boilerplate while preserving encapsulation.
Advanced Features and Utilities
Dagger’s extras shine: @Lazy defers creation, @Assisted blends factories with injection for parameterized objects. Jake demos provider methods in modules, binding interfaces dynamically.
JSR-330 compliance, augmented by @Module, ensures portability, though Jake clarifies Dagger’s compile-time limits preclude Guice’s AOP dynamism.
Testing and Production Tips
Unit tests leverage Mockito for mocks, Jake illustrates, verifying injections without runtime costs. Production graphs, he advises, tier via subcomponents, optimizing memory.
Dagger’s reflection-free speed, Jake concludes, suits resource-constrained Android, with Square’s hiring call underscoring real-world impact.
[DevoxxBE2013] MongoDB for JPA Developers
Justin Lee, a seasoned Java developer and senior software engineer at Squarespace, guides Java EE developers through the transition to MongoDB, a leading NoSQL database. With nearly two decades of experience, including contributions to GlassFish’s WebSocket implementation and the JSR 356 expert group, Justin illuminates MongoDB’s paradigm shift from relational JPA to document-based storage. His session introduces MongoDB’s structure, explores data mapping with the Java driver and Morphia, and demonstrates adapting a JPA application to MongoDB’s flexible model.
MongoDB’s schemaless design challenges traditional JPA conventions, offering dynamic data interactions. Justin addresses performance, security, and integration, debunking myths about data loss and injection risks, making MongoDB accessible for Java developers seeking scalable, modern solutions.
Understanding MongoDB’s Document Model
Justin introduces MongoDB’s core concept: documents stored as JSON-like BSON objects, replacing JPA’s rigid tables. He demonstrates collections, where documents vary in structure, offering flexibility over fixed schemas.
This approach, Justin explains, suits dynamic applications, allowing developers to evolve data models without migrations.
Mapping JPA to MongoDB with Morphia
Using Morphia, Justin adapts a JPA application, mapping entities to documents. He shows annotating Java classes to define collections, preserving object-oriented principles. A live example converts a JPA entity to a MongoDB document, maintaining relationships via references.
Morphia, Justin notes, simplifies integration, bridging JPA’s structured queries with MongoDB’s fluidity.
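A sketch of what such a mapping can look like with classic Morphia annotations (entity names and data are invented, and Morphia's package coordinates vary across versions; org.mongodb.morphia is used here):
[java]import com.mongodb.MongoClient;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Reference;

@Entity("publishers")
class Publisher {
    @Id ObjectId id;
    String name;
}

// A JPA-style entity mapped to the "authors" collection
@Entity("authors")
class Author {
    @Id ObjectId id;                // replaces JPA's @Id/@GeneratedValue
    String name;
    @Reference Publisher publisher; // stored as a reference, not embedded
}

class MorphiaDemo {
    public static void main(String[] args) {
        Morphia morphia = new Morphia();
        morphia.map(Publisher.class, Author.class);
        Datastore ds = morphia.createDatastore(new MongoClient(), "library");

        Publisher publisher = new Publisher();
        publisher.name = "Devoxx Press"; // hypothetical data
        ds.save(publisher);

        Author author = new Author();
        author.name = "Justin";
        author.publisher = publisher;
        ds.save(author);                // no schema migration needed first
    }
}
[/java]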
Data Interaction and Performance Tuning
Justin explores MongoDB’s query engine, demonstrating CRUD operations via the Java driver. He highlights performance trade-offs: write concerns adjust speed versus durability. A demo shows fast writes with minimal safety, scaling to secure, slower operations.
No reported data loss bugs, Justin assures, bolster confidence in MongoDB’s reliability for enterprise use.
Security Considerations and Best Practices
Addressing security, Justin evaluates injection risks. MongoDB’s query engine resists SQL-like attacks, but he cautions against $where clauses executing JavaScript, which could expose vulnerabilities if misused.
Best practices include sanitizing inputs and leveraging Morphia’s type-safe queries, ensuring robust, secure applications.
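A short sketch of the contrast with the Java driver of that era (query shapes are illustrative):
[java]import com.mongodb.BasicDBObject;

class QuerySafety {
    // Safe: user input is bound as a BSON value, never parsed as code
    static BasicDBObject byName(String userInput) {
        return new BasicDBObject("name", userInput);
    }

    // Risky: $where evaluates JavaScript, so concatenating input invites injection
    static BasicDBObject byNameWithWhere(String userInput) {
        return new BasicDBObject("$where", "this.name == '" + userInput + "'");
    }
}
[/java]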
Difference between wait() and sleep() in Java
Today, in an interview, I was also asked the following question: in Java, what is the difference between the methods wait() and sleep()?
First of all, wait() is a method of Object, whereas sleep() is a static method of Thread.
More importantly: Thread.sleep() suspends the current thread for a given time while keeping any monitors it holds. wait() must be called from a synchronized block: it releases the monitor and suspends the thread until notify() or notifyAll() is called on the same object, or until an optional timeout elapses, as the sketch below shows.
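A minimal sketch illustrating both behaviors:
[java]public class WaitVsSleep {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {
                    try {
                        lock.wait(); // releases the monitor while suspended
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up by notify()");
            }
        });
        waiter.start();

        Thread.sleep(500); // suspends the current thread, keeps any held locks
        synchronized (lock) {
            ready = true;
            lock.notify(); // wakes one thread waiting on this monitor
        }
    }
}
[/java]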
“synchronized” in a block vs. on a method
Today, in a recruiting interview, I was asked the following question: what is the difference between the Java reserved word synchronized used on a method and the same word used in a block? Fortunately, I knew the answer:
Indeed, synchronized always locks on an object. When an instance method is synchronized, the current object (this) is the lock.
Eg: this piece of code:
[java] public synchronized void foo(){
System.out.println("hello world!!!");
}
[/java]
is equivalent to that:
[java] public void foo(){
synchronized (this) {
System.out.println("hello world!!!");
}
}
[/java]
Besides, when synchronized is used on a static method, the class itself is the locker.
Eg: this piece of code:
[java] public static synchronized void goo() {
System.out.println("Chuck Norris");
}
[/java]is equivalent to that:
[java] public static void goo() {
synchronized (MyClass.class) {
System.out.println("Chuck Norris");
}
}
[/java]
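A common reason to prefer the block form is to synchronize on a private lock object instead of this, so that external code holding a reference to your instance cannot interfere with your locking:
[java]public class Counter {
    private final Object lock = new Object(); // invisible to callers
    private int count;

    public void increment() {
        synchronized (lock) { // narrower and safer than locking on "this"
            count++;
        }
    }
}
[/java]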
How to export Oracle DB content to DBUnit XML flatfiles?
Case
From an Agile and TDD viewpoint, performing unit tests on DAOs is a requirement. Sometimes, instead of using DBUnit datasets “out of the box”, the developer needs to test against actual data. In the same vein, when a bug appears in production, isolating and reproducing the issue is a smart way to investigate and, along the way, fix it.
So, how do you export actual data from an Oracle DB (or even MySQL, Sybase, DB2, etc.) to a DBUnit dataset as a flat XML file?
Here is a Runtime Test I wrote on this subject:
Fix
Spring
Edit the following Spring context file, setting the login, password, etc.
[xml]
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
<!-- don’t forget to write this; otherwise the application will miss the driver class name, and therefore the test will fail -->
<bean id="driverClassForName" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
<property name="targetClass" value="java.lang.Class"/>
<property name="targetMethod" value="forName"/>
<property name="arguments">
<list>
<value>oracle.jdbc.driver.OracleDriver</value>
</list>
</property>
</bean>
<bean id="connexion" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean"
depends-on="driverClassForName">
<property name="targetClass" value="java.sql.DriverManager"/>
<property name="targetMethod" value="getConnection"/>
<property name="arguments">
<list>
<value>jdbc:oracle:thin:@host:1234:SCHEMA</value>
<value>myLogin</value>
<value>myPassword</value>
</list>
</property>
</bean>
<bean id="databaseConnection" class="org.dbunit.database.DatabaseConnection">
<constructor-arg ref="connexion"/>
</bean>
<bean id="queryDataSet" class="org.dbunit.database.QueryDataSet">
<constructor-arg ref="databaseConnection"/>
</bean>
</beans>[/xml]
The bean driverClassForName does not appear to be used; however, if Class.forName("oracle.jdbc.driver.OracleDriver") is not called, the test will raise an exception.
To ensure driverClassForName is created before the connexion bean, I added the attribute depends-on="driverClassForName". The other beans will be created after connexion, since Spring deduces the required creation order from the explicit dependency tree.
Java
[java]import java.io.FileOutputStream;
import java.io.IOException;

import junit.framework.TestCase;
import org.dbunit.database.QueryDataSet;
import org.dbunit.dataset.DataSetException;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.junit.Before;
import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Oracle2DBUnitExtractor extends TestCase {
    private QueryDataSet queryDataSet;
@Before
public void setUp() throws Exception {
final ApplicationContext applicationContext;
applicationContext = new ClassPathXmlApplicationContext(
"lalou/jonathan/Oracle2DBUnitExtractor-applicationContext.xml");
assertNotNull(applicationContext);
queryDataSet = (QueryDataSet) applicationContext.getBean("queryDataSet");
}
@Test
public void testExportTablesInFile() throws DataSetException, IOException {
// add all the needed tables; list them in an order that avoids dependency issues, such as foreign key constraints
queryDataSet.addTable("MYTABLE");
queryDataSet.addTable("MYOTHERTABLE");
queryDataSet.addTable("YETANOTHERTABLE");
// Destination XML file into which data needs to be extracted
FlatXmlDataSet.write(queryDataSet, new FileOutputStream("myProject/src/test/runtime/lalou/jonathan/output-dataset.xml"));
}
}[/java]
How to Read a BLOB for a Human Being?
Case
I have had to access a BLOB and read its content. On principle, I dislike using binary objects, which do not lend themselves to easy tracing and auditing. Anyway, in my case, floats are stored in a BLOB, and I need to read them in order to validate my current development.
There are many ways to read the content of a BLOB. I used two: SQL and Java.
SQL
Start TOAD for Oracle; you can run queries similar to this one:
[sql]-- table and column names below are placeholders
SELECT UTL_RAW.cast_to_binary_float (
         DBMS_LOB.substr (myrecord.myblobfield,
                          4,                          -- read 4 bytes (one float)
                          1 + (myrecord.myindex * 4)  -- offset of the n-th float
                         )
       ) AS myfloatvalue
  FROM mytable myrecord
 WHERE myrecord.mytableid = 123456; [/sql]
You can also run an anonymous PL/SQL block similar to this:
[sql]
DECLARE
blobAsVariable BLOB;
my_vr RAW (4);
blobValue FLOAT;
bytelen NUMBER := 4;
v_index NUMBER := 5;
jonathan RAW (4);
loopLength INT;
BEGIN
SELECT myField
INTO blobAsVariable
FROM myTable
WHERE tableid = (5646546846);
DBMS_LOB.READ (blobAsVariable, bytelen, 1, jonathan);
loopLength := UTL_RAW.cast_to_binary_integer (jonathan);
FOR rec IN 1 .. loopLength
LOOP
DBMS_LOB.READ (blobAsVariable, bytelen, v_index, my_vr);
blobValue := UTL_RAW.cast_to_binary_float (my_vr);
v_index := v_index + 4;
DBMS_OUTPUT.put_line (TO_CHAR (blobValue));
END LOOP;
END;[/sql]
Java
I am still not sure I qualify as a DBA expert; indeed, I am convinced I am more fluent in Java than in PL/SQL 😉
Create a Spring configuration file, let’s say BlobRuntimeTest-applicationContext.xml:
[xml]<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
<!-- $Id: BlobRuntimeTest-applicationContext.xml $ -->
<bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
<property name="url" value="jdbc:oracle:thin:@myDBserver:1234:MY_SCHEMA"/>
<property name="username" value="jonathan"/>
<property name="password" value="lalou"/>
<property name="initialSize" value="2"/>
<property name="minIdle" value="2"/>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"/>
</bean>
</beans>[/xml]
Now create a runtime test:
[java]/**
 * User: Jonathan Lalou
 * Date: Aug 7, 2011
 * Time: 5:22:33 PM
 * $Id: BlobRuntimeTest.java $
 */
import java.sql.Blob;

import junit.framework.TestCase;
import org.apache.commons.lang.builder.ToStringBuilder;
import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jdbc.core.JdbcTemplate;

public class BlobRuntimeTest extends TestCase {
    private static final Logger LOGGER = Logger.getLogger(BlobRuntimeTest.class);
private static final String TABLE = "jonathanTable";
private static final String PK_FIELD = "jonathanTablePK";
private static final String BLOB_FIELD = "myBlobField";
private static final int[] PK_VALUES = {123, 456, 789};
private ApplicationContext applicationContext;
private JdbcTemplate jdbcTemplate;
@Before
public void setUp() throws Exception {
applicationContext = new ClassPathXmlApplicationContext(
"lalou/jonathan/the/cownboy/BlobRuntimeTest-applicationContext.xml");
assertNotNull(applicationContext);
jdbcTemplate = (JdbcTemplate) applicationContext.getBean("jdbcTemplate");
assertNotNull(jdbcTemplate);
}
@After
public void tearDown() throws Exception {
}
@Test
public void testGetArray() throws Exception {
for (int pk_value : PK_VALUES) {
final Blob blob;
final byte[] bytes;
final float[] floats;
blob = (Blob) jdbcTemplate.queryForObject("select " + BLOB_FIELD + " from " + TABLE + " where " + PK_FIELD + " = " + pk_value, Blob.class);
assertNotNull(blob);
bytes = blob.getBytes(1, (int) blob.length());
// process your blob: unzip, read, concat, add, etc..
// floats = ….
LOGGER.info("Blob size: " + floats.length);
LOGGER.info(ToStringBuilder.reflectionToString(floats));
}
}
}
[/java]