Search results for: optical techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8055

5115 Foundation Settlement Determination: A Simplified Approach

Authors: Adewoyin O. Olusegun, Emmanuel O. Joshua, Marvel L. Akinyemi

Abstract:

The heterogeneous nature of the subsurface requires the use of factual field information rather than assumptions or generalized equations. There is therefore a need to determine the actual rate of settlement possible in a soil before structures are built on it. This information helps in selecting the type of foundation design and the kind of reinforcement that will be necessary in construction. This paper presents a simplified and faster approach for determining foundation settlement in any type of soil, using real field data acquired from seismic refraction techniques and cone penetration tests. The approach was also able to determine the depth of settlement of each soil stratum. The results obtained revealed the settlement time and depth of settlement possible for each stratum.

Keywords: heterogeneous, settlement, foundation, seismic, technique

Procedia PDF Downloads 439
5114 The Hydrotrope-Mediated, Low-Temperature, Aqueous Dissolution of Maize Starch

Authors: Jeroen Vinkx, Jan A. Delcour, Bart Goderis

Abstract:

Complete aqueous dissolution of starch is notoriously difficult. A high-temperature autoclaving process is necessary, followed by cooling the solution below its boiling point. The cooled solution is inherently unstable over time. Gelation and retrogradation processes, along with aggregation-induced by undissolved starch remnants, result in starch precipitation. We recently observed the spontaneous gelatinization of native maize starch (MS) in aqueous sodium salicylate (NaSal) solutions at room temperature. A hydrotropic mode of solubilization is hypothesized. Differential scanning calorimetry (DSC) and polarized optical microscopy (POM) of starch dispersions in NaSal solution were used to demonstrate the room temperature gelatinization of MS at different concentrations of MS and NaSal. The DSC gelatinization peak shifts to lower temperatures, and the gelatinization enthalpy decreases with increasing NaSal concentration. POM images confirm the same trend through the disappearance of the ‘Maltese cross’ interference pattern of starch granules. The minimal NaSal concentration to induce complete room temperature dissolution of MS was found to be around 15-20 wt%. The MS content of the dispersion has little influence on the amount of NaSal needed to dissolve it. The effect of the NaSal solution on the MS molecular weight was checked with HPSEC. It is speculated that, because of its amphiphilic character, NaSal enhances the solubility of MS in water by association with the more hydrophobic MS moieties, much like urea, which has also been used to enhance starch dissolution in alkaline aqueous media. As such small molecules do not tend to form micelles in water, they are called hydrotropes rather than surfactants. A minimal hydrotrope concentration (MHC) is necessary for the hydrotropes to structure themselves in water, resulting in a higher solubility of MS. This is the case for the system MS/NaSal/H₂O. 
Further investigations into the putative hydrotropic dissolution mechanism are necessary.

Keywords: hydrotrope, dissolution, maize starch, sodium salicylate, gelatinization

Procedia PDF Downloads 174
5112 The Risk of Ground Movements After Digging Two Parallel Vertical Tunnels in Urban Areas

Authors: Djelloul Chafia, Demagh Rafik, Kareche Toufik

Abstract:

Human activities carried out without precautions accelerate the degradation of the soil structure and reduce its resistance. Operations such as tunnel construction may exert a more or less permanent influence on the ground that surrounds them; because these structures alter the soil, it is necessary to predict their impacts through suitable measures. This research is a numerical analysis of the risks and effects due to the weakening of the soil after digging two parallel vertical circular tunnels in urban areas, and it suggests forecasting techniques based essentially on the organization of underground space. The simulations are performed using the finite-difference code FLAC in a two-dimensional case and with an elasto-plastic behavior of the soil.

Keywords: soil, weakening, degradation, prevention, tunnel

Procedia PDF Downloads 554
5112 Survey on Big Data Stream Classification by Decision Tree

Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi

Abstract:

Nowadays, the development of computer technology and its recent applications provide access to new types of data that have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirements in classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.

Keywords: big data, data streams, classification, decision tree

Procedia PDF Downloads 512
5111 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach

Authors: Joseph C. Chen

Abstract:

Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, only a few principles have been established to optimize a machine vision system and help it function more accurately in industrial practice, and most existing design techniques for improving its accuracy are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. The research follows a case study showing how the Six Sigma DMAIC methodology has been put into use.

Keywords: DMAIC, machine vision system, process capability, Taguchi Parameter Design

Procedia PDF Downloads 426
5110 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices

Authors: Amani Abdallah, Isam Shahrour

Abstract:

The quality of drinking water is a major public health concern. Control of this quality is generally performed in the laboratory, which requires a long time. This type of control is not adapted to accidental pollution from sudden events, which can have serious consequences for population health. Therefore, it is of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for ‘Smart and Sustainable Cities’, with a particular focus on the supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille1), including innovative equipment for real-time detection of abnormal events such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and their effectiveness on a network prototype of 50 m length, including an evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integrated with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological (Escherichia coli) contamination. All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.

Keywords: distribution system, drinking water, refraction index, sensor, real-time

Procedia PDF Downloads 345
5109 Characterization of Particle Charge from Aerosol Generation Process: Impact on Infrared Signatures and Material Reactivity

Authors: Erin M. Durke, Monica L. McEntee, Meilu He, Suresh Dhaniyala

Abstract:

Aerosols are one of the most important and significant surfaces in the atmosphere. They can influence weather, the absorption and reflection of light, and the reactivity of atmospheric constituents. A notable feature of aerosol particles is the presence of a surface charge, a characteristic imparted via the aerosolization process. The existence of charge can complicate the interrogation of aerosol particles, so many researchers remove or neutralize the charge before characterization. However, the charge is present in real-world samples and likely affects the physical and chemical properties of an aerosolized material. In our studies, we aerosolized different materials in an attempt to characterize the charge imparted via the aerosolization process and determine what impact it has on the aerosolized materials’ properties. The metal oxides TiO₂ and SiO₂ were aerosolized expulsively and then characterized, using several different techniques, in an effort to determine the surface charge imparted upon the particles via the aerosolization process. Particle charge distribution measurements were conducted with a custom scanning mobility particle sizer. The results indicated that expulsive generation of 0.2 µm SiO₂ particles produced aerosols with upwards of 30 charges on the surface of a particle. Determination of the degree of surface charging led to the use of non-traditional techniques to explore the impact of additional surface charge on the overall reactivity of the metal oxides, specifically TiO₂. TiO₂ was aerosolized, again expulsively, onto a gold-coated tungsten mesh, which was then evaluated with transmission infrared spectroscopy in an ultra-high vacuum environment. The TiO₂ aerosols were exposed to O₂, H₂, and CO, respectively.
Exposure to O₂ resulted in a decrease in the overall baseline of the aerosol spectrum, suggesting O₂ removed some of the surface charge imparted during aerosolization. Upon exposure to H₂, there was no observable rise in the baseline of the IR spectrum, as is typically seen for TiO₂ due to the population of electrons into shallow trapped states and their subsequent promotion into the conduction band. This result suggests that the additional charge imparted via aerosolization fills the trapped states, so no rise is seen upon exposure to H₂. Dosing the TiO₂ aerosols with CO showed no adsorption of CO on the surface, even at lower temperatures (~100 K), indicating that the additional charge on the aerosol surface prevents CO molecules from adsorbing to the TiO₂ surface. These results suggest that the additional charge imparted via aerosolization affects the interaction with each probe gas.

Keywords: aerosols, charge, reactivity, infrared

Procedia PDF Downloads 119
5108 Methods to Measure the Quality of 2D Image Compression Techniques

Authors: Mohammed H. Rasheed, Hussein Nadhem Fadhel, Mohammed M. Siddeq

Abstract:

In this paper we propose image quality metrics that provide an accurate measure, close to the perceived quality, of the tested images. Such tools give metrics that can be used to compare the performance of image compression algorithms. Two new metrics for measuring the quality of decompressed images are proposed, based on combined data (CD) between the original and decompressed images. Compared with other metrics, e.g., PSNR and RMSE, the proposed metrics give values that most closely reflect the perception of image quality by the human eye.
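The abstract does not define the combined-data (CD) metric itself, but the baseline metrics it compares against, RMSE and PSNR, follow standard formulas. A minimal sketch with hypothetical pixel values:

```python
import numpy as np

def rmse(original, decompressed):
    # Root-mean-square error between two images of equal shape
    diff = original.astype(float) - decompressed.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, decompressed, max_value=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the original
    e = rmse(original, decompressed)
    return float('inf') if e == 0 else 20 * np.log10(max_value / e)

# Hypothetical 2x2 grayscale patches standing in for real images
a = np.array([[100, 120], [130, 140]])
b = np.array([[102, 118], [131, 139]])
print(round(rmse(a, b), 3), round(psnr(a, b), 2))  # → 1.581 44.15
```

Both metrics depend only on pixel-wise differences, which is exactly why they can disagree with human perception and motivate perceptual alternatives like the CD metric.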

Keywords: RMSE, PSNR, image quality metrics, image compression

Procedia PDF Downloads 20
5107 Pre-Shared Key Distribution Algorithms' Attacks for Body Area Networks: A Survey

Authors: Priti Kumari, Tricha Anjali

Abstract:

Body Area Networks (BANs) have emerged as the most promising technology for pervasive health care applications. Since they facilitate communication of very sensitive health data, information leakage in such networks can put human life at risk, and hence security inside BANs is a critical issue. Safe distribution and periodic refreshment of cryptographic keys are needed to ensure the highest level of security. In this paper, we focus on key distribution techniques and how they are categorized for BANs. State-of-the-art pre-shared key distribution algorithms are surveyed, and possible attacks on these algorithms are demonstrated with examples.

Keywords: attacks, body area network, key distribution, key refreshment, pre-shared keys

Procedia PDF Downloads 355
5106 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India

Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.

Abstract:

Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater, seacoast mining, etc. The geological and hydrogeological conditions, including groundwater heads and the groundwater pumping pattern in coastal areas, also influence the magnitude of seawater intrusion. This problem can, however, be remediated by preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in the salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, of the 25 groundwater samples collected from the study area, most contain a high concentration of Total Dissolved Solids (TDS) with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines, Mitiyala village, Lotpur, and Lunsapur village at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques in combination with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality, and the geophysical results for the study area, were used to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated.
AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can be further utilized as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for direct use of harvested rainwater.
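The AHP-plus-weighted-overlay workflow described above can be sketched numerically. This is a minimal illustration, not the authors' model: the three criteria, the pairwise comparison values, and the cell ratings are all hypothetical, and the principal eigenvector is approximated by row geometric means, a common AHP shortcut.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (e.g., drainage pattern, geology, water quality) on Saaty's 1-9 scale
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

# Approximate the principal eigenvector by row geometric means,
# then normalize to obtain the AHP criterion weights
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Weighted overlay: suitability score for one raster cell whose criterion
# ratings have already been reclassified to a common 1-5 scale
ratings = np.array([4, 3, 5])
score = float(weights @ ratings)
print(weights.round(3), round(score, 2))
```

In a GIS, the same dot product is applied cell by cell across the reclassified criterion rasters to produce the suitability map.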

Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion

Procedia PDF Downloads 166
5105 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, taking into account that these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to reach a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA), or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. The Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from, and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and different annealing methods were used with the aim of reaching well-ordered nanostructured composites. Additionally, the nanoparticle amount was varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Various characterization methods are nowadays used to investigate the magnetic properties of nanometer-scale electronic devices; here, two different techniques have been used to characterize the synthesized nanocomposites.
First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, taking into account that this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed with a vibrating sample magnetometer and a superconducting quantum interference device. The latter demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior at room temperature similar to that of the maghemite nanoparticles. The obtained advanced nanostructured materials could find applications in the field of dye-sensitized solar cells and electronic nanodevices.

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 257
5104 Investigation on Reducing the Bandgap in Nanocomposite Polymers by Doping

Authors: Sharvare Palwai, Padmaja Guggilla

Abstract:

Smart materials, also called responsive materials, undergo reversible physical or chemical changes in their properties as a consequence of small environmental variations. They can respond to single or multiple stimuli such as stress, temperature, moisture, electric or magnetic fields, light, or chemical compounds. Hence smart materials are the basis of many applications, including biosensors and transducers, particularly electroactive polymers. As polymers exhibit good flexibility, high transparency, easy processing, and low cost, they are promising sensor materials. Polyvinylidene fluoride (PVDF), being a ferroelectric polymer, exhibits piezoelectric and pyroelectric properties: pyroelectric materials convert heat directly into electricity, while piezoelectric materials convert mechanical energy into electricity. These characteristics of PVDF make it useful in biosensor devices and batteries. The influence of nanoparticle fillers such as lithium tantalate (LiTaO₃/LT), potassium niobate (KNbO₃/PN), and zinc titanate (ZnTiO₃/ZT) in polymer films will be studied comprehensively. Developing advanced and cost-effective biosensors is pivotal to realizing the full potential of polymer-based wireless sensor networks, which will further enable new types of self-powered applications. Finally, from the nanocomposite films with the best set of properties, the sensory elements will be designed and tested for their performance as electric generators under laboratory conditions. By characterizing the materials for their optical properties and investigating the effects of doping on the bandgap energies, the science of next-generation biosensor technologies can be advanced.

Keywords: polyvinylidene fluoride, PVDF, lithium tantalate, potassium niobate, zinc titanate

Procedia PDF Downloads 126
5103 Flexible and Color Tunable Inorganic Light Emitting Diode Array for High Resolution Optogenetic Devices

Authors: Keundong Lee, Dongha Yoo, Youngbin Tchoe, Gyu-Chul Yi

Abstract:

A light emitting diode (LED) array is an ideal optical stimulation tool for optogenetics, which controls the inhibition and excitation of specific neurons with light-sensitive ion channels or pumps. Although a fiber-optic cable with an external light source, either a laser or an LED mechanically connected to the end of the cable, has widely been used for illumination of neural tissue, a new approach using micro LEDs (µLEDs) has recently been demonstrated. The LEDs can be placed directly either on the cortical surface or within the deep brain using a penetrating depth probe. Accordingly, this method would not need a permanent opening in the skull if the LEDs are integrated with a miniature electrical power source and wireless communication. In addition, multiple color generation from a single µLED cell would make it possible to excite and/or inhibit neurons in localized regions. Here, we demonstrate flexible and color-tunable µLEDs for optogenetic device applications. The flexible and color-tunable LEDs were fabricated using multifaceted gallium nitride (GaN) nanorod arrays, with InxGa1−xN/GaN single quantum well (SQW) structures anisotropically formed on the nanorod tips and sidewalls. For various electroluminescence (EL) colors, the current injection paths through a continuous p-GaN layer were controlled by the applied bias voltage. The current was thus injected through regions of different thickness and composition, changing the color of the light the LED emits from red to blue. We believe that flexible and color-tunable µLEDs will enable us to control the activity of neurons by emitting various colors from a single µLED cell.

Keywords: light emitting diode, optogenetics, graphene, flexible optoelectronics

Procedia PDF Downloads 209
5102 Determination of Myocardial Function Using Heart Accumulated Radiopharmaceuticals

Authors: C. C. D. Kulathilake, M. Jayatilake, T. Takahashi

Abstract:

The myocardium is composed of specialized muscle that relies mainly on fatty acid and sugar metabolism, which contributes widely to heart function. Changes in the cardiac energy-producing system during heart failure have been demonstrated using autoradiography techniques. This study focused on evaluating sugar and fatty acid metabolism in the myocardium, as the cardiac energy-producing system, using heart-accumulated radiopharmaceuticals. Two sets of autoradiographs of heart cross-sections of male Lewis rats were analyzed, and the time-accumulation curves were obtained with the MATLAB image processing software to evaluate fatty acid and sugar metabolic functions.

Keywords: autoradiographs, fatty acid, radiopharmaceuticals, sugar

Procedia PDF Downloads 445
5101 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques

Authors: Svebor Secak

Abstract:

This four-year artistic research project, hosted by the University of New England, Australia, set out to experiment with non-conventional ways of presenting a language-based narrative in dance using insights from recent theoretical writing on performance, addressing the research question: how can an archival recording of a neoclassical ballet performance be transformed into a new artistic dance video by implementing postmodern philosophical concepts? The creative practice component takes the form of a dance video, Hamlet Revisited, which is a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material and produced using the resources, technicians, and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research, after which three dancers were shown the recording of the original Hamlet and then created their artistic responses to it based on their reception and appreciation of it. The dancers responded differently, based upon their diverse dancing backgrounds and life experiences. They began in the role of the audience observing the video of the original ballet and transformed into the role of the choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism, and serendipity, thereby establishing communication on a receptive reader-response basis, blending the roles of choreographer, performer, and spectator, and creating an original work of art whose significance lies in the relationship and communication between styles, old and new choreographic approaches, and artists and audiences, and in the transformation of their traditional roles and relationships.
In editing and collating, the following techniques were used with the intention of avoiding a singular narrative: fragmentation, repetition, reverse motion, multiplication of images, split screen, overlaid X-rays, image scratching, slow motion, freeze-frame, and simultaneity. Key postmodern concepts considered were deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning of the role of the author, intertextuality, and incredulity toward grand narratives, departing from the original story and thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality, which proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach, moving from strict relationships between signs to a multiplication of signifiers and considering the dance text as an open construction containing the elusive and enigmatic quality of art that leaves the interpretive position open. The second is the creation of the new work, where the author functions as the editor, aware and conscious of the interplay of disparate texts and their sources, which co-act in the mind during the creative process. It is argued here that the eclectic combination of old and new material, through constant oscillations of different discourses upon the same topic, resulted in a recent transmodern, integrationist work of art that might serve as a model for reconsidering existing choreographic creations.

Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video

Procedia PDF Downloads 251
5100 Cubical Representation of Prime and Essential Prime Implicants of Boolean Functions

Authors: Saurabh Rawat, Anushree Sah

Abstract:

Karnaugh maps (K-maps) are generally thought to be the simplest means of solving Boolean equations. The cubical representation of Boolean equations is an alternative route to a solution, which is otherwise obtained with truth tables, Boolean laws, and the various traits of Karnaugh maps. The largest possible k-cubes that exist for a given function are equivalent to its prime implicants. A technique for minimizing logic functions through cubical methods is presented; the main purpose is to raise awareness of, and utilise, the advantages of cubical techniques in the minimization of logic functions. All this is done with the aim of achieving a minimal-cost solution.
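The core cubical operation, merging two k-cubes that differ in exactly one specified bit into a larger cube until no further merges are possible, can be sketched as follows. This is a minimal Quine-McCluskey-style illustration of how the largest k-cubes (the prime implicants) emerge, not the paper's own procedure; the example function is hypothetical.

```python
from itertools import combinations

def combine(c1, c2):
    # Merge two cubes (strings over '0', '1', '-') that differ in exactly
    # one specified bit, e.g. '010' and '011' -> '01-' (a larger cube)
    diff = [i for i in range(len(c1)) if c1[i] != c2[i]]
    if len(diff) == 1 and '-' not in (c1[diff[0]], c2[diff[0]]):
        i = diff[0]
        return c1[:i] + '-' + c1[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    # Start from 0-cubes (minterms) and repeatedly merge adjacent cubes;
    # cubes that merge no further are the prime implicants
    cubes = {format(m, '0%db' % nbits) for m in minterms}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for a, b in combinations(sorted(cubes), 2):
            c = combine(a, b)
            if c:
                merged.add(c)
                used.update((a, b))
        primes |= (cubes - used)
        cubes = merged
    return primes

# f(A,B,C) = sum of minterms (0,1,2,5,6,7): six prime implicants result
print(sorted(prime_implicants([0, 1, 2, 5, 6, 7], 3)))
```

Each surviving cube such as '0-0' reads directly as a product term (here A'C'), which is how the k-cube picture connects back to minimal-cost expressions.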

Keywords: K-maps, don’t care conditions, Boolean equations, cubes

Procedia PDF Downloads 381
5099 Investigation and Identification of a Number of Precious and Semi-precious Stones Related to Bam Historical Citadel Using Micro Raman Spectroscopy and Scanning Electron Microscopy (SEM/EDX)

Authors: Nazli Darkhal

Abstract:

The use of gems and ornaments has been common in Iran since the beginning of history. The prosperity of the country, its wealth, and the interest of the people of this land in a luxurious and glorious life, combined with beauty, have always drawn attention to the gems and ornaments of the Iranian people. Iranians are famous in the world for their long history of collecting and recognizing precious stones, as the unique treasure of the national jewelry attests. Raman spectroscopy is one of the vibrational spectroscopy methods and is classified in the group of non-destructive study methods; like other methods, it has disadvantages and problems in addition to several advantages. Micro-Raman spectroscopy is a variant of Raman spectroscopy in which an optical microscope is combined with a Raman instrument to provide more capabilities and advantages than the original method. In this way, with the help of Raman spectroscopy and a light microscope, while observing more detail in different parts of a historical sample, natural or artificial pigments can be identified in a small part of it. Energy-dispersive X-ray (EDX) analysis in the scanning electron microscope is likewise based on the interaction of the electron beam with matter; the beams emitted from this interaction can be used to examine samples. In this article, in addition to introducing the micro-Raman spectroscopy method, studies were conducted on the structure of three samples of stones from the historic citadel of Bam. Using this method of study on precious and semi-precious stones requires only a short time and can provide complete information about the structure and composition of these samples. The results of the experiments and the gemology of the stones showed that the selected beads are agate and jasper and can be placed in the chalcedony group.

Keywords: bam citadel, precious and semi-precious stones, Raman spectroscopy, scanning electron microscope

Procedia PDF Downloads 124
5098 Solvent-Aided Dispersion of Tannic Acid to Enhance Flame Retardancy of Epoxy

Authors: Matthew Korey, Jeffrey Youngblood, John Howarter

Abstract:

Background and Significance: Tannic acid (TA) is a bio-based, high-molecular-weight, aromatic organic molecule that has been found to increase the thermal stability and flame retardancy of many polymer matrices when used as an additive. Although it is biologically sourced, TA is a pollutant in industrial wastewater streams, and there is a desire to find applications in which to downcycle this molecule after extraction from these streams. Additionally, epoxy thermosets have revolutionized many industries but are too flammable to be used in many applications without additives that augment their flame retardancy (FR). Many flame retardants used in epoxy thermosets are synthesized from petroleum-based monomers, leading to significant environmental impacts on the industrial scale; many of these compounds also have significant impacts on human health. Various bio-based modifiers have been developed to improve the FR of epoxy resin; however, increasing the FR of the system without trade-offs in other properties has proven challenging, especially for TA. Methodologies: In this work, TA was incorporated into the thermoset by solvent exchange using methyl ethyl ketone, a co-solvent for TA and epoxy resin. Samples were then characterized optically (UV-vis spectroscopy and optical microscopy), thermally (thermogravimetric analysis and differential scanning calorimetry), and for their flame retardancy (mass loss calorimetry). Major Findings: Compared to control samples, all samples were found to have increased thermal stability. Further, the addition of tannic acid to the polymer matrix by use of a solvent greatly increased the compatibility of the additive in epoxy thermosets. By using solvent exchange, the highest loading level of TA reported in the literature (40 wt%) was achieved in this work. Conclusions: The use of solvent exchange shows promise for circumventing the limitations of TA in epoxy.

Keywords: sustainable, flame retardant, epoxy, tannic acid

Procedia PDF Downloads 122
5097 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. In most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogously to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart based on the algorithm is proposed.
This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin film transistor-liquid crystal display process.
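The core idea, that fixing the number of enclosed training points exactly controls the empirical Type I error rate, can be illustrated with a much simplified, fixed-center stand-in for the paper's mixed integer program (which also optimizes the boundary center and supports kernels):

```python
import numpy as np

def exact_coverage_radius(X, n_inside):
    """Radius of a sphere centered at the data centroid that encloses
    exactly `n_inside` training points. This is a simplified sketch of
    the paper's idea, not its actual MIP formulation: the MIP also
    optimizes the center and can use kernel functions."""
    center = X.mean(axis=0)
    dists = np.sort(np.linalg.norm(X - center, axis=1))
    # The k-th smallest distance encloses exactly k points (no ties
    # with continuous data), so the enclosed proportion is set exactly.
    return center, dists[n_inside - 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # 200 "normal" Phase I observations
center, r = exact_coverage_radius(X, n_inside=190)  # target 5% Type I rate
inside = int((np.linalg.norm(X - center, axis=1) <= r).sum())
print(inside, round(1 - inside / len(X), 3))
```

Points whose distance from the center exceeds the radius would be signaled as out of control; by construction, exactly 10 of the 200 training points fall outside, i.e., the empirical Type I rate is exactly 0.05.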

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 170
5096 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics

Authors: A. Abbas, X. Tridon, J. Michelon

Abstract:

In the electronic and photovoltaic industries, components are made from wafers, which are stacks of thin film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges these industries face in avoiding dysfunction of their final products. Traditional pump-probe experiments, developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback. On one hand, these setups can generate and detect ultra-high ultrasound frequencies, which can be used to evaluate the adhesion quality of wafer layers. On the other hand, because of the quite long acquisition time they need to perform one measurement, these setups remain limited to point measurements when evaluating global sample quality. This can lead to misinterpretation of the sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale and to detect inhomogeneities in sample mechanical properties. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm tungsten thin film deposited on a silicon substrate. Interpretation of the reflection coefficient in terms of adhesion quality is also presented. The origin of zones exhibiting good and bad bonding is discussed.
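In picosecond ultrasonics, the film thickness and the echo round-trip time are linked through the longitudinal sound velocity of the film material. A minimal sketch of that relation for the film studied here; the tungsten sound velocity used is an assumed, approximate literature value, not a number reported in this work:

```python
def echo_round_trip_ps(thickness_m, v_long_m_s):
    """Round-trip time of the acoustic pulse in a thin film: the pulse
    launched at the surface travels down to the film/substrate interface,
    reflects, and returns, so t = 2 * d / v (converted here to ps)."""
    return 2.0 * thickness_m / v_long_m_s * 1e12  # seconds -> picoseconds

# 255 nm tungsten film; 5200 m/s is an assumed longitudinal sound
# velocity for tungsten (order-of-magnitude literature value).
t_ps = echo_round_trip_ps(255e-9, 5200.0)
print(f"expected echo delay ≈ {t_ps:.1f} ps")
```

The ~100 ps scale of this delay is what makes picosecond time resolution, and hence fast sampling schemes like ASOPS, necessary for mapping such layers.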

Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film

Procedia PDF Downloads 155
5095 The Corrosion Resistance of the 32CrMoV13 Steel Nitriding

Authors: Okba Belahssen, Lazhar Torchane, Said Benramache, Abdelouahed Chala

Abstract:

This paper presents the corrosion behavior of plasma-nitrided 32CrMoV13 steel. Two kinds of samples were tested: untreated and plasma-nitrided samples. The structure of the layers was determined by X-ray diffraction, while the morphology was observed by scanning electron microscopy (SEM). The corrosion behavior was evaluated by electrochemical techniques (potentiodynamic curves and electrochemical impedance spectroscopy). The corrosion tests were carried out in an acid chloride solution (1 M HCl). Experimental results showed that the nitrides ε-Fe2−3N and γ′-Fe4N present in the white layer are nobler than the substrate but may promote, by galvanic effect, localized corrosion through open porosity. The best corrosion protection was observed for the nitrided sample.

Keywords: plasma-nitrided, 32CrMoV13 steel, corrosion, EIS

Procedia PDF Downloads 580
5094 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming and subjective and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts.
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
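The Random Forest workflow described above, fitting a regressor on activity-level features and reading off feature importances to identify cost drivers, can be sketched as follows. The features and synthetic data are illustrative stand-ins; the study's actual dataset and feature set are not published in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical activity-level features (assumptions, not the study's data):
scope_changes = rng.integers(0, 5, n)          # number of scope changes
material_delay = rng.exponential(3.0, n)       # material delivery delay, days
planned_cost = rng.uniform(1e4, 1e5, n)        # planned activity cost
noise = rng.normal(0.0, 500.0, n)
# Synthetic overrun driven mostly by scope changes and delays
overrun = 2000 * scope_changes + 800 * material_delay + 0.01 * planned_cost + noise

X = np.column_stack([scope_changes, material_delay, planned_cost])
X_tr, X_te, y_tr, y_te = train_test_split(X, overrun, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # held-out R^2
importances = dict(zip(["scope_changes", "material_delay", "planned_cost"],
                       model.feature_importances_))
print(f"test R^2 = {r2:.2f}", importances)
```

On data with this structure, the importance ranking recovers scope changes and material delays as the dominant drivers, mirroring the kind of risk-factor identification the study reports.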

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 36
5093 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used to set the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and the update will propagate to all elements whose values depend on the tunable parameter. For instance, if parameter K=1 is defined as tunable in the code generation process and is used as the gain of a gain block in Simulink, the gain of that block equals 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without any new code generation in MATLAB), and the gain of the gain block will then change to 2. Secondly, adding a callback function in the form of a "pre-load function," "post-load function," or "start function" will not make the parameters tunable without performing a new code generation.
The MATLAB files attached as callbacks are run before the code generation is performed, and the parameters defined or calculated in them are embedded as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible, since the MATLAB files themselves are not attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation must be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask subsystem is an efficient method for changing the values of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the values of two other parameters K1 and K2, and the value of K is updated in the TwinCAT environment after the code generation, the values of K1 and K2 will also be updated (without any new code generation).

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 222
5092 Modeling and Characterization of Organic LED

Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma

Abstract:

It is well known that organic light emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission results from the recombination of electrons and holes in the emitting layer, injected from the cathode and anode, respectively. To improve luminescence efficiency, holes and electrons must be present abundantly and in balanced numbers and recombine swiftly in the emitting layer. The aim of this paper is to model polymer LEDs and small-molecule OLEDs in order to study their electrical and optical characteristics. The first simulated structure is a monolayer device, typically consisting of the poly(2-methoxy-5-(2'-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode such as Al. In the second structure, we replace MEH-PPV with tris(8-hydroxyquinolinato) aluminum (Alq3). We choose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model are used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
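The Poole-Frenkel-like mobility model mentioned above gives the carrier mobility an exponential dependence on the square root of the electric field. A minimal sketch of that field dependence; the parameter values are illustrative, not fitted MEH-PPV or Alq3 values from this work:

```python
import math

def poole_frenkel_mobility(E_V_cm, mu0_cm2_Vs, beta):
    """Poole-Frenkel-like field-dependent mobility:
    mu(E) = mu0 * exp(beta * sqrt(E)),
    where mu0 is the zero-field mobility (cm^2/Vs) and beta is the
    field-activation factor ((cm/V)^0.5)."""
    return mu0_cm2_Vs * math.exp(beta * math.sqrt(E_V_cm))

# Illustrative values only (NOT parameters from this study):
mu = poole_frenkel_mobility(E_V_cm=1e5, mu0_cm2_Vs=1e-5, beta=5e-3)
print(f"mu(1e5 V/cm) = {mu:.2e} cm^2/Vs")
```

The point of the model is visible in the numbers: at a typical operating field the mobility is enhanced severalfold over its zero-field value, which strongly affects the simulated I(V) characteristics.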

Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene

Procedia PDF Downloads 550
5091 Simultaneous Saccharification and Fermentation for D-Lactic Acid Production from Dried Distillers Grains with Solubles

Authors: Nurul Aqilah Mohd Zaini, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

D-Lactic acid production is gaining increasing attention due to the thermostable properties of its polymer, polylactic acid (PLA). In this study, D-lactic acid was produced in microbial cultures using Lactobacillus coryniformis subsp. torquens as the D-lactic acid producer and hydrolysates of Dried Distillers Grains with Solubles (DDGS) as the fermentation substrate. Prior to fermentation, DDGS was first alkaline pretreated with 5% (w/v) NaOH for 15 minutes (121 °C / ~16 psi). This led to the generation of DDGS solid residues rich in carbohydrates and especially cellulose (~52%). The carbohydrate-rich solids were then subjected to enzymatic hydrolysis with Accellerase® 1500. For Separate Hydrolysis and Fermentation (SHF), enzymatic hydrolysis was carried out at 50 °C for 24 hours, followed by D-lactic acid fermentation at 37 °C at a controlled pH of 6. The obtained hydrolysate contained 24 g/l glucose, 5.4 g/l xylose and 0.6 g/l arabinose. In the case of Simultaneous Saccharification and Fermentation (SSF), hydrolysis and fermentation were conducted in a single-step process at 37 °C and pH 5. The enzymatic hydrolysis of DDGS pretreated solids took place mostly during the lag phase of the L. coryniformis fermentation, with only a small amount of glucose consumed during the first 6 h. Once the exponential phase started, glucose accumulation decreased as the microorganism consumed glucose for D-lactic acid production. Higher concentrations of D-lactic acid were produced when the SSF approach was applied, with 28 g/l D-lactic acid after 24 h of fermentation (84.5% yield). In contrast, 21.2 g/l D-lactic acid were produced when SHF was used. The optical purity of the D-lactic acid produced in both experiments was 99.9%. Besides, approximately 2 g/l acetic acid was also generated in SHF due to lactic acid degradation after glucose depletion.
SSF proved an efficient approach to DDGS utilisation and D-lactic acid production, reducing the overall processing time and yielding sufficient D-lactic acid concentrations without the generation of fermentation by-products.

Keywords: DDGS, alkaline pretreatment, SSF, D-lactic acid

Procedia PDF Downloads 335
5090 Restoration of Steppes in Algeria: Case of the Stipa tenacissima L. Steppe

Authors: H. Kadi-Hanifi, F. Amghar

Abstract:

Steppes of arid Mediterranean zones are deeply threatened by desertification. To stop or alleviate the ecological and economic problems associated with this desertification, management actions have been implemented over the last three decades. The struggle against desertification has become a national priority in many countries. In Algeria, several management techniques have been used to cope with desertification. This study aims at investigating the effect of exclosure on floristic diversity and soil chemical properties after four years of implementation. 167 phyto-ecological samples have been studied, 122 inside the exclosure and 45 outside. Results showed that plant diversity, composition, vegetation cover, pastoral value and soil fertility were significantly higher in protected areas.

Keywords: Algeria, arid, desertification, pastoral management, soil fertility

Procedia PDF Downloads 185
5089 Just a Heads Up: Approach to Head Shape Abnormalities

Authors: Noreen Pulte

Abstract:

Prior to the 'Back to Sleep' Campaign in 1992, 1 of every 300 infants seen by Advanced Practice Providers had plagiocephaly. Insufficient attention is given to plagiocephaly and brachycephaly diagnoses in practice and pediatric education. In this talk, Nurse Practitioners and Pediatric Providers will be able to: (1) identify red flags associated with head shape abnormalities, (2) learn techniques they can teach parents to prevent head shape abnormalities, and (3) differentiate between plagiocephaly, brachycephaly, and craniosynostosis. The presenter is a Primary Care Pediatric Nurse Practitioner at Ann & Robert H. Lurie Children's Hospital of Chicago and the primary provider for its head shape abnormality clinics. She will help participants translate key information obtained from the birth history, review of systems, and developmental history to understand risk factors for head shape abnormalities and the progression of deformities. Synostotic and non-synostotic head shapes will be explained to help participants differentiate plagiocephaly and brachycephaly from synostotic head shapes. This knowledge is critical for the prompt referral of infants with craniosynostosis for surgical evaluation and correction. Rapid referral for craniosynostosis can possibly direct the patient to a minimally invasive surgical procedure rather than a craniectomy. As for plagiocephaly and brachycephaly, a timely referral can also lead to a physical therapy referral if needed; physical therapy treats torticollis and aids in improving head shape. A well-timed referral to a head shape clinic can possibly eliminate the need for a helmet and/or minimize the time spent in a helmet. Practitioners will learn the importance of obtaining head measurements using calipers. The presenter will explain the head calculations and how they are interpreted to determine the severity of the head shape abnormalities. Severity defines the treatment plan.
Participants will learn when to refer patients to a head shape abnormality clinic and techniques they should teach parents to perform while waiting for the referral appointment. The purpose, mechanics, and logistics of helmet therapy, including the optimal time to initiate helmet therapy, recommended helmet wear time, and tips for helmet therapy compliance, will be described. Case scenarios will be incorporated into the presentation to support learning. The salient points of the case studies will be explained and discussed. Practitioners will be able to immediately translate the knowledge and skills gained in this presentation into their clinical practice.
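The caliper-based head calculations referred to above can be sketched with two standard clinical indices, the cephalic index and the cranial vault asymmetry index. The formulas are standard, but the sample measurements are illustrative and severity cut-offs vary by clinic, so none are hard-coded here:

```python
def cephalic_index(width_mm, length_mm):
    """Cephalic index (CI): biparietal width divided by
    anteroposterior length, times 100. An elevated CI suggests
    brachycephaly (a disproportionately wide head)."""
    return width_mm / length_mm * 100.0

def cvai(diag_a_mm, diag_b_mm):
    """Cranial vault asymmetry index (CVAI): percent difference
    between the two oblique cranial diagonals. An elevated CVAI
    suggests plagiocephaly. Severity thresholds vary by clinic
    and are therefore not hard-coded."""
    longer, shorter = max(diag_a_mm, diag_b_mm), min(diag_a_mm, diag_b_mm)
    return (longer - shorter) / longer * 100.0

# Illustrative caliper measurements (not patient data):
ci = cephalic_index(140.0, 160.0)
asym = cvai(152.0, 146.0)
print(f"CI = {ci:.1f}, CVAI = {asym:.1f}%")
```

In a head shape clinic, such indices are tracked over visits; the trend, together with age, drives decisions such as repositioning guidance, physical therapy, or helmet therapy.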

Keywords: plagiocephaly, brachycephaly, craniosynostosis, red flags

Procedia PDF Downloads 91
5088 The Quotation-Based Algorithm for Distributed Decision Making

Authors: Gennady P. Ginkul, Sergey Yu. Soloviov

Abstract:

The article proposes the use of a so-called "quotation-based algorithm" for simulating the decision-making process in distributed expert systems and multi-agent systems. The idea was adapted from techniques for group decision-making. It is based on the assumption that one expert system may, in performing its logical inference, use rules from another expert system. The application of the algorithm is demonstrated on an example in which the consolidated decision is the one that requires minimal quotation.
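A minimal sketch of the selection step, choosing the candidate decision whose inference chain borrowed the fewest rules from other expert systems. The counting scheme is an assumption on our part; the abstract does not define precisely how quotations are tallied:

```python
def consolidated_decision(quotation_counts):
    """Pick the candidate decision requiring minimal quotation.
    `quotation_counts` maps each candidate decision to the number of
    rules its inference chain quoted (borrowed) from other expert
    systems. This tallying scheme is a hypothetical reading of the
    algorithm, not the paper's formal definition."""
    return min(quotation_counts, key=quotation_counts.get)

# Hypothetical example: three candidate decisions with their
# quotation counts across the distributed expert systems.
best = consolidated_decision({"decision_A": 3, "decision_B": 1, "decision_C": 2})
print(best)
```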

Keywords: backward chaining inference, distributed expert systems, group decision making, multi-agent systems

Procedia PDF Downloads 368
5087 Adsorption Tests of Two Industrial Dyes by Metal Hydroxides

Authors: R. Berrached, H. Ait Mahamed, A. Iddou

Abstract:

Water pollution is nowadays a serious problem, due to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most widely used are the bacterial bed, activated sludge, and lagoons as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is the reason why research has been reoriented towards the adsorption process as an alternative to these traditional processes. In our study, we have attempted to explore the ability of Al and Fe hydroxides to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. Results have shown the efficiency of the two materials on the SBL blue dye.
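The abstract does not report isotherm data, but adsorption results of this kind are commonly quantified by fitting an isotherm model such as the Langmuir equation. A sketch under that assumption; qmax and KL below are illustrative values, not parameters fitted to the SBL or SRL-150 data:

```python
def langmuir_qe(Ce, qmax, KL):
    """Langmuir isotherm: equilibrium adsorbed amount qe (mg of dye per
    g of adsorbent) as a function of the equilibrium dye concentration
    Ce (mg/L). qmax is the monolayer capacity; KL is the Langmuir
    constant. Values used below are illustrative assumptions."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical point on the isotherm:
qe = langmuir_qe(Ce=50.0, qmax=120.0, KL=0.05)
print(f"qe = {qe:.1f} mg/g")
```

A fitted qmax would allow the two hydroxides to be compared quantitatively on each dye, rather than only qualitatively as in the abstract.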

Keywords: metallic hydroxides, dyes, purification, adsorption

Procedia PDF Downloads 331
5086 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming and subjective and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts.
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 28