Search results for: nanofluid applications
1326 The Effect of the Adhesive Ductility on Bond Characteristics of CFRP/Steel Double Strap Joints Subjected to Dynamic Tensile Loadings
Authors: Haider Al-Zubaidy, Xiao-Ling Zhao, Riadh Al-Mahaidi
Abstract:
In recent years, the technique of adhesively bonded fibre reinforced polymer (FRP) composites has found its way into civil engineering applications and has attracted widespread attention as a viable alternative strategy for the retrofitting of civil infrastructure such as bridges and buildings. When adopting this method, the adhesive plays a significant role and controls the general performance and degree of enhancement of the strengthened and/or upgraded structures. This is because the ultimate member strength is highly affected by the failure mode, which is considerably dependent on the utilised adhesive. This paper concerns experimental investigations on the effect of the adhesive used on the bond between a CFRP patch and a steel plate under medium impact tensile loading. Experiments were conducted using double strap joints, and these samples were prepared using two different types of adhesives, Araldite 420 and MBrace saturant. A drop mass rig was used to carry out dynamic tests at impact speeds of 3.35, 4.43 and m/s, while quasi-static tests were implemented at 2 mm/min using an Instron machine. In this test program, ultimate load-carrying capacity and failure modes were examined for all loading speeds. For both static and dynamic tests, the adhesive type has a significant effect on ultimate joint strength. It was found that the double strap joints prepared using Araldite 420 showed higher strength than those prepared utilising MBrace saturant adhesive. The failure mechanism for joints prepared using Araldite 420 is completely different from that of samples prepared utilising MBrace saturant. CFRP failure is the most common failure pattern for joints with Araldite 420, whereas the dominant failure for joints with MBrace saturant adhesive is adhesive failure.
Keywords: CFRP/steel double strap joints, adhesives of different ductility, dynamic tensile loading, bond between CFRP and steel
Procedia PDF Downloads 236
1325 Cost-Effective Materials for Hydrocarbons Recovery from Produced Water
Authors: Fahd I. Alghunaimi, Hind S. Dossary, Norah W. Aljuryyed, Tawfik A. Saleh
Abstract:
Produced water (PW) is one of the largest by-volume waste streams and one of the most challenging effluents in the oil and gas industry. This is due to the variation of contaminants that make up PW. Several materials have been developed, studied, and implemented to remove hydrocarbons from PW. Adsorption is one of the most effective ways of removing oil from PW. In this work, three new and cost-effective hydrophobic adsorbent materials based on 9-octadecenoic acid grafted graphene (POG) were synthesized for oil/water separation. Graphene derived from graphite was modified with 9-octadecenoic acid to yield 9-octadecenoic acid grafted graphene (OG). The newly synthesized materials, which are called POG25, POG50, and POG75, were characterized by using N₂-physisorption (BET) and Fourier transform infrared (FTIR) spectroscopy. The BET surface area of POG75 was the highest at 288 m²/g, whereas POG50 was 225 m²/g and POG25 was the lowest at 79 m²/g. These three materials were also evaluated for their oil-water separation efficiency using a model mixture, which demonstrated that POG75 has the highest oil removal efficiency and the fastest rate of adsorption (Figure 1). POG75 was regenerated, and its performance was verified again with a slightly reduced adsorption rate compared to the fresh material. The mixtures used in the performance test were prepared by mixing nonpolar organic liquids such as heptane, dodecane, or hexadecane into colored water. In general, the new materials showed fast uptake of a certain quantity of the oil due to the highly hydrophobic nature of the materials, which repel water as confirmed by the contact angle of approximately 150°. Besides that, a novel superhydrophobic material was also synthesized by introducing hydrophobic laurate branches on the surface of a stainless steel mesh (SSM). This novel mesh could help to hold the novel adsorbent materials in a column to remove oil from PW. Both POG75 and the novel mesh have the potential to remove oil contaminants from produced water, which will help to provide an opportunity to recover useful components, in addition to reducing the environmental impact and reusing produced water in several applications such as fracturing.
Keywords: graphite to graphene, oleophilic, produced water, separation
Procedia PDF Downloads 122
1324 Study of Electro-Chemical Properties of ZnO Nanowires for Various Application
Authors: Meera A. Albloushi, Adel B. Gougam
Abstract:
The development in the field of piezoelectrics has led to a renewed interest in ZnO nanowires (NWs) as a promising material in the nanogenerator device category. They can be used as a power source for self-powered electronic systems with higher density, higher efficiency, longer lifetime, as well as lower cost of fabrication. Highly aligned ZnO nanowires seem to exhibit a higher performance compared with nonaligned ones. The purpose of this study was to develop ZnO nanowires and to investigate their electrical and chemical properties for various applications. They were grown on silicon (100) and glass substrates. We have used a low temperature and non-hazardous method: aqueous chemical growth (ACG). ZnO (non-doped) and AZO (aluminum-doped) seed layers were deposited using RF magnetron sputtering under an argon pressure of 3 mTorr and a deposition power of 180 W; the growth times were selected to obtain thicknesses in the range of 30 to 125 nm. Some of the films were subsequently annealed. The substrates were immersed, tilted, in an equimolar solution composed of zinc nitrate and hexamine (HMTA) of 0.02 M and 0.05 M in the temperature range of 80 to 90 °C for 1.5 to 2 hours. The X-ray diffractometer shows strong peaks at 2θ = 34.2° for the ZnO films, which indicates that the films have a preferred c-axis wurtzite hexagonal (002) orientation. The surface morphology of the films is investigated by atomic force microscopy (AFM), which proved the uniformity of the films since the roughness is within the 5 nm range. Scanning electron microscopes (SEM) (Quanta FEG 250, Quanta 3D FEG, Nova NanoSEM 650) are used to characterize both the ZnO film and the NWs. SEM images show a forest of ZnO NWs grown vertically, with lengths up to 2000 nm and diameters of 20-300 nm. The SEM images prove that the role of the seed layer is to enhance the vertical alignment of the ZnO NWs at a solution pH of 5-6. Also, electrical and optical properties of the NWs are characterized using electrical force microscopy (EFM). After growing the ZnO NWs, developing the nanogenerator is the second step of this study in order to determine the energy conversion efficiency and the power output.
Keywords: ZnO nanowires (NWs), aqueous chemical growth (ACG), piezoelectric NWs, harvesting energy
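The reported (002) reflection at 2θ = 34.2° can be cross-checked against the wurtzite ZnO lattice with Bragg's law. The sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å); the abstract does not state the radiation source, so the wavelength is an assumption.

```python
import math

# Assumed Cu K-alpha wavelength in angstrom (not stated in the abstract).
WAVELENGTH = 1.5406

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

d_002 = d_spacing(34.2)
# For the wurtzite (002) plane, d(002) = c / 2, so the c lattice parameter follows directly.
c = 2.0 * d_002
print(f"d(002) = {d_002:.3f} A, c = {c:.3f} A")  # ~2.62 A and ~5.24 A, close to bulk ZnO (c ~ 5.21 A)
```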
Procedia PDF Downloads 322
1323 Sol-Gel Derived 58S Bioglass Substituted by Li and Mg: A Comparative Evaluation on in vitro Bioactivity, MC3T3 Proliferation and Antibacterial Efficiency
Authors: Amir Khaleghipour, Amirhossein Moghanian, Elhamalsadat Ghaffari
Abstract:
Modified bioactive glass has been considered a promising multifunctional candidate in bone repair and regeneration due to its attractive properties. The present study mainly aims to evaluate how the individual substitution of lithium (L-BG) and magnesium (M-BG) for calcium can affect the in vitro bioactivity of sol-gel derived substituted 58S bioactive glass (BG), and to present one composition in each of the 60SiO₂–(36-x)CaO–4P₂O₅–(x)Li₂O and 60SiO₂–(36-x)CaO–4P₂O₅–(x)MgO quaternary systems (where x = 0, 5, 10 mol.%) with improved biocompatibility, enhanced alkaline phosphatase (ALP) activity, and the most efficient antibacterial activity against methicillin-resistant Staphylococcus aureus bacteria. To address these aims and study the effect of CaO/Li₂O and CaO/MgO substitution up to 10 mol% in 58S-BGs, the samples were characterized by X-ray diffraction, Fourier transform infrared spectroscopy, inductively coupled plasma atomic emission spectrometry and scanning electron microscopy after immersion in simulated body fluid for up to 14 days. Results indicated that substitution of either CaO/Li₂O or CaO/MgO had a retarding effect on in vitro hydroxyapatite (HA) formation due to the lower supersaturation degree for nucleation of HA compared with 58s-BG. Meanwhile, magnesium had a more pronounced effect. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and alkaline phosphatase (ALP) assays showed that both substitutions of CaO/Li₂O and CaO/MgO up to 5 mol% in 58s-BGs led to increased biocompatibility and stimulated proliferation of the pre-osteoblast MC3T3 cells with respect to the control. On the other hand, substitution of either Li or Mg for Ca in the 58s BG composition resulted in improved bactericidal efficiency against MRSA bacteria. Taken together, the 58s-BG sample with 5 mol% CaO/Li₂O substitution (BG-5L) was considered a multifunctional biomaterial for bone repair/regeneration with improved biocompatibility, enhanced ALP activity as well as enhanced antibacterial efficiency against methicillin-resistant Staphylococcus aureus (MRSA) bacteria among all of the synthesized L-BGs and M-BGs.
Keywords: alkaline, alkaline earth, bioactivity, biomedical applications, sol-gel processes
Procedia PDF Downloads 190
1322 Identification and Origins of Multiple Personality: A Criterion from Wiggins
Authors: Brittany L. Kang
Abstract:
One familiar theory of the origin of multiple personalities focuses on how symptoms of trauma or abuse are central causes, as seen in paradigmatic examples of the condition. The theory states that multiple personalities constitute a congenital condition, as babies all exhibit multiplicity, and that generally alters only remain separated due to trauma. In more typical cases, the alters converge and become a single identity; only in cases of trauma, according to this account, do the alters remain separated. This theory is misleading in many aspects, the most prominent being that not all multiple personality patients are victims of child abuse or trauma, nor are all cases of multiple personality observed in early childhood. The use of this criterion also causes clinical problems, including an inability to identify multiple personalities through the variety of symptoms and traits seen across observed cases. These issues present a need for revision of the currently applied criterion in order to separate the notion of child abuse and to better understand the origins of multiple personalities itself. Identifying multiplicity through the application of identity theories will improve the current criterion, offering a bridge between identifying existing cases and understanding their origins. We begin by applying arguments from Wiggins, who held that each personality within a multiple was not a whole individual, but rather characters who switch off. Wiggins' theory is supported by observational evidence of how such characters are differentiated. Alters of older ages are seen to require different prescription lenses, in addition to having different handwriting. The alters may also display drastically varying styles of clothing, preferences in food, their gender, sexuality, religious beliefs and more. The definitions of terms such as 'personality' or 'persons' also become more distinguished, leading to greater understanding of who exactly can be classified as a patient of multiple personalities. While a more common meaning of personality is a designation of specific characteristics which account for the entirety of a person, this paper argues from Wiggins' theory that each 'personality' is in fact only partial. Clarification of the concept in question will allow for more successful future clinical applications.
Keywords: identification, multiple personalities, origin, Wiggins' theory
Procedia PDF Downloads 242
1321 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Omar Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
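As an illustration of the speckle-reduction half of that pipeline, the following is a minimal local-statistics Lee filter in Python/NumPy. The CNN component, training data, and the authors' exact filter settings are not described in the abstract, so the 7x7 window and the image-wide noise-variance estimate used here are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(amplitude: np.ndarray, window: int = 7) -> np.ndarray:
    """Classic local-statistics Lee speckle filter.

    Each pixel becomes mean + k*(pixel - mean), where the gain k depends on the
    ratio of local variance to an estimated noise variance.
    """
    img = amplitude.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_var = local_sq_mean - local_mean ** 2

    # Noise variance estimated from the whole image (assumes roughly stationary speckle).
    noise_var = np.mean(local_var)
    gain = np.clip((local_var - noise_var) / np.maximum(local_var, 1e-12), 0.0, 1.0)
    return local_mean + gain * (img - local_mean)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(1, 2, 128), np.linspace(1, 2, 128))
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # ~4-look speckle
    filtered = lee_filter(speckled)
    print("std before/after:", round(float(speckled.std()), 3), round(float(filtered.std()), 3))
```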
Procedia PDF Downloads 12
1320 A Comparative Study of Linearly Graded and without Graded Photonic Crystal Structure
Authors: Rajeev Kumar, Angad Singh Kushwaha, Amritanshu Pandey, S. K. Srivastava
Abstract:
Photonic crystals (PCs) have attracted much attention due to their electromagnetic properties and potential applications. In PCs, there is a certain range of wavelengths where electromagnetic waves are not allowed to pass, called the photonic band gap (PBG). A localized defect mode will appear within the PBG, due to a change in the interference behavior of light, when we create a defect in the periodic structure. We can also create different types of defect structures by inserting or removing a layer from the periodic layered structure in two- and three-dimensional PCs. We can design microcavities, waveguides, and perfect mirrors by creating a point defect, line defect, and planar defect in two- and three-dimensional PC structures. One-dimensional and two-dimensional PCs with defects were reported theoretically and experimentally by Smith et al. in conventional photonic band gap structures. In the present paper, we have presented the defect mode tunability in tilted non-graded photonic crystal (NGPC) and linearly graded photonic crystal (LGPC) using lead sulphide (PbS) and titanium dioxide (TiO2) in the infrared region. A birefringent defect layer is created in the NGPC and LGPC using potassium titanyl phosphate (KTP). With the help of the transfer matrix method, the transmission properties of the proposed structure are investigated for transverse electric (TE) and transverse magnetic (TM) polarization. NGPC and LGPC without a defect layer are also investigated. We have found that a photonic band gap (PBG) arises in the infrared region. An additional defect layer of KTP is created in the NGPC and LGPC structures. We have seen that an additional transmission mode appears in the PBG region. It is due to the addition of the defect layer. We have also seen the effect of linear gradation in thickness, angle of incidence, tilt angle, and thickness of the defect layer on the PBG and the additional transmission mode. We have observed that the additional transmission mode and PBG can be tuned by changing the above parameters. The proposed structure may be used as a channeled filter, optical switch, monochromator, and broadband optical reflector.
Keywords: defect modes, graded photonic crystal, photonic crystal, tilt angle
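The transmission spectra described above follow from the standard 2x2 transfer matrix method for a one-dimensional multilayer. The sketch below applies a generic normal-incidence TMM to a periodic PbS/TiO2 stack; the refractive indices, thicknesses, and design wavelength are illustrative assumptions, and the paper's graded thickness profile, KTP birefringence, and oblique-incidence TE/TM treatment are not reproduced.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic 2x2 matrix of one homogeneous layer at normal incidence."""
    phi = 2.0 * np.pi * n * d / wavelength  # phase thickness
    return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                     [1j * n * np.sin(phi), np.cos(phi)]])

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Power transmittance through a stack of (refractive index, thickness) layers."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength)
    t = 2.0 * n_in / (n_in * m[0, 0] + n_in * n_out * m[0, 1] + m[1, 0] + n_out * m[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# Illustrative 10-period PbS/TiO2 quarter-wave stack designed around 2000 nm
# (indices and thicknesses are assumptions, not the values used in the paper).
n_pbs, n_tio2, lam0 = 4.1, 2.3, 2000.0
stack = [(n_pbs, lam0 / (4 * n_pbs)), (n_tio2, lam0 / (4 * n_tio2))] * 10

for lam in (1600.0, 2000.0, 2400.0):
    print(f"T({lam:.0f} nm) = {transmittance(stack, lam):.4f}")  # dip near 2000 nm marks the PBG
```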
Procedia PDF Downloads 376
1319 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers
Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan
Abstract:
Aptamers are recently discovered ~80-100 bp long artificial oligonucleotides that have not only demonstrated their applications in therapeutics; they are also tremendously used in diagnostic and sensing applications to detect different biomarkers and drugs. Synthesizing aptamers for protein or genomic templates is comparatively feasible in the laboratory, but aptamers based on drugs or other chemical targets require major specification and proper optimization and validation. One has to optimize all selection, amplification, and characterization steps of the end product, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) for random oligonucleotide pool synthesis, and further used the pool in Systematic Evolution of Ligands by Exponential Enrichment (SELEX) for the synthesis of anti-psychotic drug based aptamers. Anti-psychotic drugs are major tranquilizers used to control psychosis for proper cognitive functions. Though their medical use is low, their misuse may lead to severe medical conditions such as addiction and can promote crime, with social and economic impact. In this work, we have approached the in-vitro SELEX method for ssDNA synthesis towards anti-psychotic drug (in this case 'target') based aptamer synthesis. The study was performed in three stages, where the first stage included synthesis of a random oligonucleotide pool via asymmetric PCR, whose end product was analyzed with electrophoresis and purified for further stages. The purified oligonucleotide pool was incubated in SELEX buffer, and further partition was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partition, and significant results were obtained. After the repetitive partition and amplification steps of target-specific oligonucleotides, the final stage included sequencing of the end product. We can confirm the specific sequence for anti-psychotic drugs, which will be further used in diagnostic applications in clinical and forensic set-ups.
Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX
Procedia PDF Downloads 135
1318 Algae Growth and Biofilm Control by Ultrasonic Technology
Authors: Vojtech Stejskal, Hana Skalova, Petr Kvapil, George Hutchinson
Abstract:
Algae growth has been an important issue in the water management of water plants, ponds and lakes, swimming pools, aquaculture and fish farms, gardens and golf courses for the last decades. There are solutions based on chemical or biological principles. Apart from these traditional principles for inhibition of algae growth and biofilm production, there are also physical methods which are very competitive compared to the traditional ones. Ultrasonic technology is one of these alternatives. An ultrasonic emitter is able to eliminate the biofilm which behaves as a host and attachment point for algae and is the original reason for the algae growth. The ultrasound waves prevent the majority of the bacteria in planktonic form from becoming strongly attached sessile bacteria that create a welcoming layer for biofilm production. Biofilm creation is very fast – in serene water it takes between 30 minutes and 4 hours, depending on temperature and other parameters. The ultrasound device does not kill bacteria. Ultrasound waves pass through the bacteria, which react as if they were in very turbulent water even though the water is visually completely serene. In these conditions, the bacteria do not excrete the polysaccharide glue they use to attach to the surface of the pool or pond where the ultrasonic technology is used. Ultrasonic waves decrease the production of biofilm on the surfaces in the selected area. In case the inner surfaces of a pond or basin are already clean at the start of the application of ultrasonic technology, biofilm production is almost completely inhibited. This paper talks about two different pilot applications – one in the Czech Republic and the second in the United States of America, where the ultrasonic technology used (AlgaeControl) comes from. On both sites, the Mezzo Ultrasonic Algae Control System was used, with very positive results not only on biofilm production but also on algae growth in the surrounding area. The technology has been successfully tested in two different environments. The poster describes the differences and their influence on the efficiency of the ultrasonic technology application. Conclusions and lessons learned can possibly be applied to other sites within Europe or even further afield.
Keywords: algae growth, biofilm production, ultrasonic solution, ultrasound
Procedia PDF Downloads 269
1317 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics in clinical engagement and needs rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopic techniques, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges, including (I) long-term processes in sample pretreatment; (II) difficulties in direct metabolic analysis of biosamples due to complexity; (III) low molecular weight metabolite detection with accuracy; and (IV) construction of diagnostic tools by materials- and device-based platforms for real-case application in biomedicine. Development of chips with nanomaterials is promising to address these critical issues. Mass spectrometry (MS) has displayed high sensitivity and accuracy, throughput, reproducibility, and resolution for molecular analysis. Particularly, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed for mass measurement in seconds and high sensitivity with low cost towards large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS, using a series of chips with gold nanoshells on the surface obtained through controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF) and exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To our best knowledge, this work initiates a bionanotechnology based platform for advanced metabolic analysis toward large-scale diagnostic use.
Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 163
1316 Computational Insight into a Mechanistic Overview of Water Exchange Kinetics and Thermodynamic Stabilities of Bis and Tris-Aquated Complexes of Lanthanides
Authors: Niharika Keot, Manabendra Sarma
Abstract:
A thorough investigation of Ln³⁺ complexes with more than one inner-sphere water molecule is crucial for designing high relaxivity contrast agents (CAs) used in magnetic resonance imaging (MRI). This study accomplished a comparative stability analysis of two hexadentate (H₃cbda and H₃dpaa) and two heptadentate (H₄peada and H₃tpaa) ligands with Ln³⁺ ions. The higher stability of the hexadentate H₃cbda and heptadentate H₄peada ligands has been confirmed by the binding affinity and Gibbs free energy analysis in aqueous solution. In addition, energy decomposition analysis (EDA) reveals the higher binding affinity of the peada⁴⁻ ligand than the cbda³⁻ ligand towards Ln³⁺ ions due to the higher charge density of the peada⁴⁻ ligand. Moreover, a mechanistic overview of water exchange kinetics has been carried out based on the strength of the metal–water bond. The strength of the metal–water bond follows the trend Gd–O47 (w) > Gd–O39 (w) > Gd–O36 (w) in the case of the tris-aquated [Gd(cbda)(H₂O)₃] and Gd–O43 (w) > Gd–O40 (w) for the bis-aquated [Gd(peada)(H₂O)₂]⁻ complex, which was confirmed by bond length, electron density (ρ), and electron localization function (ELF) at the corresponding bond critical points. Our analysis also predicts that the activation energy barrier decreases with the decrease in bond strength; hence kex increases. The ¹⁷O and ¹H hyperfine coupling constant values of all the coordinated water molecules were different, calculated by using the second-order Douglas–Kroll–Hess (DKH2) approach. Furthermore, the ionic nature of the bonding in the metal–ligand (M–L) bond was confirmed by the Quantum Theory of Atoms-In-Molecules (QTAIM) and ELF along with energy decomposition analysis (EDA). We hope that the results can be used as a basis for the design of highly efficient Gd(III)-based high relaxivity MRI contrast agents for medical applications.
Keywords: MRI contrast agents, lanthanide chemistry, thermodynamic stability, water exchange kinetics
Procedia PDF Downloads 83
1315 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader’s Cognitive State
Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov
Abstract:
Electrophysiology of information processing in reading is certainly a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and spontaneous electroencephalogram (EEG) while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 were opposite pairs of texts in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants had to read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via ARKit Technology by Apple. EEG spectral amplitude was analyzed in Fz for the theta band (4-8 Hz) and in C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by an increase in theta spectral amplitude in Fz compared to reading a boring text (3.87 µV ± 0.12 and 3.67 µV ± 0.11, respectively). When instructions are given for reading, we see less alpha activity than during free reading of the same text (3.34 µV ± 0.20 and 3.73 µV ± 0.28, respectively, for C4 as the most representative channel). The non-fiction text elicited less activity in the alpha band (C4: 3.60 µV ± 0.25) than the fiction text (C4: 3.66 µV ± 0.26). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 µV ± 0.29) and the pseudo text (C4: 3.38 µV ± 0.22). These results suggest that some brain activity we see on EEG is sensitive to particular features of the text. We propose that changes in the theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader's cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm
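The band-specific spectral amplitudes compared in this study (theta at Fz, alpha at C3/C4/P3/P4) can be obtained from raw EEG with a Welch periodogram, and reading conditions compared with the Friedman test. The sketch below uses synthetic signals and SciPy; the sampling rate, epoch length, and condition gains are assumptions, since the authors' exact spectral-estimation settings are not given.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import friedmanchisquare

FS = 250  # assumed EEG sampling rate in Hz

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean spectral amplitude (sqrt of the Welch PSD) within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sqrt(psd[mask]).mean()

rng = np.random.default_rng(1)
n_subjects, n_samples = 30, FS * 60  # one minute of synthetic Fz data per condition

# Three hypothetical reading conditions per subject (purely illustrative data).
theta_by_condition = []
for gain in (1.0, 1.1, 1.2):  # e.g. boring / neutral / interesting text
    amps = [band_amplitude(gain * rng.standard_normal(n_samples), FS, 4, 8)
            for _ in range(n_subjects)]
    theta_by_condition.append(amps)

stat, p = friedmanchisquare(*theta_by_condition)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
```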
Procedia PDF Downloads 80
1314 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration
Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef
Abstract:
Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate, but must also have a fast response for a moving-picture input. The high frame rate of video in motion games or distance control can limit the program's overall speed, so image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used for the two techniques. The first technique uses the known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which obtains a one-dimensional hand representation, to enumerate the number of fingers. For both discussed methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and responsive to a real-time video input compared to the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.
Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab
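As an illustration of the radial distance idea, the sketch below counts fingers from a one-dimensional radial profile: the distance from the hand centroid to each contour point, in which every extended finger appears as one prominent peak. The paper's implementation works in Matlab on contours extracted from the cleaned hand image; here the profile is synthesized so the sketch is self-contained, and the peak-detection thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def count_fingers(radial_profile: np.ndarray) -> int:
    """Count extended fingers from a 1-D radial-distance hand representation.

    radial_profile[i] is the distance from the hand centroid to the i-th contour
    point, normalised to [0, 1]; each extended finger shows up as one sharp peak.
    """
    peaks, _ = find_peaks(radial_profile,
                          height=0.7,                        # fingertips lie far from the centroid
                          prominence=0.2,                    # reject shallow bumps along the palm
                          distance=len(radial_profile) // 12)
    return len(peaks)

if __name__ == "__main__":
    # Synthetic profile standing in for a real contour: a palm baseline plus
    # four narrow Gaussian bumps for four extended fingers.
    s = np.linspace(0.0, 1.0, 720)                       # normalised position along the contour
    profile = 0.45 * np.ones_like(s)
    for centre in (0.15, 0.30, 0.45, 0.60):
        profile += 0.55 * np.exp(-((s - centre) / 0.02) ** 2)
    profile /= profile.max()
    print("fingers detected:", count_fingers(profile))    # expected: 4
```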
Procedia PDF Downloads 382
1313 Phase Composition Analysis of Ternary Alloy Materials for Gas Turbine Applications
Authors: Mayandi Ramanathan
Abstract:
Gas turbine blades see the most aggressive thermal stress conditions within the engine, due to high Turbine Entry Temperatures in the range of 1500 to 1600°C. The blades rotate at very high rotation rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep. During its service over time under high thermal loads, the blade will deform, lengthen and rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades and gas turbine components. The proposed advanced Ti alloy material needs a process that provides a strategic orientation of metallic ordering, uniformity in composition and high metallic strength. The chemical composition of the proposed Ti alloy material (25% Ta/(Al+Ta) ratio), unlike Ti-47Al-2Cr-2Nb, has less excess Al that could limit the service life of turbine blades. Properties and performance of Ti-47Al-2Cr-2Nb and Ti-6Al-4V materials will be compared with those of the proposed Ti alloy material to generalize the performance metrics of various gas turbine components. This paper summarizes the effects of additive manufacturing and heat treatment process conditions on the changes in the phase composition, grain structure, lattice structure of the material, tensile strength, creep strain rate, thermal expansion coefficient and fracture toughness at different temperatures. Based on these results, additive manufacturing and heat treatment process conditions will be optimized to fabricate a turbine blade with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. Improvements in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. A correlation of phase composition and creep strain rate will also be discussed.
Keywords: high temperature materials, aerospace, specific strength, creep strain, phase composition
Procedia PDF Downloads 116
1312 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges
Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch
Abstract:
Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data enable insights to be gained into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.
Keywords: big data interpretation, datathon, systems toxicology, verification
Procedia PDF Downloads 278
1311 Study on the Wave Dissipation Performance of Double-Cylinder and Double-Plate Floating Breakwater
Authors: Liu Bijin
Abstract:
Floating breakwaters have several advantages, including being environmentally friendly, easy to construct, and cost-effective regardless of water depth. They have a broad range of applications in coastal engineering. However, they face significant challenges due to the unstable effect of wave dissipation, structural vulnerability, and high mooring system requirements. This paper investigates the wave dissipation performance of a floating breakwater structure. The structure consists of double cylinders, double vertical plates, and horizontal connecting plates. The investigation is carried out using physical model tests and numerical simulation methods based on STAR-CCM+. This paper discusses the impact of wave elements, relative vertical plate heights, and relative horizontal connecting plate widths on the wave dissipation performance of the double-cylinder, double-plate floating breakwater (DCDPFB). The study also analyses the changes in local vorticity and velocity fields around the DCDPFB to determine the optimal structural dimensions. The study found that the relative width of the horizontal connecting plate, the relative height of the vertical plate, and the size of the semi-cylinder are the key factors affecting the wave dissipation performance of the DCDPFB. The transmittance coefficient is minimally affected by the wave height and the depth of water entry. The local vortex and velocity field formed around the DCDPFB are important factors for dissipating wave energy. The test section of the DCDPFB, constructed according to the optimal structural dimensions, exhibited good wave dissipation performance in offshore prototype tests.
Keywords: floating breakwater, wave dissipation performance, transmittance coefficient, model test
Procedia PDF Downloads 56
1310 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction. However, these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination changes in variance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental tests, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
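A minimal OpenCV sketch of the same pipeline shape is shown below: CLAHE on the luminance channel, a Gaussian-mixture background model, then morphological opening. OpenCV's built-in MOG2 subtractor stands in for the authors' own K=5 mixture, and the clip limit, history length, and kernel size are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # limits per-tile contrast amplification
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def foreground_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Illumination-compensated GMM foreground mask with morphological clean-up."""
    # 1) Mitigate illumination changes: equalise the luminance channel only.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    equalised = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # 2) Per-pixel Gaussian mixture background model (MOG2 replaces the K = 5 GMM of the abstract).
    mask = subtractor.apply(equalised)

    # 3) Morphological erosion followed by dilation (opening) to remove speckle noise.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fg = None
    for i in range(50):                                   # synthetic frames instead of a video file
        frame = np.full((120, 160, 3), 90, np.uint8)
        frame = cv2.add(frame, rng.integers(0, 10, frame.shape, dtype=np.uint8))
        cv2.rectangle(frame, (10 + 2 * i, 40), (30 + 2 * i, 80), (200, 200, 200), -1)  # moving object
        fg = foreground_mask(frame)
    print("foreground pixels in last frame:", int(np.count_nonzero(fg)))
```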
Procedia PDF Downloads 256
1309 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we coupled the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
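To make the computation class concrete, here is a plain shared-memory 5-point Laplace stencil in Python/NumPy contrasting the Jacobi and Gauss-Seidel traversals named above. This shows only the numerics of the target kernels; the generated ORWL-parallel C code, the temporal blocking, and the DSL syntax are not reproduced here.

```python
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    """One Jacobi sweep of the 5-point Laplace stencil: reads the old grid, writes a new one."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

def gauss_seidel_step(grid: np.ndarray) -> np.ndarray:
    """One Gauss-Seidel sweep: updates in place, so later points see already-updated values."""
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            grid[i, j] = 0.25 * (grid[i - 1, j] + grid[i + 1, j] +
                                 grid[i, j - 1] + grid[i, j + 1])
    return grid

if __name__ == "__main__":
    n, iters = 64, 200
    u = np.zeros((n, n)); u[0, :] = 1.0      # hot top boundary, cold elsewhere
    v = u.copy()
    for _ in range(iters):
        u = jacobi_step(u)
        v = gauss_seidel_step(v)
    print("centre value, Jacobi vs Gauss-Seidel:",
          round(float(u[n // 2, n // 2]), 4), round(float(v[n // 2, n // 2]), 4))
```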
Procedia PDF Downloads 127
1308 Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles
Authors: Angelo Lerro, Manuela Battipede, Piero Gili, Alberto Brandl
Abstract:
Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are hard to meet due to the generally restricted amount of available space and allowable weight for the aircraft systems, limiting their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure significant data such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it allows a simplified inertial and air data system to be obtained, reducing external devices. In fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits which would be gained by the implementation of this system in UAV applications. A simplification of the entire ADAHRS architecture would reduce the overall cost together with improving safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm using the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, gaining a significant improvement in terms of versatility.
Keywords: aerodynamic angles, air data system, flight test, neural network, unmanned aerial vehicle, virtual sensor
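The patented Smart-ADAHRS algorithm itself is not reproduced here; as an illustration of the general virtual-sensor idea (a neural network mapping pressure and inertial measurements to aerodynamic angles), the following scikit-learn sketch is trained on synthetic data. The input set, the toy flight-mechanics relationships, and the network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic "flight test" data. The relationships below are toy stand-ins
# chosen only so the angles are recoverable from the inputs.
alpha = rng.uniform(-5.0, 15.0, n)              # angle of attack, deg
beta = rng.uniform(-10.0, 10.0, n)              # sideslip angle, deg
q_dyn = rng.uniform(300.0, 3000.0, n)           # dynamic pressure, Pa
p_static = rng.uniform(80e3, 101e3, n)          # static pressure, Pa
pitch_rate = 0.02 * alpha + rng.normal(0, 0.05, n)
yaw_rate = 0.03 * beta + rng.normal(0, 0.05, n)
load_factor = 1.0 + 0.03 * alpha + 1e-4 * q_dyn + rng.normal(0, 0.02, n)

X = np.column_stack([q_dyn, p_static, pitch_rate, yaw_rate, load_factor])
y = np.column_stack([alpha, beta])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

rmse = np.sqrt(((model.predict(X_te) - y_te) ** 2).mean(axis=0))
print("RMSE on [alpha, beta] in deg:", rmse.round(2))
```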
Procedia PDF Downloads 221
1307 Monitoring the Thin Film Formation of Carrageenan and PNIPAm Microgels
Authors: Selim Kara, Ertan Arda, Fahrettin Dolastir, Önder Pekcan
Abstract:
Biomaterials and thin film coatings play a fundamental role in the medical, food and pharmaceutical industries. Carrageenan is a linear sulfated polysaccharide extracted from algae and seaweeds. To date, such biomaterials have been used in many smart drug delivery systems due to their biocompatibility and antimicrobial activity. Poly(N-isopropylacrylamide) (PNIPAm) gels and copolymers have also been used in medical applications. PNIPAm shows a lower critical solution temperature (LCST) at about 32-34 °C, which is very close to the human body temperature. Below and above the LCST point, PNIPAm gels exhibit distinct phase transitions between swollen and collapsed states. A special class of gels are microgels, which can react to environmental changes significantly faster than bulk gels due to their small size. The quartz crystal microbalance (QCM) measurement technique is one of the attractive techniques that has been used for monitoring thin-film formation processes. A sensitive QCM system was designed to detect a 0.1 Hz difference in resonance frequency and a 10⁻⁷ change in energy dissipation values, which are the measures of the deposited mass and the film rigidity, respectively. PNIPAm microgels with diameters around a few hundred nanometers in water were produced via a precipitation polymerization process. 5 MHz quartz crystals with functionalized gold surfaces were used for the deposition of the carrageenan molecules and microgels from the solutions, which were slowly pumped through a flow cell. Interactions between charged carrageenan and microgel particles were monitored during the formation of the film layers, and the Sauerbrey masses of the deposited films were calculated. The critical phase transition temperatures around the LCST were detected during the heating and cooling cycles. It was shown that it is possible to monitor the interactions between PNIPAm microgels and biopolymer molecules, and it is also possible to specify the critical phase transition temperatures by using a QCM system.
Keywords: carrageenan, phase transitions, PNIPAm microgels, quartz crystal microbalance (QCM)
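The Sauerbrey relation mentioned above converts the measured frequency shift of the 5 MHz crystal into areal mass, Δm = -C·Δf/n. The sketch below evaluates the constant C from standard quartz material properties and applies it to an illustrative frequency shift; the -120 Hz value is an assumption, not a measurement from the study.

```python
import math

RHO_Q = 2.648      # quartz density, g/cm^3
MU_Q = 2.947e11    # quartz shear modulus, g/(cm*s^2)

def sauerbrey_mass(delta_f_hz: float, f0_hz: float = 5.0e6, n: int = 1) -> float:
    """Areal mass change in ng/cm^2 from a QCM frequency shift (rigid-film assumption)."""
    c = math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)   # g/(cm^2*Hz); ~17.7 ng/(cm^2*Hz) at 5 MHz
    return -c * 1e9 * delta_f_hz / n                   # a negative shift means mass was deposited

# Illustrative only: a -120 Hz shift on the fundamental of the 5 MHz crystal.
print(f"C = {math.sqrt(RHO_Q * MU_Q) / (2.0 * 5e6 ** 2) * 1e9:.1f} ng/(cm^2*Hz)")
print(f"adsorbed mass = {sauerbrey_mass(-120.0):.0f} ng/cm^2")
```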
Procedia PDF Downloads 231
1306 Combined Effect of Roughness and Suction on Heat Transfer in a Laminar Channel Flow
Authors: Marzieh Khezerloo, Lyazid Djenidi
Abstract:
Owing to the wide range of micro-device applications, the problem of mixing at small scales is of significant interest. Also, because most of these processes produce heat, strategies for heat removal in these devices need to be developed and implemented. There are many studies which focus on the effect of roughness or suction on heat transfer performance separately, although it would be useful to take advantage of these two methods together to improve heat transfer performance. Unfortunately, there is a gap in this area. The present numerical study is carried out to investigate the combined effects of roughness and wall suction on the heat transfer performance of a laminar channel flow; suction is applied on the top and back faces of the roughness element, respectively. The study is carried out for different Reynolds numbers, different suction rates, and various locations of the suction area on the roughness. The flow is assumed to be two-dimensional, incompressible, laminar, and steady state. The governing Navier-Stokes equations are solved using ANSYS Fluent 18.2 software. The present results are tested against previous theoretical results. The results show that by adding suction, the local Nusselt number is enhanced in the channel. In addition, it is shown that by applying suction on the bottom section of the roughness back face, one can reduce the thickness of the thermal boundary layer, which leads to an increase in the local Nusselt number. This indicates that suction is an effective means of improving the heat transfer rate (suction controls the thickness of the thermal boundary layer). It is also shown that the size and intensity of the vortical motion behind the roughness element decreased with an increasing suction rate, which leads to a higher local Nusselt number. So, it can be concluded that by using suction, strategically located on the roughness element, one can control both the recirculation region and the heat transfer rate. Further results will be presented at the conference for the coefficient of drag and the effect of adding more roughness elements.
Keywords: heat transfer, laminar flow, numerical simulation, roughness, suction
Procedia PDF Downloads 113
1305 Online Versus Face-To-Face – How Do Video Consultations Change The Doctor-Patient-Interaction
Authors: Markus Feufel, Friederike Kendel, Caren Hilger, Selamawit Woldai
Abstract:
Since the corona pandemic, the use of video consultation has increased remarkably. For vulnerable groups such as oncological patients, the advantages seem obvious. But how does video consultation potentially change the doctor-patient relationship compared to face-to-face consultation? Which barriers may hinder the effective use of this consultation format in practice? We are presenting first results from a mixed-methods field study, funded by the Federal Ministry of Health, which will provide the basis for a hands-on guide for both physicians and patients on how to improve the quality of video consultations. We use a quasi-experimental design to analyze qualitative and quantitative differences between face-to-face and video consultations based on video recordings of N = 64 actual counseling sessions (n = 32 for each consultation format). Data will be recorded from n = 32 gynecological and n = 32 urological cancer patients at two clinics. After the consultation, all patients will be asked to fill out a questionnaire about their consultation experience. For quantitative analyses, the counseling sessions will be systematically compared in terms of verbal and nonverbal communication patterns. Relative frequencies of eye contact and the information exchanged will be compared using χ²-tests. The validated questionnaire MAPPIN'Obsdyad will be used to assess the expression of shared decision-making parameters. In addition, semi-structured interviews will be conducted with n = 10 physicians and n = 10 patients experienced with video consultation, for which a qualitative content analysis will be conducted. We will elaborate the comprehensive methodological approach we used to compare video vs. face-to-face consultations and present first evidence on how video consultations change the doctor-patient interaction. We will also outline possible barriers to video consultations and best practices for how they may be overcome. Based on the results, we will present and discuss recommendations outlining best practices for how to prepare and conduct high-quality video consultations from the perspective of both physicians and patients.
Keywords: video consultation, patient-doctor-relationship, digital applications, technical barriers
Procedia PDF Downloads 140
1304 Estimating the Impact of Appliance Energy Efficiency Improvement on Residential Energy Demand in Tema City, Ghana
Authors: Marriette Sakah, Samuel Gyamfi, Morkporkpor Delight Sedzro, Christoph Kuhn
Abstract:
Ghana is experiencing rapid economic development and its cities command an increasingly dominant role as centers of both production and consumption. Cities run on energy and are extremely vulnerable to energy scarcity, energy price escalations and the health impacts of very poor air quality. The overriding concern in Ghana and other West African states is bridging the gap between energy demand and supply. Energy efficiency presents a cost-effective solution for supply challenges by enabling more coverage with current power supply levels and reducing the need for investment in additional generation capacity and grid infrastructure. In Ghana, major issues for energy policy formulation in residential applications include the lack of disaggregated electrical energy consumption data and the lack of thorough understanding with regard to socio-economic influences on energy efficiency investment. This study uses a bottom-up approach to estimate baseline electricity end-use as well as the energy consumption of the best available technologies, to enable estimation of the energy-efficiency resource in terms of the relative reduction in total energy use for Tema city, Ghana. A ground survey was conducted to assess the probable consumer behavior in response to energy efficiency initiatives, to enable estimation of the amount of savings that would occur in response to specific policy interventions with regard to funding and incentive provision targeted at households. Results show that a 16%-54% reduction in annual electricity consumption is reasonably achievable, depending on the level of incentive provision. The saved energy could supply 10,000-34,000 additional households if the added households use only the best available technology. Political support and consumer awareness are necessary to translate energy efficiency resources into real energy savings.
Keywords: achievable energy savings, energy efficiency, Ghana, household appliances
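The bottom-up logic described above (appliance stock times unit consumption, baseline versus best available technology) can be sketched as below. The appliance list, ownership rates, wattages, and usage hours are illustrative assumptions only, not the Tema survey data.

```python
# Bottom-up estimate of household electricity use and the technical savings from
# switching every appliance to its best available technology (BAT). All numbers
# below are illustrative assumptions, not the Tema survey data.
APPLIANCES = {
    # name: (ownership rate, units per owning household, baseline W, BAT W, hours per day)
    "refrigerator":   (0.80, 1, 150, 60, 24),
    "lighting point": (1.00, 6, 60, 9, 5),
    "fan":            (0.90, 2, 75, 35, 8),
    "television":     (0.85, 1, 120, 45, 6),
}
HOUSEHOLDS = 100_000  # assumed number of households in the study area

def annual_gwh(use_bat: bool) -> float:
    """Total annual consumption in GWh, with baseline or BAT appliance wattages."""
    total_kwh = 0.0
    for rate, units, base_w, bat_w, hours in APPLIANCES.values():
        watts = bat_w if use_bat else base_w
        total_kwh += HOUSEHOLDS * rate * units * watts * hours * 365 / 1000.0
    return total_kwh / 1e6

baseline, bat = annual_gwh(False), annual_gwh(True)
print(f"baseline {baseline:.1f} GWh/yr, BAT {bat:.1f} GWh/yr, "
      f"technical saving {100 * (1 - bat / baseline):.0f}%")
```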
Procedia PDF Downloads 214
1303 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations
Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li
Abstract:
The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since the surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as a test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling and a HAAS ST-20 CNC machine is used for turning in this research. Taguchi analysis is used to optimize the surface roughness of the machined parts. The L9 orthogonal array is designed for four controllable factors with three different levels each, resulting in 18 experimental runs. The signal-to-noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow and finish cut, and for the milling process are feed rate, spindle speed, step over and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the different uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10 respectively for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70 respectively for the milling process. The surface roughness was improved from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to efficiently utilize the Taguchi design analysis to improve surface roughness.
Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling
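For a nominal-the-best target such as 75 ± 15 µin, the Taguchi S/N ratio is 10·log10(mean²/variance), and the capability measures are Cp = (USL-LSL)/(6σ) and Cpk = min(USL-μ, μ-LSL)/(3σ). The sketch below computes these for a set of placeholder roughness readings; the nine values are illustrative, not the experimental data.

```python
import numpy as np

USL, LSL = 90.0, 60.0   # surface roughness spec limits in microinches (75 +/- 15)

def sn_nominal_the_best(y: np.ndarray) -> float:
    """Taguchi nominal-the-best signal-to-noise ratio, 10*log10(mean^2 / variance)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def capability(y: np.ndarray) -> tuple:
    """Process capability Cp and capability index Cpk against the spec limits."""
    mu, sigma = float(np.mean(y)), float(np.std(y, ddof=1))
    cp = (USL - LSL) / (6.0 * sigma)
    cpk = min(USL - mu, mu - LSL) / (3.0 * sigma)
    return cp, cpk

# Placeholder roughness readings (microinches) for one parameter setting of the L9 array.
readings = np.array([71.0, 74.5, 76.2, 73.8, 77.1, 72.4, 75.6, 74.0, 76.8])
cp, cpk = capability(readings)
print(f"S/N = {sn_nominal_the_best(readings):.2f} dB, Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```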
Procedia PDF Downloads 155
1302 Education-based, Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors
Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller
Abstract:
In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies in electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge. Virtual laboratories have gained popularity as cost-effective alternatives to physical testing, mitigating the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in motor outputs corresponding to different input parameters. It enables users to explore healthy motor conditions and the effects of short-circuit faults in one phase winding. Additionally, the interface includes menus through which users can access equivalent circuits related to the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach in laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
Keywords: magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault
Procedia PDF Downloads 53
1301 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models
Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan
Abstract:
This paper extends the conditional autoregressive range (CARR) model to a multivariate CARR (MCARR) model, and further to a two-stage MCARR-return model, in order to model and forecast the volatilities, correlations and returns of multiple financial assets. The first-stage model fits the scaled realised Parkinson volatility measures of the individual series and of their pairwise sums to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise-sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is fed into the stage-two return model to capture the heteroskedasticity of the assets’ returns. We investigate different choices of mean functions to describe the volatility dynamics. Empirical applications are based on the Standard and Poor’s 500, Dow Jones Industrial Average and Dow Jones United States Financial Services indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than CARR models based on two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return models for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.
Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk
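The two ingredients named in the abstract, the Parkinson range-based volatility measure and the CARR conditional-range recursion, can be illustrated compactly. The Python sketch below uses hypothetical prices and illustrative (not fitted) CARR(1,1) parameters; the paper's multivariate, two-stage specification and its mean functions are not reproduced here.

```python
import numpy as np

# Hypothetical daily high/low prices; not data from the study.
high = np.array([101.2, 102.5, 101.8, 103.0, 102.2])
low  = np.array([ 99.5, 100.8, 100.1, 101.1, 100.6])

# Parkinson range-based volatility per day: sqrt(ln(H/L)^2 / (4 ln 2))
park = np.sqrt(np.log(high / low) ** 2 / (4 * np.log(2)))

# One-step CARR(1,1) recursion for the conditional range,
# lambda_t = omega + alpha * R_{t-1} + beta * lambda_{t-1},
# with illustrative (not fitted) parameters.
omega, alpha, beta = 0.002, 0.15, 0.80
lam = np.empty_like(park)
lam[0] = park.mean()
for t in range(1, len(park)):
    lam[t] = omega + alpha * park[t - 1] + beta * lam[t - 1]

print("Parkinson vol: ", np.round(park, 4))
print("CARR cond. vol:", np.round(lam, 4))
```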
Procedia PDF Downloads 99
1300 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide new services that assist drivers. This paper focuses on one application of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed that recommends an optimal speed, continuously updated at every time step, for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping, or to minimise stopping times at a red phase. The study assumes a 100% penetration rate of connected vehicles; conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is run under traffic volumes of 150 and 200 vehicles per hour per lane to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under 150 and 200 vehicles per hour per lane, respectively.
Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication
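The paper's control algorithm is not detailed in the abstract; the sketch below shows a minimal GLOSA-style advisory-speed rule, assuming each vehicle knows its distance to the stop line and the times at which the green phase starts and ends. The function name, speed bounds and example numbers are illustrative assumptions, not values from the study.

```python
def glosa_advisory_speed(distance_m, time_to_green_s, time_to_red_s,
                         v_min=5.0, v_max=13.9):
    """Return an advisory speed (m/s) so the vehicle reaches the stop line
    during a green phase, if such a speed exists within [v_min, v_max].

    time_to_green_s: seconds until the green phase starts (0 if already green).
    time_to_red_s:   seconds until that green phase ends.
    """
    # Fastest arrival must not occur before the green starts ...
    v_upper = distance_m / time_to_green_s if time_to_green_s > 0 else v_max
    # ... and slowest arrival must not occur after the green ends.
    v_lower = distance_m / time_to_red_s if time_to_red_s > 0 else v_min

    v_hi = min(v_max, v_upper)
    v_lo = max(v_min, v_lower)
    if v_lo <= v_hi:
        return v_hi        # prefer the fastest feasible speed (least delay)
    return None            # no feasible speed: the vehicle will have to stop

# Example: 200 m from the signal, green starts in 10 s and ends in 30 s.
print(glosa_advisory_speed(200.0, 10.0, 30.0))   # -> 13.9 (capped at v_max)
```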
Procedia PDF Downloads 171
1299 American Sign Language Recognition System
Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba
Abstract:
The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real-time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
Keywords: sign language, computer vision, vision transformer, VGG16, CNN
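As an illustration of the CNN-plus-transformer fusion idea (not the authors' exact architecture), the sketch below feeds VGG16 convolutional features as tokens into a small transformer encoder in PyTorch. The class count, layer sizes and model name are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ASLFusionNet(nn.Module):
    """Minimal CNN + transformer fusion for sign classification.

    VGG16 convolutional features are treated as a 7x7 grid of tokens and
    passed through a small transformer encoder before classification.
    Illustrative sketch only, not the architecture described in the paper.
    """
    def __init__(self, num_classes=26, d_model=512):
        super().__init__()
        self.backbone = vgg16(weights=None).features          # -> [B, 512, 7, 7]
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                          # x: [B, 3, 224, 224]
        feats = self.backbone(x)                   # [B, 512, 7, 7]
        tokens = feats.flatten(2).transpose(1, 2)  # [B, 49, 512]
        encoded = self.encoder(tokens)             # contextualised tokens
        return self.head(encoded.mean(dim=1))      # pooled classification logits

model = ASLFusionNet()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 26])
```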
Procedia PDF Downloads 43
1298 Poly(Ethylene Glycol)-Silicone Containing Phase Change Polymer for Thermal Energy Storage
Authors: Swati Sundararajan, Asit B. Samui, Prashant S. Kulkarni
Abstract:
The global energy crisis has led to extensive research on alternative sources of energy. The gap between energy supply and demand can be met by thermal energy storage techniques, of which latent heat storage, in the form of phase change materials (PCMs), is the most effective. Phase change materials store energy as latent heat absorbed or released over a narrow temperature range while the material undergoes a phase transformation. The stored latent heat can be utilized for heating or cooling purposes, or converted to electricity, all of which reduces the load on electricity demand. These materials retain this property over repeated cycles. Different PCMs differ in their phase change temperatures and heat storage capacities. Poly(ethylene glycol) (PEG) was cross-linked to hydroxyl-terminated poly(dimethyl siloxane) (PDMS) in the presence of the cross-linker tetraethyl orthosilicate (TEOS) and the catalyst dibutyltin dilaurate. Four different ratios of PEG and PDMS were reacted together, and the composition with the lowest PEG concentration resulted in the formation of a flexible solid-solid phase change membrane; the other compositions were obtained in powder form. The enthalpy values of the prepared PCMs were studied using differential scanning calorimetry, and the crystallization properties were analyzed using X-ray diffraction and polarized optical microscopy. The incorporation of the silicone moiety was expected to reduce the hydrophilic character of PEG; this was evaluated by contact angle measurements. The membrane-forming ability of this crosslinked polymer can be extended to several smart packaging, building and textile applications. The detailed synthesis, characterization and performance evaluation of the crosslinked polymer blend will be incorporated in the presentation.
Keywords: phase change materials, poly(ethylene glycol), poly(dimethyl siloxane), thermal energy storage
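For readers unfamiliar with how latent heat storage is sized, the energy stored across a melting transition is the sum of sensible and latent contributions, Q = m[cp,s(Tm − T1) + ΔHm + cp,l(T2 − Tm)]. The sketch below evaluates this with hypothetical values; the specific heats, enthalpy and temperatures are illustrative and are not the DSC measurements reported in the study.

```python
# Hypothetical values for illustration; not measurements from the study.
mass_g = 100.0          # PCM mass
cp_solid = 2.0          # J/(g*K), specific heat below the transition
cp_liquid = 2.2         # J/(g*K), specific heat above the transition
latent_heat = 160.0     # J/g, melting enthalpy from DSC
T_start, T_melt, T_end = 25.0, 55.0, 70.0   # deg C

# Total heat stored = sensible (solid) + latent + sensible (liquid)
q_joule = mass_g * (cp_solid * (T_melt - T_start)
                    + latent_heat
                    + cp_liquid * (T_end - T_melt))
print(f"Stored thermal energy: {q_joule/1000:.1f} kJ")
```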
Procedia PDF Downloads 354
1297 Defining New Limits in Hybrid Perovskites: Single-Crystal Solar Cells with Exceptional Electron Diffusion Length Reaching Half Millimeters
Authors: Bekir Turedi
Abstract:
Exploiting the potential of perovskite single-crystal solar cells in optoelectronic applications necessitates overcoming a significant challenge: the low charge collection efficiency at increased thickness, which has restricted their deployment in radiation detectors and nuclear batteries. Our research details a promising approach to this problem, wherein we have successfully fabricated single-crystal MAPbI3 solar cells employing a space-limited inverse temperature crystallization (ITC) methodology. Remarkably, these cells, up to 400-fold thicker than current-generation perovskite polycrystalline films, maintain a high charge collection efficiency even without external bias. The crux of this achievement lies in the long electron diffusion length within these cells, estimated to be around 0.45 mm. This extended diffusion length preserves high charge collection and power conversion efficiencies even as the thickness of the cells increases. Fabricated cells of 110, 214, and 290 µm thickness exhibited power conversion efficiencies (PCEs) of 20.0, 18.4, and 14.7%, respectively. The single crystals demonstrated nearly optimal charge collection even when their thickness exceeded 200 µm: devices of 108, 214, and 290 µm thickness maintained 98.6, 94.3, and 80.4% charge collection efficiency relative to their maximum theoretical short-circuit current values, respectively. Additionally, we propose a self-consistent technique for determining the electron diffusion length in perovskite single crystals under operational conditions. The computed electron diffusion length was approximately 446 µm, significantly surpassing previously reported values for this material. In conclusion, our findings underscore the feasibility of fabricating halide perovskite single-crystal solar cells hundreds of micrometers thick while preserving high charge extraction efficiency and PCE. This advancement paves the way for developing perovskite-based optoelectronics that necessitate thicker active layers, such as X-ray detectors and nuclear batteries.
Keywords: perovskite, solar cell, single crystal, diffusion length
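The abstract's self-consistent diffusion-length technique is not detailed here; a commonly used simplification is to fit a Hecht-type collection expression to collection efficiency versus absorber thickness. The sketch below demonstrates such a fit with scipy on hypothetical numbers, purely as an illustration of the fitting procedure rather than the authors' method or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hecht_collection(thickness_um, diff_length_um):
    """Hecht-type charge collection efficiency vs. absorber thickness."""
    x = thickness_um / diff_length_um
    return (1.0 / x) * (1.0 - np.exp(-x))

# Hypothetical thickness / collection-efficiency pairs (illustration only;
# the paper uses its own self-consistent estimation technique).
thickness = np.array([100.0, 200.0, 300.0])      # micrometres
efficiency = np.array([0.90, 0.81, 0.73])

(L_fit,), _ = curve_fit(hecht_collection, thickness, efficiency, p0=[200.0])
print(f"Fitted diffusion length: {L_fit:.0f} um")
```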
Procedia PDF Downloads 52