Search results for: intelligent techniques
5796 Al-Ti-W Metallic Glass Thin Films Deposited by Magnetron Sputtering Technology to Protect Steel Against Hydrogen Embrittlement
Authors: Issam Lakdhar, Akram Alhussein, Juan Creus
Abstract:
With the huge increase in world energy consumption, researchers are working to find alternative energy sources to replace fossil fuels, which cause many environmental problems such as the production of greenhouse gases. Hydrogen is considered a green energy source whose combustion does not cause environmental pollution. The transport and storage of the gas molecules, or of other products containing this smallest chemical element, in metallic structures (pipelines, tanks) are crucial issues. The dissolution and permeation of hydrogen into the metal lattice lead to the formation of hydride phases and the embrittlement of structures. To protect the metallic structures, a surface treatment could be a good solution. Among the different techniques, magnetron sputtering is used to elaborate micrometric coatings capable of slowing down or stopping hydrogen permeation. In the plasma environment, the deposition parameters of the new Al-Ti-W thin-film metallic glasses were optimized and controlled in order to obtain a hydrogen barrier. Many characterizations were carried out (SEM, XRD, nano-indentation, etc.) to control the composition and to understand the influence of film microstructure and chemical composition on hydrogen permeation through the coatings. The coating performance was evaluated under two hydrogen production methods: chemical and electrochemical (cathodic protection) techniques. The hydrogen quantity absorbed was experimentally determined using the Thermal Desorption Spectroscopy (TDS) method. An ideal ATW thin film was developed and showed excellent behavior against the diffusion of hydrogen.
Keywords: thin films, hydrogen, PVD, plasma technology, electrochemical properties
Procedia PDF Downloads 184
5795 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
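To make the GSC structure above concrete, the following minimal NumPy sketch builds the quiescent weight vector and blocking matrix from a presumed steering vector and runs an LMS adaptive stage. The array geometry, step size, and signal scenario are illustrative assumptions, not the authors' simulation setup (which additionally includes iterative steering-vector re-estimation and a signal subspace projection).

```python
import numpy as np

def gsc_beamformer(X, a, mu=1e-3):
    """Basic GSC: the quiescent branch preserves the presumed look direction,
    the blocking matrix removes it, and LMS adapts the lower branch."""
    M = X.shape[0]
    w_q = a / (a.conj() @ a)                        # quiescent weights: w_q^H a = 1
    P = np.eye(M) - np.outer(a, a.conj()) / (a.conj() @ a)
    B = np.linalg.svd(P)[0][:, : M - 1]             # blocking matrix: B^H a = 0
    w_a = np.zeros(M - 1, dtype=complex)
    for k in range(X.shape[1]):
        d = w_q.conj() @ X[:, k]                    # signal-preserving branch
        u = B.conj().T @ X[:, k]                    # signal-blocked branch
        e = d - w_a.conj() @ u                      # beamformer output
        w_a = w_a + mu * u * np.conj(e)             # LMS update
    return w_q - B @ w_a

# toy scenario: 8-element half-wavelength ULA, SOI at broadside, interferer at 40 deg
M, K = 8, 4000
rng = np.random.default_rng(0)
a0 = np.exp(1j * np.pi * np.arange(M) * np.sin(0.0))
ai = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(40)))
X = (np.outer(a0, rng.standard_normal(K)) + np.outer(ai, rng.standard_normal(K))
     + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))))
w = gsc_beamformer(X, a0)
print(abs(w.conj() @ a0), abs(w.conj() @ ai))  # =1 toward SOI (distortionless), reduced toward interferer
```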
Procedia PDF Downloads 112
5794 Numerical Modelling of Immiscible Fluids Flow in Oil Reservoir Rocks during Enhanced Oil Recovery Processes
Authors: Zahreddine Hafsi, Manoranjan Mishra, Sami Elaoud
Abstract:
Ensuring the maximum recovery rate of oil from reservoir rocks is a challenging task that requires preliminary numerical analysis of the different techniques used to enhance the recovery process. After conventional oil recovery processes, and in order to retrieve oil left behind after the primary recovery phase, water flooding is one of several techniques used for enhanced oil recovery (EOR). In this research work, EOR via water flooding is numerically modeled, and the hydrodynamic instabilities resulting from immiscible oil-water flow in reservoir rocks are investigated. An oil reservoir is a porous medium consisting of many fractures of tiny dimensions. For modeling purposes, the oil reservoir is considered a collection of capillary tubes, which provides useful insights into how fluids behave in the reservoir pore spaces. Equations governing oil-water flow in oil reservoir rocks are developed and numerically solved following a finite element scheme. Numerical results are obtained using COMSOL Multiphysics software. The two-phase Darcy module of COMSOL Multiphysics allows modelling the imbibition process by the injection of water (as the wetting phase) into an oil reservoir. The Van Genuchten, Brooks-Corey and Leverett models were considered as retention models; the obtained flow configurations are compared, and the governing parameters are discussed. For the considered retention models, it was found that the onset of instabilities, viz. the fingering phenomenon, is highly dependent on the capillary pressure as well as the boundary conditions, i.e., the inlet pressure and the injection velocity.
Keywords: capillary pressure, EOR process, immiscible flow, numerical modelling
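As a small illustration of how two of the retention models named above relate capillary pressure to effective saturation, the sketch below evaluates the Van Genuchten and Brooks-Corey closed forms; the parameter values are arbitrary placeholders, not the reservoir properties used in the COMSOL study.

```python
import numpy as np

def van_genuchten_pc(se, alpha=2.0, n=2.5):
    """Van Genuchten: Se = [1 + (alpha*pc)^n]^(-m) inverted for pc, m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (1.0 / alpha) * (se ** (-1.0 / m) - 1.0) ** (1.0 / n)

def brooks_corey_pc(se, pe=1.0, lam=2.0):
    """Brooks-Corey: pc = pe * Se^(-1/lambda), with pe the entry pressure."""
    return pe * se ** (-1.0 / lam)

se = np.linspace(0.05, 0.95, 10)   # effective (normalized) wetting-phase saturation
print(np.round(van_genuchten_pc(se), 3))
print(np.round(brooks_corey_pc(se), 3))
```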
Procedia PDF Downloads 131
5793 Noise Mitigation Techniques to Minimize Electromagnetic Interference/Electrostatic Discharge Effects for the Lunar Mission Spacecraft
Authors: Vabya Kumar Pandit, Mudit Mittal, N. Prahlad Rao, Ramnath Babu
Abstract:
TeamIndus is the only Indian team competing for the Google Lunar XPRIZE (GLXP). The GLXP is a global competition challenging private entities to soft-land a rover on the moon, travel a minimum of 500 meters and transmit high-definition images and videos to Earth. Towards this goal, the TeamIndus strategy is to design and develop a lunar lander that will deliver a rover onto the surface of the moon to accomplish the GLXP mission objectives. This paper showcases the various system-level noise control techniques adopted by the Electrical Distribution System (EDS) to achieve the required Electromagnetic Compatibility (EMC) of the spacecraft. The design guidelines followed to control Electromagnetic Interference by proper electronic package design, grounding, shielding, filtering, and cable routing within the stipulated mass budget are explained. The paper also deals with the challenges of achieving Electromagnetic Cleanliness in the presence of various Commercial Off-The-Shelf (COTS) and in-house developed components. The methods of minimizing Electrostatic Discharge (ESD) by identifying the potential noise sources and the areas susceptible to charge accumulation, and the methodology to prevent arcing inside the spacecraft, are explained. The paper then provides the EMC requirements matrix derived from the mission requirements to meet the overall electromagnetic compatibility of the spacecraft.
Keywords: electromagnetic compatibility, electrostatic discharge, electrical distribution systems, grounding schemes, light weight harnessing
Procedia PDF Downloads 293
5792 On the Use of Machine Learning for Tamper Detection
Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode
Abstract:
The attack surface on computing devices is becoming very sophisticated, driven by the sheer increase of interconnected devices, reaching 50B in 2025, which makes it easier for adversaries to gain direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or those used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering. It consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks and identify new types of malicious behaviors. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and four further forms of physical attacks. In addition, a systematic threat modeling analysis and security validation was carried out, which indicated that the proposed solution provides protection against a number of threats, including information leakage, loss of data, and disruption of operation.
Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT
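A minimal sketch of the two-stage idea described above — a novelty detector trained on normal operation plus a supervised classifier for known attack classes — is shown below with scikit-learn. The sensor features, class labels, and confidence threshold are invented for illustration and are not the authors' prototype.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
# hypothetical per-reading features: [light level, temperature, vibration]
normal  = rng.normal([10, 25, 0.1], 0.5, size=(200, 3))   # enclosure intact
drill   = rng.normal([10, 31, 5.0], 0.5, size=(50, 3))    # drilling attack
opening = rng.normal([80, 25, 0.5], 0.5, size=(50, 3))    # lid removed

X = np.vstack([normal, drill, opening])
y = np.array([0] * 200 + [1] * 50 + [2] * 50)             # 0=normal, 1=drill, 2=open

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
novelty = IsolationForest(random_state=0).fit(normal)     # knows only normal operation

reading = np.array([[10, 31, 5.2]])
if novelty.predict(reading)[0] == 1:                      # +1 = consistent with normal
    print("normal operation")
else:
    proba = clf.predict_proba(reading)[0]                 # known-attack probabilities
    k = int(proba.argmax())
    print("new attack type" if proba[k] < 0.6 else f"known attack class {k}")
```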
Procedia PDF Downloads 153
5791 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; then the acquired imagery undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions were within less than a few centimeters, and insignificant differences, within less than three seconds, among orientation angles were found. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
Keywords: UAV, photogrammetry, SfM, DEM
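The DEM-differencing step at the end of the workflow reduces to per-cell statistics on two co-registered elevation grids; a minimal sketch with synthetic grids (the real comparison used the PhotoScan/VisualSFM/Imagine outputs) is:

```python
import numpy as np

# synthetic stand-ins for two gridded DEMs over the same extent (elevations in metres)
rng = np.random.default_rng(2)
dem_photogrammetry = 100 + rng.normal(0, 0.5, size=(200, 200))
dem_sfm = dem_photogrammetry + 0.03 + rng.normal(0, 0.02, size=(200, 200))

diff = dem_sfm - dem_photogrammetry              # per-cell vertical difference
print(f"mean vertical shift: {diff.mean():.3f} m")
print(f"RMSE:                {np.sqrt((diff ** 2).mean()):.3f} m")
```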
Procedia PDF Downloads 295
5790 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques
Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa
Abstract:
This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and gave the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were without multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked using the generalized pair correlation method (GPCM). With this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were ranked using the sum of ranking differences (SRD) method. According to this method, the most consistent QSRR model can be found, and similarity or dissimilarity between the models can be noticed. In this study, SRD was performed using the average values of the experimentally observed data as a golden standard. Chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action (CM1105), supported by COST (European Cooperation in Science and Technology).
Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences
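The multicollinearity check mentioned above can be reproduced in a few lines: the VIF of descriptor j is 1/(1 - R²_j), where R²_j comes from regressing descriptor j on the remaining descriptors. The sketch below uses synthetic descriptors, not the steroid data set; a VIF above the common rule-of-thumb value of 10 flags multicollinearity.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of descriptor matrix X (n, p)."""
    Xc = (X - X.mean(0)) / X.std(0)                 # standardize descriptors
    out = []
    for j in range(Xc.shape[1]):
        others = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, Xc[:, j], rcond=None)
        resid = Xc[:, j] - others @ beta
        r2 = 1 - resid.var() / Xc[:, j].var()       # R^2 of column j vs the rest
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))
X = np.column_stack([X, X[:, 0] + 0.1 * rng.standard_normal(50)])  # near-duplicate column
print(np.round(vif(X), 2))   # the first and last columns show inflated VIF
```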
Procedia PDF Downloads 347
5789 Comparative Evaluation of Vanishing Interfacial Tension Approach for Minimum Miscibility Pressure Determination
Authors: Waqar Ahmad Butt, Gholamreza Vakili Nezhaad, Ali Soud Al Bemani, Yahya Al Wahaibi
Abstract:
Minimum miscibility pressure (MMP) plays a great role in determining the displacement efficiency of different gas injection processes. Experimental techniques for MMP determination include the industrially recommended slim tube, vanishing interfacial tension (VIT) and the rising bubble apparatus (RBA). In this paper, an MMP measurement study using the slim tube and VIT experimental techniques for two different crude oil samples (M and N), both in live and stock tank oil forms, is presented. VIT-measured MMP values for both 'M' and 'N' live crude oils were close to the slim tube determined MMP values, with 6.4% and 5% deviation, respectively. In contrast, for both oil samples in stock tank oil form, the VIT-measured MMP showed a higher, unacceptable deviation from the slim tube determined MMP. This higher difference appears to be related to the high fraction of stabilized heavier ends in the crude oil and the lack of multiple-contact miscibility. None of the nine deployed crude oil and CO2 MMP correlations could produce a reliable MMP close to the slim tube determined value. Since the VIT-determined MMP values for both considered live crude oils closely match the slim tube determined values, VIT is confirmed as a reliable, reproducible, rapid and cheap alternative for live crude oil MMP determination. VIT MMP determination for the stock tank oil case needs further investigation into the stabilization/destabilization mechanism of the oil's heavier ends and the development of multiple-contact miscibility.
Keywords: minimum miscibility pressure, interfacial tension, multiple contacts miscibility, heavier ends
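The VIT principle itself is easy to state: measure the interfacial tension at several pressures and extrapolate the trend to zero IFT. A toy sketch follows; the pressure/IFT pairs below are made up for illustration, not the M or N crude data.

```python
import numpy as np

# hypothetical IFT measurements (mN/m) of an oil/CO2 pair versus pressure (MPa)
pressure = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
ift      = np.array([6.1, 4.6, 3.2, 1.9, 0.6])

slope, intercept = np.polyfit(pressure, ift, 1)   # linear trend of IFT vs pressure
mmp = -intercept / slope                          # pressure at which IFT vanishes
print(f"VIT-extrapolated MMP ~ {mmp:.1f} MPa")
```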
Procedia PDF Downloads 268
5788 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users' Feedback
Authors: P. Nafisi Poor, P. Javid
Abstract:
Over the past few years, the idea of adaptive buildings and, specifically, adaptive building energy management systems (ABEMS) has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor, and the user's comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied or not. This paper provides a comprehensive review of recent approaches towards energy management systems based on users' feedback and subsequently performs a comparison between them premised upon their efficiency and accuracy, to understand which approaches were more accurate and which ones resulted in a more efficient way of minimizing energy consumption while maintaining users' comfort. It was concluded that the highest accuracy rate among the presented works was 95% in determining satisfaction, and up to 51.08% energy savings can be achieved without disturbing the user's comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries about this subject and can be used as a resource to support studies and research towards efficient energy consumption while maintaining the comfort of users.
Keywords: adaptive buildings, energy efficiency, intelligent buildings, user comfortability
Procedia PDF Downloads 133
5787 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Authors: Songul Cinaroglu
Abstract:
Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 different Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing them into two clusters in terms of similarities of input and output indicators. The numbers of beds, physicians and nurses were determined as input variables, and the numbers of outpatients, inpatients and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs which have higher population, demand and service density than the others. The difference between clusters was statistically significant in terms of all study variables (p < 0.001). After clustering, DEA was performed in general and for the two clusters separately. It was found that 11% of PHUs were efficient in general; additionally, 21% and 17% of them were efficient in the first and second clusters, respectively. PHUs representing the urban parts of the country, with higher population and service density, are more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of the efficiency of PHUs. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
Keywords: public hospital unions, efficiency, data envelopment analysis, random forest
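For readers unfamiliar with DEA, the input-oriented CCR efficiency of each unit is the optimum of a small linear program. A sketch with SciPy follows; the three-unit data set below is invented and far smaller than the 89 PHUs analysed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n, m) inputs, Y: (n, s) outputs; returns scores in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):                            # one LP per decision-making unit
        c = np.zeros(n + 1)
        c[0] = 1.0                                # variables: [theta, lambda_1..lambda_n]
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                       # sum_j lambda_j x_j - theta x_o <= 0
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                       # -sum_j lambda_j y_j <= -y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# toy PHU data: inputs = [beds, physicians, nurses], outputs = [outpatients, inpatients, surgeries]
X = np.array([[100, 20, 60], [150, 25, 80], [80, 15, 50]], dtype=float)
Y = np.array([[5000, 900, 300], [6000, 1000, 350], [4500, 850, 280]], dtype=float)
print(np.round(dea_ccr_input(X, Y), 3))          # a score of 1.0 marks an efficient unit
```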
Procedia PDF Downloads 126
5786 QF-PCR as a Rapid Technique for Routine Prenatal Diagnosis of Fetal Aneuploidies
Authors: S. H. Atef
Abstract:
Background: The most common chromosomal abnormalities identified at birth are aneuploidies of chromosomes 21, 18, 13, X and Y. Prenatal diagnosis of fetal aneuploidies is routinely done by traditional cytogenetic culture; a major drawback of this technique is the long period of time required to reach a diagnosis. In this study, we evaluated QF-PCR as a rapid technique for the prenatal diagnosis of common aneuploidies. Method: This work was carried out on sixty amniotic fluid samples taken from patients with one or more of the following indications: advanced maternal age (3 cases), abnormal biochemical markers (6 cases), abnormal ultrasound (12 cases) or a previous history of an abnormal child (39 cases). Each sample was tested by QF-PCR and traditional cytogenetics. Aneuploidy screenings were performed by amplifying four STRs on chromosomes 21, 18 and 13, two pseudoautosomal markers, one X-linked marker, as well as AMXY and SRY; the markers were distributed in two multiplex QF-PCR assays (S1 and S2) in order to reduce the risk of sample mishandling. Results: All the QF-PCR results were successful, while there were two culture failures, only one of which was repeated. No discrepancy was seen between the results of the two techniques. Fifty-six samples showed normal patterns; three samples showed trisomy 21, successfully detected by both techniques; and one sample showed a normal pattern by QF-PCR but could not be compared to the cytogenetics due to culture failure — the pregnancy outcome of this case was a normal baby. Conclusion: Our study concluded that QF-PCR is a reliable technique for the prenatal diagnosis of the common chromosomal aneuploidies. It has advantages over cytogenetic culture: it is faster, with results appearing within 24-48 hours, simpler, does not need highly qualified staff, is less prone to failure and is more cost-effective.
Keywords: QF-PCR, traditional cytogenetics, fetal aneuploidies, trisomy 21, prenatal diagnosis
Procedia PDF Downloads 417
5785 Design and Fabrication of a Programmable Stiffness-Sensitive Gripper for Object Handling
Authors: Mehdi Modabberifar, Sanaz Jabary, Mojtaba Ghodsi
Abstract:
Stiffness sensing is an important issue in medical diagnostics, robotic surgery, safe handling, and safe grasping of objects in production lines. Detecting and characterizing lumps embedded in soft tissue, and safely removing and handling the detected lumps, are needed in surgery. Also, in industry, grasping and handling an object without damage in a place inaccessible to a human operator is very important. In this paper, a method for object handling is presented. It is based on the use of an intelligent gripper to detect the object's stiffness and then set a programmable force for grasping the object to move it. The main components of this system include sensors (for measuring force and displacement), electrical components (electrical and electronic circuits, tactile data processing and the force control system), mechanical components (the gripper mechanism and its driving system) and the display unit. The system uses a rotary potentiometer for measuring gripper displacement. A microcontroller, using the feedback received from the load cell mounted on the finger of the gripper, calculates the stiffness and then commands the gripper motor to apply a certain force on the object. Results of experiments on samples with different stiffness show that the gripper works successfully. The gripper can be used in haptic interfaces or robotic systems used for object handling.
Keywords: gripper, haptic, stiffness, robotic
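The control logic described — estimate stiffness from load-cell force and potentiometer displacement, then command a grasp force — can be sketched in a few lines; the thresholds and force levels below are illustrative assumptions, not the calibrated values of the prototype.

```python
def estimate_stiffness(forces_n, displacements_mm):
    """Least-squares slope of force vs displacement = object stiffness (N/mm)."""
    n = len(forces_n)
    sx = sum(displacements_mm)
    sy = sum(forces_n)
    sxx = sum(x * x for x in displacements_mm)
    sxy = sum(x * y for x, y in zip(displacements_mm, forces_n))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def grip_force(stiffness, soft_limit=2.0, firm_limit=20.0):
    """Map sensed stiffness to a programmable grasp force (hypothetical thresholds)."""
    if stiffness < soft_limit:
        return 1.0        # fragile object: light grasp
    if stiffness < firm_limit:
        return 5.0        # intermediate stiffness
    return 15.0           # rigid object: firm grasp

# samples from the load cell / rotary potentiometer during a probing squeeze
disp = [0.0, 0.5, 1.0, 1.5, 2.0]     # mm
force = [0.0, 1.1, 2.0, 2.9, 4.1]    # N
k = estimate_stiffness(force, disp)
print(f"stiffness ~ {k:.2f} N/mm -> command {grip_force(k)} N")
```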
Procedia PDF Downloads 358
5784 Applications of Out-of-Sequence Thrust Movement for Earthquake Mitigation: A Review
Authors: Rajkumar Ghosh
Abstract:
The study presents an overview of the many uses of and approaches to estimating out-of-sequence thrust movement in earthquake mitigation. It investigates how understanding and forecasting thrust movement during seismic occurrences might contribute to effective earthquake mitigation measures. The review begins by discussing out-of-sequence thrust movement and its importance in earthquake mitigation strategies. It explores how typical techniques of estimating thrust movement may not capture the full complexity of seismic occurrences and emphasizes the benefits of including out-of-sequence data in the analysis. A thorough review of existing research and studies on out-of-sequence thrust movement estimates for earthquake mitigation is provided. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources such as GPS measurements, satellite imagery, and seismic recordings. The study also examines the use of out-of-sequence thrust movement estimates in earthquake mitigation measures. It investigates how precise calculation of thrust movement may help improve structural design, analyse infrastructure risk, and develop early warning systems, and highlights the potential advantages of using out-of-sequence data in these applications to improve the efficiency of earthquake mitigation techniques. The difficulties and limits of estimating out-of-sequence thrust movement for earthquake mitigation are also addressed, including data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and increase the accuracy and reliability of out-of-sequence thrust movement estimates, the authors recommend topics for additional study and improvement. The study is a helpful resource for seismic monitoring and earthquake risk assessment researchers, engineers, and policymakers, supporting innovations in earthquake mitigation measures based on a better knowledge of thrust movement dynamics.
Keywords: earthquake mitigation, out-of-sequence thrust, satellite imagery, seismic recordings, GPS measurements
Procedia PDF Downloads 84
5783 Check Red Blood Cells Concentrations of a Blood Sample by Using Photoconductive Antenna
Authors: Ahmed Banda, Alaa Maghrabi, Aiman Fakieh
Abstract:
The terahertz (THz) range lies between 0.1 and 10 THz. The process of generating and detecting THz radiation can be accomplished through different techniques, one of the most familiar being the photoconductive antenna (PCA). Generating THz radiation with a PCA involves applying a femtosecond laser pump and a DC voltage difference. The photocurrent generated at the PCA is affected by different parameters (e.g., dielectric properties, DC voltage difference and incident power of the laser pump). THz radiation is used for biomedical applications; however, different biomedical fields need new technologies to meet patients' needs (e.g., blood-related conditions). In this work, a novel method to check the red blood cell (RBC) concentration of a blood sample using a PCA is presented. RBCs constitute 44% of the total blood volume. RBCs contain hemoglobin, which transfers oxygen from the lungs to the body's organs and then returns to the lungs carrying carbon dioxide, which the body gets rid of in the process of exhalation. The configuration has been simulated and optimized using COMSOL Multiphysics. Variation of the RBC concentration affects the sample's dielectric properties (e.g., the relative permittivity of RBCs in the blood sample). The effects of four blood samples (with different concentrations of RBCs) on the photocurrent value were tested. The photocurrent peak value and the RBC concentration are inversely proportional to each other due to the change in the dielectric properties of RBCs. It was noticed that the photocurrent peak value dropped from 162.99 nA to 108.66 nA when the RBC concentration rose from 0% to 100% of the blood sample. The optimization of this method can help launch new products for diagnosing blood-related conditions (e.g., anemia and leukemia). The resultant electric field from the DC components cannot be used to count the RBCs of the blood sample.
Keywords: biomedical applications, photoconductive antenna, photocurrent, red blood cells, THz radiation
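Using only the two endpoint readings reported above (162.99 nA at 0% RBC and 108.66 nA at 100%), a crude inversion from photocurrent to concentration can be sketched. The linear interpolation between the endpoints is an assumption for illustration — the abstract only states that the two quantities are inversely related.

```python
def rbc_concentration(photocurrent_na, i_0=162.99, i_100=108.66):
    """Invert the photocurrent-vs-RBC trend reported in the abstract.
    Assumes an approximately linear drop between the two reported endpoints."""
    frac = (i_0 - photocurrent_na) / (i_0 - i_100)
    return max(0.0, min(1.0, frac)) * 100.0       # clamp to the 0-100 % range

for i in (162.99, 140.0, 120.0, 108.66):
    print(f"{i:7.2f} nA -> {rbc_concentration(i):5.1f} % RBC")
```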
Procedia PDF Downloads 205
5782 Teacher Training Course: Conflict Resolution through Mediation
Authors: Csilla Marianna Szabó
Abstract:
In Hungary, society has changed a lot over the past 25 years, and these changes can be detected in educational situations as well. The number and the intensity of conflicts have increased in most fields of life, including at schools. Teachers have difficulty handling school conflicts. What is more, the new net generation, Generation Z, has values and behavioural patterns different from those of the previous generation, which might generate more serious conflicts at school, especially with teachers who were mainly socialized in traditional teacher-student relationships. In Hungary, Act CCIV of 2011 declared the foundation of Institutes of Teacher Training in higher education institutions. One of the tasks of these Institutes is to survey the competences and needs of teachers working in public education and to provide further training and services for them according to their needs and requirements. This work is supported by the Social Renewal Operative Programs 4.1.2.B. The Institute of Teacher Training at the College of Dunaújváros, Hungary, carried out a questionnaire survey of the needs and requirements of teachers working in the Central Transdanubian region. Based on the results, the professors of the Institute of Teacher Training decided to meet the requirements of teachers and launch short courses in spring 2015. One of the courses will focus on school conflict management through mediation. The aim of the pilot course is to provide teachers with conflict management techniques by presenting different mediation techniques to them. The theoretical part of the course (5 hours) will enable participants to understand the main points and advantages of mediation, while the practical part (10 hours) will involve teachers in role plays to learn how to cope with conflict situations by applying mediation. We hope that if conflicts can be reduced, the school atmosphere will be influenced in a positive way, and the teaching-learning process can become more successful and effective.
Keywords: conflict resolution, generation Z, mediation, teacher training
Procedia PDF Downloads 410
5781 Influence of the Cooking Technique on the Iodine Content of Frozen Hake
Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre
Abstract:
The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Generally, seafood is consumed cooked; consequently, its nutritional value could be degraded. Seafood, such as fish, shellfish, and seaweed, can be considered one of the main iodine sources. Deficient or excessive consumption of iodine can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate iodine stability in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were: boiling, steaming, microwave cooking, baking, cooking en papillote (a twisted cover with the shape of a sweet wrapper) and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has been demonstrated to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques. Extraction with tetramethylammonium hydroxide (TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that the stability of iodine was degraded by the cooking processes. The major degradation was observed for the boiling and microwave cooking processes: the iodine content in hake decreased by up to 60% and 52%, respectively. However, if the cooking liquid from boiling is preserved, the loss generated during cooking is reduced. Only when the fish was cooked following the en papillote process was the iodine content preserved.
Keywords: cooking process, ICP-MS, iodine, hake
Procedia PDF Downloads 141
5780 Exploring the 'Many Worlds' Interpretation in Both a Philosophical and Creative Literary Framework
Authors: Jane Larkin
Abstract:
Combining elements of philosophy, science, and creative writing, this investigation explores how a philosophically structured science-fiction novel can challenge the theory of the linearity and singularity of time through the 'many worlds' theory. This concept is addressed through the creation of a research exegesis and an accompanying creative artefact, designed to be read in conjunction with each other in an explorative, interwoven manner. Research undertaken into scientific concepts, such as the 'many worlds' interpretation of quantum mechanics, and into diverse philosophers and their ideologies on time is embodied in an original science-fiction narrative titled It Goes On. The five frames that make up the creative artefact are enhanced not only by five leading philosophers and their philosophies on time but by an appreciation of the research, which comes first in the paper. Research into traditional approaches to storytelling is creatively and innovatively inverted in several ways, thus challenging the singularity and linearity of time. Further unconventional approaches to literary techniques include an abstract narrator, embodied by time, a concept, and a figure in the text, whose voice and vantage point in relation to death further the unreliability of the notion of time. These techniques further challenge individuals' understanding of complex scientific and philosophical views in a variety of ways. The science-fiction genre is essential when considering the speculative nature of It Goes On, which deals with parallel realities and is a fantastical exploration of human ingenuity in plausible futures. This paper therefore documents the research-led methodology used to create It Goes On, the application of the 'many worlds' theory within a framed narrative, and the many innovative techniques used to contribute new knowledge in a variety of fields.
Keywords: time, many-worlds theory, Heideggerian philosophy, framed narrative
Procedia PDF Downloads 84
5779 Expert System for Road Bridge Constructions
Authors: Michael Dimmer, Holger Flederer
Abstract:
The basis of realizing a construction project is a technically flawless concept which satisfies environmental and cost conditions as well as static-constructional requirements. The presented software system actively supports civil engineers during the setup of optimal designs by giving advice regarding durability, life-cycle costs, sustainability and much more. A major part of the surrounding conditions of a design process is gathered and assimilated by experienced engineers subconsciously. It is a question of eligible building techniques and their practicability, considering the costs involved. Planning engineers have acquired much of this experience during their professional life and use it in their daily work. Occasionally, the planning engineer should dissociate himself from his experience to be open to new and better solutions which meet the functional demands as well. The developed expert system gives planning engineers recommendations for preferred design options for new constructions as well as for existing bridge constructions. It is possible to analyze construction elements and techniques regarding sustainability and life-cycle costs; this way, the software provides recommendations for future constructions. Furthermore, there is an option to assess existing road bridges, especially for heavy-duty transport. This includes a route planning tool to get quick and reliable information as to whether the bridge support structures of a transport route have been dimensioned sufficiently for a certain heavy-duty transport. The use of this expert system in bridge planning companies and building authorities will save costs massively for new and existing bridge constructions. This is achieved by consistently considering parameters like life-cycle costs and sustainability in its planning recommendations.
Keywords: expert system, planning process, road bridges, software system
Procedia PDF Downloads 277
5778 Sidelobe Free Inverse Synthetic Aperture Radar Imaging of Non Cooperative Moving Targets Using WiFi
Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu
Abstract:
In recent years, with the rapid development of radio frequency technology, the differences between radar sensing and wireless communication in terms of receiving and sending channels, signal processing, data management and control are gradually shrinking, and there has been a trend toward integrated communication and radar sensing. However, most of the existing radar imaging technologies based on communication signals are combined with synthetic aperture radar (SAR) imaging, which does not conform to the practical application case of the integration of communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the imaging mechanism of inverse synthetic aperture radar (ISAR) imaging. This method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so the sidelobe effect in range compression is removed, and it combines the Radon transform and Fractional Fourier Transform (FrFT) parameter estimation methods to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and prove its broad application prospects in the field of intelligent transportation.
Keywords: integration of communication and radar, OFDM, Radon, FrFT, ISAR
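The sidelobe-free range compression that motivates the OFDM choice can be illustrated in a few lines: dividing the received spectrum by the known subcarrier symbols removes the random payload, so the IFFT yields clean, delta-like range peaks. The two-scatterer scene below is a toy stand-in for the paper's simulations; the Radon/FrFT motion-compensation stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256                                                   # OFDM subcarriers
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), N)  # known QPSK payload

# two point scatterers: delays in samples and complex amplitudes
delays, amps = [20, 85], [1.0, 0.5]
k = np.arange(N)
received = sum(a * symbols * np.exp(-2j * np.pi * k * d / N)
               for d, a in zip(delays, amps))
received += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# element-wise division strips the random payload, so the IFFT produces a
# range profile free of data-dependent sidelobes
profile = np.abs(np.fft.ifft(received / symbols))
print(profile.argsort()[-2:])                             # ~[85, 20]: the two ranges
```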
Procedia PDF Downloads 126
5777 The Role of Vibro-Stone Column for Enhancing the Soft Soil Properties
Authors: Mohsen Ramezan Shirazi, Orod Zarrin, Komeil Valipourian
Abstract:
This study investigated the behavior of soft soils improved through the vibro replacement technique by considering their settlements and consolidation rates, the applicability of this technique in various types of soils, and settlement and bearing capacity calculations.
Keywords: bearing capacity, expansive clay, stone columns, vibro techniques
Procedia PDF Downloads 585
5776 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their following Application in Sensing Unit
Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger
Abstract:
The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, amphetamine-type stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing euphoric states similar to cocaine. Recreational use of ATS is widespread, even though warnings of irreversible damage to the CNS have been reported. ATS pose a big problem, and their production contributes to the pollution of the environment by discharging large volumes of liquid waste into the sewage system. Therefore, there is a demand to develop robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples. A rapid and simple test is required. Antibody-based tests cannot be applied to the analysis of environmental water samples (which can sometimes be a harsh environment). Therefore, molecularly imprinted polymers (MIPs), which are known as synthetic antibodies, have been chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance in a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for application in the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors toward amphetamine and its intermediates. Also, not many commercial MIPs for this application are available. Therefore, the aim of this study was to compare different techniques to obtain MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using a few different techniques, such as electro-, thermo- and UV-initiated polymerization. Different monomers, cross-linkers and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with those of commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid and tyramine, were examined, in combination with several immobilization techniques, to select the best procedure for attaching particles to the sensor surface. The experiments performed allowed an optimal method to be chosen for the intended sensor application. The stability of the MIPs in extreme conditions, such as highly acidic or basic media, was determined. The obtained results support conclusions about the applicability of a MIP-based sensor in sewage system testing.
Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor
Procedia PDF Downloads 250
5775 Artificial Intelligence and Machine Vision-Based Defect Detection Methodology for Solid Rocket Motor Propellant Grains
Authors: Sandip Suman
Abstract:
Mechanical defects (cracks, voids, irregularities) in rocket motor propellant are not new, and they are induced by various causes, which could be an improper manufacturing process, lot-to-lot variation in chemicals or just the natural aging of the products. These defects are normally identified during the examination of radiographic films by quality inspectors. However, these defects are often under- or over-classified by human inspectors, which leads to unpredictable performance during lot acceptance tests and significant economic loss. The human eye can only visualize larger cracks and defects in the radiographs, and it is almost impossible to spot every small defect. An artificial intelligence-based machine vision methodology is proposed in this work to identify and classify the structural defects in the radiographic films of rocket motors with solid propellant. The proposed methodology can extract the features of defects, characterize them, and make intelligent decisions for acceptance or rejection as per the customer requirements. This will automate the defect detection process during manufacturing with human-like intelligence. It will also significantly reduce production downtime and help to restore processes in the least possible time. The proposed methodology is highly scalable and can easily be transferred to various products and processes.
Keywords: artificial intelligence, machine vision, defect detection, rocket motor propellant grains
Procedia PDF Downloads 98
5774 Achieving High Renewable Energy Penetration in Western Australia Using Data Digitisation and Machine Learning
Authors: A. D. Tayal
Abstract:
The energy industry is undergoing significant disruption. This research outlines that, whilst challenging, this disruption is also an emerging opportunity for electricity utilities. One such opportunity is leveraging the developments in data analytics and machine learning. As the uptake of renewable energy technologies and complementary control systems increases, electricity grids will likely transform towards dense microgrids with high penetration of renewable generation sources, rich in network and customer data, and linked through intelligent, wireless communications. Data digitisation and analytics have already impacted numerous industries, and their influence on the energy sector is growing as computational capabilities increase to manage big data and as machines develop algorithms to solve the energy challenges of the future. The objective of this paper is to address how far the uptake of renewable technologies can go given the constraints of existing grid infrastructure, and to provide a qualitative assessment of how higher levels of renewable energy penetration can be facilitated by incorporating even broader technological advances in the fields of data analytics and machine learning. Western Australia is used as a contextualised case study, given its abundant and diverse renewable resources (solar, wind, biomass, and wave) and isolated networks, making a high penetration of renewables a feasible target for policy makers over the coming decades.
Keywords: data, innovation, renewable, solar
Procedia PDF Downloads 364
5773 Using Q-Learning to Auto-Tune PID Controller Gains for Online Quadcopter Altitude Stabilization
Authors: Y. Alrubyli
Abstract:
Unmanned Aerial Vehicles (UAVs), and more specifically quadcopters, need to be stable during their flights. Altitude stability is usually achieved by using a PID controller built into the flight controller software. Furthermore, the PID controller has gains that need to be tuned to reach optimal altitude stabilization during the quadcopter's flight. To do so, control system engineers tune those gains using extensive modeling of the environment, which might change from one environment and condition to another. As quadcopters penetrate more sectors, from the military to the consumer sector, they have been put into complex and challenging environments more than ever before. Hence, intelligent self-stabilizing quadcopters are needed to maneuver through those complex environments and situations. Here we show that by using online reinforcement learning with minimal background knowledge, altitude stability of the quadcopter can be achieved using a model-free approach. We found that by using background knowledge, instead of letting the online reinforcement learning algorithm wander for a while to tune the PID gains, altitude stabilization can be achieved faster. In addition, using this approach accelerates development by avoiding extensive simulations before applying the PID gains to the real-world quadcopter. Our results demonstrate the possibility of using the trial-and-error approach of reinforcement learning combined with background knowledge to achieve faster quadcopter altitude stabilization in different environments and conditions.
Keywords: reinforcement learning, Q-learning, online learning, PID tuning, unmanned aerial vehicle, quadcopter
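A minimal flavour of the approach — Q-learning over a discretized gain space, with the negative tracking error of a crude altitude model as the reward — is sketched below. The 1-D dynamics, grids, and hyperparameters are illustrative assumptions; only Kp and Kd are tuned here, which is far simpler than a real quadcopter loop.

```python
import numpy as np

rng = np.random.default_rng(5)

def episode_error(kp, kd, target=10.0, dt=0.02, steps=300):
    """Crude 1-D altitude model under PD thrust; returns accumulated tracking cost."""
    z, vz, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = target - z
        thrust = kp * err - kd * vz          # PD control (Ki omitted for brevity)
        vz += (thrust - 9.81 * 0.1) * dt     # unit-mass toy dynamics with gravity term
        z += vz * dt
        cost += abs(err) * dt
    return cost

# state = current (kp, kd) cell on a small grid; actions nudge one gain at a time
kp_grid = np.linspace(1, 15, 8)
kd_grid = np.linspace(0.5, 6, 8)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
Q = np.zeros((8, 8, len(actions)))
alpha, gamma, eps = 0.3, 0.9, 0.2            # learning rate, discount, exploration
state = (0, 0)
for _ in range(3000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[state].argmax())
    di, dj = actions[a]
    nxt = (min(7, max(0, state[0] + di)), min(7, max(0, state[1] + dj)))
    reward = -episode_error(kp_grid[nxt[0]], kd_grid[nxt[1]])
    Q[state][a] += alpha * (reward + gamma * Q[nxt].max() - Q[state][a])
    state = nxt
best = np.unravel_index(Q.max(axis=2).argmax(), (8, 8))  # heuristic read-out of best cell
print(f"learned gains: Kp={kp_grid[best[0]]:.1f}, Kd={kd_grid[best[1]]:.1f}")
```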
Procedia PDF Downloads 174
5772 A Case Study Approach on Co-Constructing the Idea of 'Safety' with Children
Authors: Beng Zhen Yeow
Abstract:
In most work that involves children, the voice of the children is often not heard. This is ironic, since many discussions involve their welfare and safety. It might seem natural that professionals should hear from them about what they wish for instead of deciding what is best for them. However, this is unfortunately more the exception than the norm in most cases, and hence in many instances children are merely 'subjects' in conversations about safety instead of active participants in the construction or creation of safety in the family. There might be many reasons why this does not happen in our work. Firstly, professionals have learnt how to 'socialise' into their professional roles and hence, in the process, become 'un-childlike'. Secondly, there is a lack of professional training with regard to how to talk with children. Finally, there might also be a lack of concrete tools and techniques developed to facilitate the process. In this paper, the case study method is used to show how the idea of safety can be concretised and discussed with children and their family members, thereby making them active participants and co-creators of their own safety. Specific skills and techniques are highlighted through the case study. In this case, there was improvement in outcomes, such as no repeated offence or abuse. In addition, children were able to advocate for their own safety after six months of intervention, and family members were able to say explicitly what they could do to improve safety. The professionals in the safety network reported significant improvements. On top of that, the abused child, who had been removed due to child protection concerns, verbalized observations of change in the mother's parenting abilities and requested for home leave to begin, owing to her ownership of the safety planning and her confidence in co-creating safety for her siblings and herself together with the professionals in the safety network. Children becoming active participants in the co-creation of safety not only allows them to own a 'voice' but at the same time gives them greater confidence to protect themselves at home and in other contexts outside the home.
Keywords: partnering for safety, collaborative social work, family and systemic psychotherapy, child protection
Procedia PDF Downloads 120
5771 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction systems. As the applications using face recognition systems are of much importance and demand more accuracy, more robustness is expected in the face recognition system, with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using a smaller number of features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
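A compact sketch of the pipeline's skeleton — Gabor filter bank, magnitude features, LDA projection, 1-NN classification — is given below with OpenCV and scikit-learn. The filter-bank sizes and the random stand-in images are assumptions, and the paper's (2D)²LDA variants are replaced here by plain LDA for brevity.

```python
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img64):
    """Convolve a 64x64 grayscale face with a small Gabor bank, downsample, concatenate."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations
        for lam in (4.0, 8.0):                          # 2 scales (wavelengths)
            kern = cv2.getGaborKernel((9, 9), 2.0, theta, lam, 0.5, 0)
            resp = cv2.filter2D(img64, cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())
    return np.concatenate(feats)

# toy data: random images stand in for AT&T / MIT-India / faces94 faces
rng = np.random.default_rng(6)
X = np.array([gabor_features(rng.random((64, 64)).astype(np.float32))
              for _ in range(40)])
y = np.repeat(np.arange(4), 10)                         # 4 subjects x 10 images

lda = LinearDiscriminantAnalysis(n_components=3).fit(X, y)   # <= n_classes - 1
knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X), y)
print(knn.predict(lda.transform(X[:2])))
```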
Procedia PDF Downloads 467
5770 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can easily be copied and distributed in an illegal way, so copyright protection for authors and owners is necessary. Therefore, digital watermarking techniques play an important role as a valid solution to ownership problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. In this paper, we discuss two approaches for image watermarking: the first based on Particle Swarm Optimization (PSO) and the second based on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with both approaches separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients for the watermark bits in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
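The embedding core shared by both approaches — pushing watermark bits into high-energy DWT coefficients and measuring MSE/PSNR — can be sketched with PyWavelets. The band choice (HH), the strength alpha, and the additive rule are simplifying assumptions, and the PSO/GA search that selects the embedding positions is omitted; in the paper's setting that search would use the correlation coefficient as the fitness.

```python
import numpy as np
import pywt

def embed_watermark(img, bits, alpha=8.0):
    """Additively embed bits into the largest-magnitude HH coefficients (sketch)."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), 'haar')
    flat = HH.copy().ravel()
    idx = np.argsort(np.abs(flat))[::-1][: len(bits)]   # high-energy coefficients
    flat[idx] += alpha * (2 * np.asarray(bits) - 1)     # +alpha for 1, -alpha for 0
    return pywt.idwt2((LL, (LH, HL, flat.reshape(HH.shape))), 'haar')

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, (128, 128)).astype(float)  # stand-in for a test image
bits = rng.integers(0, 2, 64)
marked = embed_watermark(cover, bits)
mse = ((cover - marked) ** 2).mean()
print(f"MSE = {mse:.4f}, PSNR = {10 * np.log10(255 ** 2 / mse):.1f} dB")
```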
Procedia PDF Downloads 226
5769 Pre-Implementation of Total Body Irradiation Using Volumetric Modulated Arc Therapy: Full Body Anthropomorphic Phantom Development
Authors: Susana Gonçalves, Joana Lencart, Anabela Gregório Dias
Abstract:
Introduction: In combination with chemotherapy, Total Body Irradiation (TBI) is mostly used as part of the conditioning regimen prior to allogeneic hematopoietic stem cell transplantation. Conventional TBI techniques have a long application time and non-conformal beam application, with the inability to individually spare organs at risk. Our institution's intention is to start using Volumetric Modulated Arc Therapy (VMAT) techniques to increase the homogeneity of the delivered radiation. As a first approach, a dosimetric plan was performed on a computed tomography (CT) scan of a Rando Alderson anthropomorphic phantom (head and torso), using a set of six arcs distributed along the phantom. However, a full-body anthropomorphic phantom is essential to carry out technique validation and implementation. Our aim is to define the physical and chemical characteristics and the ideal manufacturing procedure of the upper and lower limbs for our anthropomorphic phantom, in order to later validate TBI using VMAT. Materials and Methods: To study the best fit between our phantom and the limbs, a CT scan of the Rando Alderson anthropomorphic phantom was acquired. The CT was performed on GE Healthcare equipment (model Optima CT580 W), with a slice thickness of 2.5 mm. This CT was also used to assess the electron density of soft tissue and bone through Hounsfield units (HU). Results: The CT images were analyzed, and measurements were made for the ideal upper and lower limbs. Upper limbs should be built with the following measurements: 43 cm length and 7 cm diameter (next to the shoulder section). Lower limbs should be built with the following measurements: 79 cm length and 16.5 cm diameter (next to the thigh section). As expected, soft tissue and bone have very different electron densities. This is important for choosing and analyzing different materials to better represent soft tissue and bone characteristics. The approximate HU values for soft tissue and bone should be 35 HU and 250 HU, respectively. Conclusion: At the moment, several compounds are being developed based on different types of resins and additives in order to be able to control and mimic the various constituent densities of the tissues. Concurrently, several manufacturing techniques are being explored to make it possible to produce the upper and lower limbs in a simple and inexpensive way, in order to finally carry out a systematic and appropriate study of total body irradiation. This preliminary study was a good starting point to demonstrate the feasibility of TBI with VMAT.
Keywords: TBI, VMAT, anthropomorphic phantom, tissue equivalent materials
Procedia PDF Downloads 80
5768 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has several objectives. First, it aims at describing the ability of the second-grade students to write review text without applying the Lesson Study model at SMPN 18 Makassar. Second, it seeks to describe the ability of the second-grade students to write review text by applying the Lesson Study model at SMPN 18 Makassar. Third, it aims at testing the effectiveness of the Lesson Study model in writing review text at SMPN 18 Makassar. This research used a true experimental design with a posttest-only group design involving two groups: one control class and one experimental class. The research population comprised all the second-grade students at SMPN 18 Makassar, amounting to 250 students in 8 classes. The sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, also consisting of 30 students. The research instruments were in the form of observation and tests. The collected data were analyzed using descriptive and inferential statistical techniques with t-tests, processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) obtained a score above 7.5, categorized as inadequate; (2) in the experimental class, 26 (87%) students obtained a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective when applied in writing review text. The comparison of the abilities of the control class and the experimental class indicates that the value of t-count is greater than the value of t-table (2.411 > 1.667). This means that the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
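The hypothesis test at the end reduces to an independent two-sample t-test; a sketch with SciPy follows, where the score vectors are simulated stand-ins for the two classes' real post-test scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# hypothetical post-test scores (0-10 scale) for the two classes of 30 students
control = rng.normal(7.3, 1.0, 30)
experimental = rng.normal(8.1, 1.0, 30)

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.3f}, p = {p:.4f}")  # compare t with the critical t-table value (1.667)
```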
Procedia PDF Downloads 202
5767 SCNet: A Vehicle Color Classification Network Based on Spatial Cluster Loss and Channel Attention Mechanism
Authors: Fei Gao, Xinyang Dong, Yisu Ge, Shufang Lu, Libo Weng
Abstract:
Vehicle color recognition plays an important role in traffic accident investigation. However, due to the influence of illumination, weather, and noise, vehicle color recognition still faces challenges. In this paper, a vehicle color classification network based on a spatial cluster loss and a channel attention mechanism (SCNet) is proposed for vehicle color recognition. A channel attention module is applied to extract the features of the color-representative regions of the vehicle and reduce the weight of non-representative color regions in the channel. The proposed loss function, called spatial clustering loss (SC-loss), consists of two channel-specific components: a concentration component and a diversity component. The concentration component forces all feature channels belonging to the same class to be concentrated through the channel cluster. The diversity component imposes additional constraints on the channels through the mean distance coefficient, making them mutually exclusive in spatial dimensions. In the comparison experiments, the proposed method achieves state-of-the-art performance on the public datasets VCD and VeRi, with accuracies of 96.1% and 96.2%, respectively. In addition, an ablation experiment further proves that SC-loss can effectively improve the accuracy of vehicle color recognition.
Keywords: feature extraction, convolutional neural networks, intelligent transportation, vehicle color recognition
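A loose PyTorch sketch of the concentration/diversity idea behind SC-loss is given below — pulling same-class feature vectors toward their class mean and pushing class means apart with a hinge. This is an interpretation for illustration only; the paper's exact channel-wise formulation with the mean distance coefficient differs.

```python
import torch

def sc_style_loss(features, labels, margin=1.0):
    """Concentration: mean squared distance of each sample to its class mean.
    Diversity: hinge penalty on pairwise distances between class means."""
    classes = labels.unique()
    means = torch.stack([features[labels == c].mean(0) for c in classes])
    concentration = torch.stack([
        ((features[labels == c] - means[i]) ** 2).sum(1).mean()
        for i, c in enumerate(classes)]).mean()
    d = torch.cdist(means, means)                       # pairwise class-mean distances
    off = ~torch.eye(len(classes), dtype=torch.bool)    # ignore the diagonal
    diversity = torch.relu(margin - d[off]).mean()
    return concentration + diversity

feats = torch.randn(32, 16, requires_grad=True)         # toy feature batch
labels = torch.randint(0, 8, (32,))
loss = sc_style_loss(feats, labels)
loss.backward()                                         # differentiable end to end
print(float(loss))
```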
Procedia PDF Downloads 183