Search results for: automated optical inspection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2868

228 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis

Authors: Sara Segura, Diego Nuñez, Miryam Villamil

Abstract:

In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system that represents medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol), and PDMS so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at high capture rates. Observation and analysis were performed with real-time software that reconstructs the fluid dynamics, determining the flux velocity, injection dependency, turbulence, and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride and electrolyte at a ratio of 4 ppm), and blood cells were studied at the microscale with resolution as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations led us to understand the fluid and mixing behavior of the substance of interest in the bloodstream and to shed light on the use of implantable devices for drug delivery in arteries with different degrees of endothelial dysfunction. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the artificial artery system. Then, serum and other low-viscosity substances were pumped into the system in the presence of other liquids to study the mixing profiles and behaviors. Finally, mammalian blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and timing were evaluated to determine the optimal mixing conditions.
Our results suggest the use of a finely controlled microinjection, at an approximate rate of 135.000 μm³/s, to obtain better mixing profiles for the administration of drugs inside arteries.
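At these channel dimensions the flow is deep in the laminar regime, which is why mixing is diffusion-dominated and injection control matters. A quick Reynolds-number estimate illustrates this; all numerical values below (fluid density, viscosity, perfusion velocity) are assumed for illustration and are not taken from the study:

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Reynolds number Re = rho * v * D / mu for flow in a circular channel."""
    return density * velocity * diameter / viscosity

# Assumed, water-like values; only the 930 um diameter comes from the abstract.
rho = 1000.0   # kg/m^3, fluid density
mu = 1.0e-3    # Pa*s, dynamic viscosity of water near room temperature
d = 930e-6     # m, the widest channel in the artificial artery system
v = 1.0e-3     # m/s, an assumed slow perfusion velocity

re = reynolds_number(rho, v, d, mu)
# Re is far below the ~2300 transition threshold: flow is laminar,
# so mixing relies on diffusion and controlled microinjection.
```

With these values Re is below one, so even order-of-magnitude changes in velocity keep the flow laminar.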

Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis

Procedia PDF Downloads 273
227 Surface Enhanced Infrared Absorption for Detection of Ultra Trace of 3,4-Methylenedioxymethamphetamine (MDMA)

Authors: Sultan Ben Jaber

Abstract:

Optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nanostructured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has received intensive study is surface-enhanced Raman spectroscopy (SERS); since the first observation of the SERS phenomenon, researchers have published a great number of articles on the potential mechanisms behind this effect, as well as on developing materials to maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) also shows a noticeable enhancement for molecules under mid-IR excitation on nano-metallic substrates. In SEIRA, vibrational modes that give rise to dipole-moment changes perpendicular to the nano-metallic substrate are enhanced up to 200 times relative to the free molecule's modes. SEIRA spectroscopy is promising for the characterization and identification of molecules adsorbed on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-known technique for measuring IR spectra of molecules adsorbed on metallic surfaces; however, SEIRA spectroscopy is up to 50 times more sensitive than IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al, and Ni, but Au and Ag substrates exhibit the highest enhancement among them. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) have been detected using gold nanoparticle (AuNP) substrates with surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with different concentrations of MDMA samples. The process of fabricating the substrate prior to SEIRA measurements consisted of mixing the AuNPs and MDMA samples, followed by vigorous stirring.
The stirring step is particularly crucial, as it allows molecules to be robustly adsorbed on the AuNPs. Thus, remarkable SEIRA enhancement was observed for MDMA samples even at trace levels, demonstrating the robustness of our approach to preparing SEIRA substrates.
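The keywords mention an enhancement factor; a common way to express it is to normalize the surface-enhanced band intensity by the number of probed molecules and compare with the unenhanced reference. The intensities and molecule counts below are hypothetical, chosen only to reproduce the ~200x figure cited for perpendicular modes:

```python
def enhancement_factor(i_seira, n_seira, i_ref, n_ref):
    """Per-molecule enhancement: EF = (I_SEIRA / N_SEIRA) / (I_ref / N_ref).

    i_*: integrated band intensities; n_*: number of molecules probed
    in the enhanced and reference measurements (hypothetical values).
    """
    return (i_seira / n_seira) / (i_ref / n_ref)

# Assumed numbers for illustration: a weaker absolute signal from far
# fewer molecules still implies a large per-molecule enhancement.
ef = enhancement_factor(i_seira=5.0, n_seira=1e9, i_ref=2.5, n_ref=1e11)
```

Here `ef` evaluates to 200, i.e. the order of enhancement the abstract attributes to modes with dipole changes perpendicular to the substrate.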

Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, methylenedioxymethamphetamine (MDMA), enhancement factor

Procedia PDF Downloads 57
226 Recovering Copper From Tailing and E-Waste to Create Copper Nanoparticles with Antimicrobial Properties

Authors: Erico R. Carmona, Lucas Hernandez-Saravia, Aliro Villacorta, Felipe Carevic

Abstract:

Tailings and electronic waste (e-waste) are an important source of global contamination. Chile is one of the Organisation for Economic Co-operation and Development (OECD) member countries that recycles the least of this kind of industrial waste, reaching only 3% of the total. Tailings and e-waste recycling offers a valuable tool to minimize the increasing accumulation of waste, offset the scarcity of some raw materials, and obtain economic benefits through their commercialization. It should be noted that this type of industrial waste is an important source of valuable metals, such as copper, which allows generating new businesses and added value through its transformation into new materials with advanced physical and biological properties. In this sense, the development of nanotechnology has led to the creation of nanomaterials with multiple applications, given their unique physicochemical properties. Among others, copper nanoparticles (CuNPs) have gained great interest due to their optical, catalytic, and conductive properties, and particularly because of their broad-spectrum antimicrobial activity. There are different synthesis methods for copper nanoparticles; however, green synthesis is one of the most promising, since it is simple, low-cost, and ecological, and generates stable nanoparticles, which makes it suitable for scaling up. Currently, there are few initiatives that involve the development of methods for the recovery and transformation of copper from waste to produce nanoparticles with new properties and better technological performance. Thus, the objective of this work is to present preliminary data on the development of a sustainable transformation process for tailings and e-waste that allows obtaining a copper-based nanotechnological product with potential antimicrobial applications.
For this, samples of tailings and e-waste collected from the Tarapacá and Antofagasta regions of northern Chile were used to recover copper through efficient, ecological, and low-cost alkaline hydrometallurgical treatments that allow obtaining copper with a high degree of purity. The transformation of the recycled copper into a nanomaterial was then carried out through a green synthesis approach using vegetal organic residue extracts, obtaining CuNPs following methodologies previously reported by the authors. Initial physical characterization by UV-Vis, FTIR, AFM, and TEM will be reported for the synthesized CuNPs.

Keywords: nanomaterials, industrial waste, Chile, recycling

Procedia PDF Downloads 86
225 Synthesis, Characterization and Photocatalytic Applications of Ag-Doped-SnO₂ Nanoparticles by Sol-Gel Method

Authors: M. S. Abd El-Sadek, M. A. Omar, Gharib M. Taha

Abstract:

In recent years, the photocatalytic degradation of various kinds of organic and inorganic pollutants using semiconductor powders as photocatalysts has been extensively studied. Owing to their relatively high photocatalytic activity, biological and chemical stability, low cost, non-toxicity, and long stable life, tin oxide materials have been widely used as catalysts in chemical reactions, including the synthesis of vinyl ketone, the oxidation of methanol, and so on. Tin oxide (SnO₂), with a rutile-type crystalline structure, is an n-type wide-band-gap (3.6 eV) semiconductor that presents a suitable combination of chemical, electronic, and optical properties, making it advantageous in several applications. In the present work, SnO₂ nanoparticles were synthesized at room temperature by the sol-gel process, through thermohydrolysis of SnCl₂ in isopropanol, with the crystallite size controlled through calcination. The synthesized nanoparticles were characterized by XRD, TEM, FT-IR, and UV-Visible spectroscopic techniques. The crystalline structure and grain size of the synthesized samples were analyzed by X-ray diffraction (XRD), and the XRD patterns confirmed the presence of the tetragonal SnO₂ phase. In this study, methylene blue degradation was tested using SnO₂ nanoparticles (calcined at different temperatures) as a photocatalyst under sunlight as the irradiation source. The results showed that the highest degradation of methylene blue dye was obtained with the SnO₂ photocatalyst calcined at 800 °C. The operational parameters were optimized to find the conditions that result in complete removal of organic pollutants from aqueous solution. It was found that the degradation of dyes depends on several parameters, such as irradiation time, initial dye concentration, catalyst dose, and the presence of metals such as silver as a dopant, together with its concentration.
Percent degradation increased with irradiation time. The degradation efficiency decreased as the initial concentration of the dye increased. The degradation efficiency increased with catalyst dose up to a certain level; further increasing the SnO₂ photocatalyst dose decreased the degradation efficiency. The degradation efficiency obtained from pure SnO₂ was compared with that of SnO₂ doped with different percentages of Ag.
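Dye photodegradation of this kind is conventionally reported as percent removal and, where the decay is exponential in time, as an apparent pseudo-first-order rate constant from ln(C0/Ct) = k·t. A minimal sketch with assumed concentrations and time (not values from the study):

```python
import math

def degradation_percent(c0, ct):
    """Percent dye removal from initial (c0) and final (ct) concentration."""
    return 100.0 * (c0 - ct) / c0

def pseudo_first_order_k(c0, ct, t_min):
    """Apparent rate constant k (1/min) from ln(C0/Ct) = k * t."""
    return math.log(c0 / ct) / t_min

# Assumed: 10 mg/L methylene blue reduced to 2 mg/L after 120 min of sunlight.
c0, ct, t = 10.0, 2.0, 120.0
pct = degradation_percent(c0, ct)   # 80% removal
k = pseudo_first_order_k(c0, ct, t)
```

Comparing k across calcination temperatures or Ag-doping levels is one way the catalyst variants described above could be ranked quantitatively.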

Keywords: SnO₂ nanoparticles, sol-gel method, photocatalytic applications, methylene blue, degradation efficiency

Procedia PDF Downloads 144
224 IEEE802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With recent advances in technology, wireless sensor networks (WSNs) have become one of the most promising candidates for implementing the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4-based WSN technologies such as Zigbee cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e smoothly with the IPv6 protocol to form a complete protocol stack for the IIoT. In this work, we develop key network technologies for an IEEE 802.15.4e-based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIoT system. The system architecture is divided into three main parts: web server, network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and to instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information: it executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure proper operation. To further enhance the performance of such a system, we propose dynamic bandwidth allocation and distributed scheduling mechanisms.
The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and child nodes based on locally observed information, by exchanging Add/Delete commands via two processes. The first, the schedule initialization process, allows each sensor node pair to identify the available idle slots and allocate the basic dedicated transmission bandwidth. The second, the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic load. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low. This multi-frame retransmission mechanism allows every network node to resend each packet at least a predefined number of times, with the multi-frame architecture built according to the number of layers in the network topology. Simulation results reveal that this retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of the IIoT can be achieved.
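The schedule initialization step described above, where a node pair finds slots that are idle at both ends before issuing an Add command, can be sketched as a small set operation. This is an illustrative simplification, not the paper's implementation: channel offsets, command signalling, and retries are omitted, and all slot numbers are hypothetical:

```python
def allocate_slots(schedule_a, schedule_b, n_slots, n_requested):
    """Schedule-initialization sketch: pick slots idle at BOTH nodes.

    schedule_a / schedule_b are sets of slot offsets already in use at
    the two ends of a link; the chosen offsets are added to both
    schedules, mimicking a successful Add-command exchange.
    """
    idle = [s for s in range(n_slots)
            if s not in schedule_a and s not in schedule_b]
    allocated = idle[:n_requested]   # take the first idle slots found
    schedule_a.update(allocated)
    schedule_b.update(allocated)
    return allocated

# Hypothetical slotframe of 11 slots; parent and child each have busy slots.
parent, child = {0, 3}, {1, 3}
new = allocate_slots(parent, child, n_slots=11, n_requested=2)
```

The schedule adjustment process would then call the same primitive (or its Delete counterpart) whenever measured traffic load crosses a threshold.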

Keywords: IEEE 802.15.4e, industrial internet of things (IIOT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 151
223 Analysis of Stress and Strain in Head Based Control of Cooperative Robots through Tetraplegics

Authors: Jochen Nelles, Susanne Kohns, Julia Spies, Friederike Schmitz-Buhl, Roland Thietje, Christopher Brandl, Alexander Mertens, Christopher M. Schlick

Abstract:

Industrial robots, as part of highly automated manufacturing, have recently been developed into cooperative (lightweight) robots. This offers the opportunity to use them as assistance robots and to improve the participation in professional life of disabled or handicapped people such as tetraplegics. Robots under development are located within a cooperation area together with the working person at the same workplace. This cooperation area is an area where the robot and the working person can perform tasks at the same time; thus, working people and robots operate in immediate proximity. Considering the physical restrictions and limited mobility of tetraplegics, hands-free robot control could be an appropriate approach for a cooperative assistance robot. To meet these requirements, the research project MeRoSy (human-robot synergy) develops methods for cooperative assistance robots based on the measurement of head movements of the working person. One research objective is to improve the participation in professional life of people with disabilities, in particular mobility-impaired persons (e.g., wheelchair users or tetraplegics), who are often denied participation in a self-determined working life. This raises the research question of how a human-robot cooperation workplace can be designed for hands-free robot control; here, the example of a library scenario is demonstrated. In this paper, an empirical study that focuses on the impact of head-movement-related stress is presented. Twelve test subjects with tetraplegia participated in the study. Tetraplegia, also known as quadriplegia, is the most severe type of spinal cord injury. In the experiment, three basic head movements were examined. Data on head posture were collected by a motion capture system; muscle activity was measured via surface electromyography, and subjective mental stress was assessed via a mental effort questionnaire.
The muscle activity was measured for the sternocleidomastoid (SCM), the upper trapezius (UT, or trapezius pars descendens), and the splenius capitis (SPL) muscles. For this purpose, six non-invasive surface electromyography sensors were mounted on the head and neck area. An analysis of variance shows differentiated muscular strain depending on the type of head movement. Systematically investigating the influence of different basic head movements on the resulting strain is an important step in relating the research results to other scenarios. At the end of this paper, a conclusion is drawn and an outlook on future work is presented.
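A standard way to turn raw surface-EMG recordings into the per-muscle strain measure that an analysis of variance then compares is the root-mean-square (RMS) amplitude over a time window. The sketch below uses made-up sample windows, not data from the study:

```python
def emg_rms(samples):
    """Root-mean-square amplitude of one sEMG window (e.g. in mV)."""
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

# Hypothetical sEMG windows for one muscle during two head movements;
# larger excursions in the flexion window imply higher muscular strain.
rotation = [0.1, -0.2, 0.2, -0.1]
flexion = [0.4, -0.5, 0.5, -0.4]

r1, r2 = emg_rms(rotation), emg_rms(flexion)
# Per-movement RMS values like r1 and r2, computed per muscle and
# subject, are the kind of input an ANOVA would test for differences.
```

In practice such RMS values are usually normalized to a maximum voluntary contraction before comparison across subjects.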

Keywords: assistance robot, human-robot interaction, motion capture, stress-strain-concept, surface electromyography, tetraplegia

Procedia PDF Downloads 304
222 Periodicity of Solutions to Impulsive Equations

Authors: Jin Liang, James H. Liu, Ti-Jun Xiao

Abstract:

It is known that there exist many physical phenomena where abrupt or impulsive changes occur either in the system dynamics, for example, in ad-hoc networks, or in the input forces containing impacts, for example, the bombardment of a space antenna by micrometeorites. There are many other examples, such as ultra-high-speed optical signals over communication networks, the collision of particles, inventory control, government decisions, interest changes, changes in stock prices, etc. These are impulsive phenomena. Hence, as a combination of traditional initial value problems and short-term perturbations whose duration is negligible in comparison with the duration of the process, systems with impulsive conditions (i.e., impulsive systems) are more realistic models for describing impulsive phenomena. The same applies to delay systems, which include some of the past states of the system. So far, there have been many research results in the study of impulsive systems with delay, both in finite- and infinite-dimensional spaces. In this paper, we investigate the periodicity of solutions to nonautonomous impulsive evolution equations with infinite delay in Banach spaces, where the (possibly unbounded) coefficient operators in the linear part depend on time; these are impulsive systems in infinite-dimensional spaces and arise from optimal control theory. The study of periodic solutions for these impulsive evolution equations with infinite delay is challenging because fixed point theorems requiring compactness conditions are not applicable, due to the impulsive condition and the infinite delay. We are happy to report that, after detailed analysis, we are able to combine the techniques developed in our previous papers with some new ideas in this paper to attack these impulsive evolution equations and derive periodic solutions.
More specifically, by virtue of the related transition operator family (evolution family), we present a Poincaré operator given by the nonautonomous impulsive evolution system with infinite delay, and then show that the operator is a condensing operator with respect to Kuratowski's measure of non-compactness in a phase space by using Amann's lemma. Finally, we derive periodic solutions from bounded solutions in view of the Sadovskii fixed point theorem. We also present a relationship between the boundedness and the periodicity of the solutions of the nonautonomous impulsive evolution system. The new results obtained here extend earlier results in this area for evolution equations without impulsive conditions or without infinite delay.
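To fix ideas, a nonautonomous impulsive evolution equation with infinite delay is commonly written in the following form (the notation here is a standard sketch and is not quoted verbatim from the paper):

```latex
\begin{aligned}
 & u'(t) = A(t)\,u(t) + f(t, u_t), \qquad t > 0,\ t \neq t_i, \\
 & \Delta u(t_i) = u(t_i^+) - u(t_i^-) = I_i\big(u(t_i)\big), \qquad i = 1, 2, \dots, \\
 & u_0 = \varphi \in \mathcal{B},
\end{aligned}
```

where $A(t)$ are the (possibly unbounded) time-dependent coefficient operators generating the evolution family, $u_t(\theta) = u(t+\theta)$ for $\theta \le 0$ encodes the infinite delay in a phase space $\mathcal{B}$, and $I_i$ are the impulse maps. The Poincaré operator then sends an initial history $\varphi$ to the solution segment after one period, and its fixed points correspond to periodic solutions.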

Keywords: impulsive, nonautonomous evolution equation, optimal control, periodic solution

Procedia PDF Downloads 239
221 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term for a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available do not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity containing swear words, racist language, etc.), such that context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contain labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). As for the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on this, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
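The core idea of a conversation-structure-aware architecture is to encode each context utterance separately, pool those encodings, and combine them with the target encoding before classification. The toy sketch below substitutes a hashed bag-of-words average for the learned sentence encoder the paper would use; everything here (encoder, pooling, dimension) is an illustrative assumption, not the authors' model:

```python
def embed(text, dim=8):
    """Toy utterance encoder: hashed bag-of-words average (a stand-in
    for a learned neural sentence encoder)."""
    vec = [0.0] * dim
    words = text.lower().split()
    for w in words:
        vec[hash(w) % dim] += 1.0
    n = max(len(words), 1)
    return [v / n for v in vec]

def hierarchical_features(context_utterances, target):
    """Encode each context utterance, mean-pool over the thread, and
    concatenate with the target encoding -- the input a hierarchical
    classifier head would consume."""
    ctx = [embed(u) for u in context_utterances] or [[0.0] * 8]
    pooled = [sum(col) / len(ctx) for col in zip(*ctx)]
    return pooled + embed(target)

feats = hierarchical_features(["you started this", "no, you did"],
                              "sure, whatever")
```

A flat (conversation-agnostic) baseline would instead concatenate all utterances into one string before encoding, discarding the thread structure.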

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 159
220 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators are primarily obtained via questionnaires, which is laborious and time-consuming, or are provided by financial institutions and are thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights into the business world. WM enables the automatic, large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth in sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities.
In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company are used as ground truth (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
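The pipeline described above, web-mined features in, growth label out, reduces at prediction time to scoring a feature vector. The sketch below uses a plain linear score as a stand-in for the SVM / Random Forest / ANN models the study compares; the feature names, weights, and threshold are all hypothetical:

```python
def predict_growth(features, weights, bias):
    """Linear score over web-mined features; a positive score is read
    as a 'growth' prediction. A stand-in for the trained classifiers
    (SVM, Random Forest, ANN) compared in the study."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return score > 0

# Hypothetical web-mined features per SME:
# [has_online_shop, news_updates_per_month, page_count / 100]
weights, bias = [1.5, 0.3, 0.8], -1.0

grows = predict_growth([1, 2, 0.5], weights, bias)     # active web presence
stagnant = predict_growth([0, 0, 0.1], weights, bias)  # sparse website
```

In the actual system the weights would be learned from the insurer-provided growth labels rather than set by hand.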

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 253
219 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances such as those required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. At the same time, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization of building planning and control strategies. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and make it usable for calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file and includes the object itself as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine, per window on the object, possible glare from solar reflections off a neighboring building as well as near and far shadows. Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account.
This optimized per-window daylight assessment makes it possible to estimate the potential power generation of the PV integrated into the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to improve the coordination between the BIM model and the coupled building simulation, combining the resulting shading and daylighting system with the artificial lighting system and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
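The far-shadow part of the per-window assessment reduces, in its simplest 2-D form, to projecting the top of a neighboring obstacle onto the facade for a given solar altitude. This is a deliberately simplified stand-in for the tool's ray-tracing step (solar azimuth and reflections are ignored, and all geometry values are assumed):

```python
import math

def is_window_shaded(obstacle_height, window_height, distance,
                     solar_altitude_deg):
    """2-D far-shadow test: is a window at `window_height` (m) on the
    facade shaded by an obstacle of `obstacle_height` (m) standing
    `distance` (m) away, for the given solar altitude angle?"""
    alt = math.radians(solar_altitude_deg)
    # Height on the facade of the shadow line cast by the obstacle top:
    shadow_top = obstacle_height - distance * math.tan(alt)
    return window_height < shadow_top

# Assumed geometry: 20 m neighbor, window 5 m up, 15 m street width.
shaded = is_window_shaded(20.0, 5.0, 15.0, 30.0)  # low winter-like sun
sunny = is_window_shaded(20.0, 5.0, 15.0, 60.0)   # high summer sun
```

Sweeping the solar altitude over a year of weather data and repeating this test per window gives the kind of annual sunlight estimate the application computes.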

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 99
218 Exo-III Assisted Amplification Strategy through Target Recycling of Hg²⁺ Detection in Water: A GNP Based Label-Free Colorimetry Employing T-Rich Hairpin-Loop Metallobase

Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He

Abstract:

Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods, apart from the traditional analytical tools, have been developed by researchers. Biosensors, especially labelled, label-free, colorimetric, and optical sensors, have advanced sensitive detection. However, there remains a gap in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as the indicator. Two T-rich metallobase hairpin-loop structures, 5’-CTT TCA TAC ATA GAA AAT GTA TGT TTG-3’ (HgS1) and 5’-GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G-3’ (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color between blue (aggregated state in the absence of Hg²⁺) and pink (dispersed state in the presence of Hg²⁺, with adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry.
An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved; optimization is underway to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.
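The blue-to-pink readout described above is typically quantified from the UV-vis spectrum as a ratio between the dispersed-AuNP band (~520 nm) and the aggregated band (~650 nm). The sketch below is illustrative only: the function names and the 1.5 decision threshold are assumptions for demonstration, not values from the study.

```python
# Hedged sketch: quantifying the AuNP color change from two absorbance
# readings. Function names and the 1.5 threshold are illustrative only.
def aggregation_ratio(a520, a650):
    """Ratio of dispersed-band (~520 nm) to aggregated-band (~650 nm) absorbance."""
    return a520 / a650

def hg_present(a520, a650, threshold=1.5):
    """Higher ratio -> ssDNA-protected, dispersed (pink) AuNPs -> Hg2+ present."""
    return aggregation_ratio(a520, a650) >= threshold

dispersed = hg_present(0.9, 0.3)   # high 520/650 ratio: pink, dispersed
aggregated = hg_present(0.4, 0.6)  # low ratio: blue, aggregated
```

In practice the threshold would be calibrated against a dilution series of known Hg²⁺ concentrations rather than fixed a priori.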

Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification

Procedia PDF Downloads 303
217 Atmospheric Circulation Types Related to Dust Transport Episodes over Crete in the Eastern Mediterranean

Authors: K. Alafogiannis, E. E. Houssos, E. Anagnostou, G. Kouvarakis, N. Mihalopoulos, A. Fotiadi

Abstract:

The Mediterranean basin is an area where different aerosol types coexist, including urban/industrial, desert dust, biomass burning and marine particles. In particular, mineral dust aerosols, mostly originating from the North African deserts, contribute significantly to the high aerosol loads above the Mediterranean. Dust transport, controlled by the variation of the atmospheric circulation throughout the year, results in a strong spatial and temporal variability of aerosol properties. In this study, the synoptic conditions that favor dust transport over the Eastern Mediterranean are thoroughly investigated. For this reason, three datasets are employed. Firstly, ground-based daily data of aerosol properties, namely Aerosol Optical Thickness (AOT), Ångström exponent (α440-870) and fine fraction from the FORTH-AERONET (Aerosol Robotic Network) station, along with measurements of PM10 concentrations from the Finokalia station, for the period 2003-2011, are used to identify days with high coarse aerosol load (episodes) over Crete. Then, geopotential heights at the 1000, 850 and 700 hPa levels, obtained from the NCEP/NCAR Reanalysis Project, are utilized to depict the atmospheric circulation during the identified episodes. Additionally, air-mass back trajectories, calculated by HYSPLIT, are used to verify the origin of aerosols from neighbouring deserts. For the 227 identified dust episodes, the statistical methods of Factor and Cluster Analysis are applied to the corresponding atmospheric circulation data to reveal the main types of synoptic conditions favouring dust transport towards Crete (Eastern Mediterranean). The 227 cases are classified into 11 distinct types (clusters). Dust episodes in the Eastern Mediterranean are found to be more frequent (52%) in spring, with a secondary maximum in autumn.
The main characteristic of the atmospheric circulation associated with dust episodes is the presence of a surface low-pressure system, either over southwestern Europe or the western/central Mediterranean, which induces a southerly air flow favouring dust transport from the African deserts. The exact position and intensity of the low-pressure system vary notably among clusters. More rarely, dust may originate from the deserts of the Arabian Peninsula.
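As a rough illustration of the classification step, the toy sketch below groups synthetic pressure-anomaly vectors with a minimal k-means. The actual study applies Factor Analysis for dimensionality reduction before clustering, and the data here are invented; only the clustering idea carries over.

```python
def sqdist(p, q):
    """Squared Euclidean distance between two anomaly vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Minimal k-means on tuples of geopotential-height anomalies.
    Initial centroids are evenly spaced through the list (toy choice)."""
    idx = [i * (len(points) - 1) // (k - 1) for i in range(k)] if k > 1 else [0]
    centroids = [points[i] for i in idx]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: sqdist(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return labels, centroids

# invented 3-component "circulation" vectors: a cyclonic and an anticyclonic family
low  = [(-40.0, -25.0, -10.0), (-38.0, -22.0, -12.0), (-42.0, -27.0, -9.0)]
high = [(35.0, 20.0, 8.0), (33.0, 24.0, 10.0), (37.0, 22.0, 9.0)]
labels, centroids = kmeans(low + high, 2)
```

A real implementation would cluster the factor scores of the 227 episode fields, not raw grids, and would select k=11 via a stopping criterion.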

Keywords: aerosols, atmospheric circulation, dust particles, Eastern Mediterranean

Procedia PDF Downloads 221
216 Study of Growth Behavior of Some Bacterial Fish Pathogens to Combined Selected Herbal Essential Oil

Authors: Ashkan Zargar, Ali Taheri Mirghaed, Zein Talal Barakat, Alireza Khosravi, Hamed Paknejad

Abstract:

With the increase of bacterial resistance to chemical antibiotics, replacing them with eco-friendly herbal materials that have no adverse effects on the host body is very important. Therefore, this study evaluated the effect of combined essential oils (Thymus vulgaris, Origanum majorana and Ziziphora clinopodioides) on the growth behavior of Yersinia ruckeri, Aeromonas hydrophila and Lactococcus garvieae. The compositions of the herbal essential oils used in this study were determined by gas chromatography-mass spectrometry (GC-MS), while the antimicrobial effects were investigated by the agar-disc diffusion method, determination of the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC), and bacterial growth curves based on optical density (OD) at 630 nm. The main compounds were thymol (40.60%) and limonene (15.98%) for Thymus vulgaris, while carvacrol (57.86%) and thymol (13.54%) were the major compounds in Origanum majorana. As regards Ziziphora clinopodioides, α-pinene (22.6%) and carvacrol (21.1%) represented the major constituents. Concerning Yersinia ruckeri, disc-diffusion results showed that the t.O.z (50% Origanum majorana) combined essential oil presented the best inhibition zone (30.66 mm) and exhibited no significant differences with the other tested commercial antibiotics except oxytetracycline (P < 0.05). The inhibitory activity and bactericidal effect of t.O.z, reflected by the MIC = 0.2 μL/mL and MBC = 1.6 μL/mL values, were clearly the best among all combined oils. The growth behaviour of Yersinia ruckeri was affected by this combined essential oil, and changes in temperature and pH conditions affected the herbal oil performance.
As regards Aeromonas hydrophila, the results were very similar to those of Yersinia ruckeri, and t.O.z (50% Origanum majorana) was the best among all combined oils (inhibition zone = 26 mm, MIC = 0.4 μL/mL and MBC = 3.2 μL/mL; the combined essential oil affected bacterial growth behavior). Likewise, for Lactococcus garvieae, t.O.z (50% Origanum majorana) was the best among all combined oils, with the best inhibition zone (20.66 mm), MIC = 0.8 μL/mL and MBC = 1.6 μL/mL, and the strongest effect on inhibiting bacterial growth. Combined herbal essential oils have a noticeable effect on the growth behavior of pathogenic bacteria in the laboratory, and with continued research in the host, they may be a suitable alternative to control, prevent and treat diseases caused by these bacteria.
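For context, reading an MIC from a two-fold dilution series can be sketched as below. The OD cutoff and the example readings are illustrative, not the study's data; the example merely reproduces a dilution series whose answer matches the reported t.O.z MIC of 0.2 μL/mL against Y. ruckeri.

```python
def mic(concentrations_ul_per_ml, od630, growth_cutoff=0.05):
    """MIC = lowest concentration in the series whose OD630 stays below
    the no-growth cutoff (cutoff value is illustrative, not the study's)."""
    inhibited = [c for c, od in zip(concentrations_ul_per_ml, od630)
                 if od < growth_cutoff]
    return min(inhibited) if inhibited else None

concs = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6]   # two-fold dilution series, uL/mL
ods   = [0.90, 0.75, 0.04, 0.03, 0.02, 0.01]  # invented OD630 readings
result = mic(concs, ods)  # 0.2
```

The MBC would be determined separately by subculturing the clear wells and finding the lowest concentration that kills (rather than merely inhibits) the inoculum.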

Keywords: bacterial pathogen, herbal medicine, growth behavior, fish

Procedia PDF Downloads 59
215 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%).
Results obtained on the 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity performance equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
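As a sanity check on figures such as "99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors", a binomial proportion interval can be sketched as below. The study presumably used an exact (Clopper-Pearson) interval; the Wilson score interval shown here is a stdlib-only approximation, and the count of 3 false positives among the donors is inferred from the rounded percentage, not stated in the abstract.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion (e.g. specificity).
    An approximation to the exact interval the study likely reports."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

# hypothetical: 3 false positives among 6047 HBsAg-negative donors
n, fp = 6047, 3
spec = (n - fp) / n
lo, hi = wilson_ci(n - fp, n)
print(f"specificity {spec:.2%}, 95% CI {lo:.2%} - {hi:.2%}")
```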

Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay

Procedia PDF Downloads 73
214 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology

Authors: Mahdi Farajzadeh Ajirlou

Abstract:

Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early gestation termination or close follow-up should a cardiac anomaly be proved. Fetal heart assessment plays an important part in the evaluation of the fetus and can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function, in order to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging and cardiac catheterization. Doppler ultrasound (DUS) provides real-time imaging with good visualization of blood vessels and soft tissues. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data synchronized with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a single slow sweep, recording a single 3D data set consisting of numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D scan, as online 4D (direct volume scan, live 3D ultrasound, so-called 4D (3D/4D)), or as spatiotemporal image correlation, STIC (offline 4D, a cyclic volume acquisition). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development.
Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), i.e., minimally invasive catheter intervention, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart defects in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.

Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter

Procedia PDF Downloads 20
213 Bioreactor for Cell-Based Impedance Measuring with Diamond Coated Gold Interdigitated Electrodes

Authors: Roman Matejka, Vaclav Prochazka, Tibor Izak, Jana Stepanovska, Martina Travnickova, Alexander Kromka

Abstract:

Cell-based impedance spectroscopy is a suitable method for electrical monitoring of cell activity, especially on substrates that cannot easily be inspected by optical microscopy (without fluorescent markers), such as decellularized tissues and nano-fibrous scaffolds. A special sensor for this measurement was developed. It consists of a Corning glass substrate with gold interdigitated electrodes covered with a diamond layer, which provides a biocompatible, non-conductive surface for the cells. A special PPFC flow cultivation chamber was also developed; it fixes the sensor in place, and spring contacts connect the sensor pads with the external measuring device. The construction allows real-time live-cell imaging, and combining it with a perfusion system allows medium circulation and shear-stress stimulation. The experimental evaluation consisted of several setups, including the bare sensor without any coating as well as collagen and fibrin coatings. Adipose-derived stem cells (ASC) and human umbilical vein endothelial cells (HUVEC) were seeded onto the sensor in the cultivation chamber, which was then installed into the microscope system for live-cell imaging. The impedance measurement was performed with a vector impedance analyzer over the range from 10 Hz to 40 kHz. These impedance measurements were correlated with live-cell microscopic imaging and immunofluorescent staining. Analysis of the measured signals showed responses to cell adhesion to the substrates, proliferation, and changes after shear-stress stimulation, which are important parameters during cultivation. Further experiments are planned that will use decellularized tissue as a scaffold fixed on the sensor. This kind of impedance sensor can provide feedback about cell culture conditions on opaque surfaces and scaffolds used in tissue engineering for the development of artificial prostheses. This work was supported by the Ministry of Health, grants No. 15-29153A and 15-33018A.
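A minimal sketch of how such a 10 Hz - 40 kHz sweep is interpreted, assuming a toy equivalent circuit (medium resistance in series with the cell layer modeled as a parallel RC); all component values are invented and are not the sensor's actual parameters:

```python
import math

def cell_layer_impedance(f_hz, r_medium=200.0, r_cell=5e3, c_cell=10e-9):
    """Toy equivalent circuit: series medium resistance plus a parallel RC
    representing the cell layer. Returns complex impedance in ohms."""
    w = 2 * math.pi * f_hz
    z_cell = 1 / (1 / r_cell + 1j * w * c_cell)
    return r_medium + z_cell

# |Z| falls with frequency as the cell-layer capacitance shorts out r_cell
freqs = [10, 100, 1_000, 10_000, 40_000]
mags = [abs(cell_layer_impedance(f)) for f in freqs]

# a denser, better-adhered cell layer (higher r_cell) raises mid-band impedance
grown = abs(cell_layer_impedance(1_000, r_cell=10e3))
```

Tracking |Z| at a fixed mid-band frequency over time is one common way such sensors report adhesion and proliferation.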

Keywords: bio-impedance measuring, bioreactor, cell cultivation, diamond layer, gold interdigitated electrodes, tissue engineering

Procedia PDF Downloads 291
212 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis uses long-term EEG recordings, at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. The performance using the raw signal varied between 43% and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of the Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
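Two of the five input-stimulus types can be sketched for a single candidate epoch as follows. The descriptor set and the naive DFT are illustrative stand-ins; the study's actual descriptors and network inputs are not reproduced here.

```python
import cmath, math

def morphological_descriptors(x):
    """Simple shape features of a candidate epoch: peak-to-peak amplitude,
    mean absolute slope ("sharpness") and signal energy (illustrative set)."""
    slopes = [abs(b - a) for a, b in zip(x, x[1:])]
    return {"p2p": max(x) - min(x),
            "sharpness": sum(slopes) / len(slopes),
            "energy": sum(v * v for v in x)}

def fft_magnitudes(x):
    """Naive DFT magnitude spectrum (adequate for short ~200 ms epochs)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# invented epochs: a spike-like transient vs. a flat background segment
spike = [0.0] * 20 + [0.2, 1.5, 4.0, 1.5, 0.2] + [0.0] * 20
flat = [0.1] * len(spike)
```

Either the descriptor dictionary or the magnitude spectrum would then be flattened into the network's input vector.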

Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 516
211 Understanding Magnetic Properties of Cd1-xSnxCr2Se4 Using Local Structure Probes

Authors: P. Suchismita Behera, V. G. Sathe, A. K. Nigam, P. A. Bhobe

Abstract:

Co-existence of long-range ferromagnetism and semi-conductivity, with correlated structural, magnetic, optical and electrical properties under doping at various sites, makes CdCr2Se4 a most promising candidate for spin-based electronic applications and magnetic devices. It orders ferromagnetically below TC = 130 K with a direct band gap of ~1.5 eV. The magnetic ordering is believed to result from strong competition between the direct antiferromagnetic Cr-Cr spin couplings and the ferromagnetic Cr-Se-Cr exchange interactions. With the aim of understanding the influence of the crystal structure on its magnetic properties without disturbing the magnetic site, we investigated four compositions with 3%, 5%, 7% and 10% Sn substitution at the Cd site. Partial substitution of Cd2+ (0.78 Å) by the small nonmagnetic ion Sn4+ (0.55 Å) is expected to bring about local lattice distortion as well as a change in electronic charge distribution. The structural disorder would affect the Cd/Sn – Se bonds, thus affecting the Cr-Cr and Cr-Se-Cr bonds, whereas the charge imbalance created by Sn4+ substitution at the Cd2+ site leads to the possibility of a Cr mixed-valence state. Our investigation of the local crystal structure using EXAFS and Raman spectroscopy, and of the magnetic properties using SQUID magnetometry, of the Cd1-xSnxCr2Se4 series reflects this premise. All compositions maintain the Fd3m cubic symmetry with tetrahedral distribution of Sn at the Cd site, as confirmed by XRD analysis. Lattice parameters were determined by Rietveld refinement of the XRD data and further confirmed from the EXAFS spectra recorded at the Cr K-edge. The presence of five Raman-active phonon vibrational modes, viz. T2g (1), T2g (2), T2g (3), Eg and A1g, in the Raman spectra further confirms the crystal symmetry. The temperature dependence of the Raman data provides interesting insight into the spin–phonon coupling, known to dominate the magneto-capacitive properties in the parent compound.
Below the magnetic ordering temperature, the longitudinal damping of the Eg mode, associated with Se-Cd/Sn-Se bending, and of the T2g (2) mode, associated with the Cr-Se-Cr interaction, shows interesting deviations with increasing Sn substitution. Besides providing an estimate of TC, the magnetic measurements recorded as a function of field provide the total magnetic moment for all the studied compositions, indicative of the formation of mixed Cr valence states.

Keywords: exchange interactions, EXAFS, ferromagnetism, Raman spectroscopy, spinel chalcogenides

Procedia PDF Downloads 268
210 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model

Authors: Shreya Srivastava, Sagnik Dey

Abstract:

Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding the impact of aerosols on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution = 1° × 1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the TOA (values ranging from +2.5 W/m2 to -22.5 W/m2). Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows a higher negative ADRF. Aerosol Forcing Efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region and an increasing trend over the IGP and western south India during the post-monsoon season (SON) in clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that the change in atmospheric heating is governed not only by the aerosol loading but also by the aerosol composition and/or the aerosol vertical profile. We used Multi-angle Imaging Spectro-Radiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm based on size, shape, and absorbing properties. This aerosol mixture information was used for analysing long-term changes in aerosol composition and the dominant aerosol species corresponding to the aerosol forcing value.
Further, the ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used, with model inputs taken from OPAC (Optical Properties of Aerosols and Clouds) utilizing only limited aerosol parameter measurements. The results show a large overestimation of TOA warming by the latter (i.e., the model-based method).
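The sign convention behind statements like "values ranging from +2.5 W/m2 to -22.5 W/m2" can be sketched as below. The flux numbers are invented, and a real CERES-based estimate involves careful compositing of observed and aerosol-free scenes; this only illustrates the bookkeeping.

```python
def adrf_toa(f_up_clean, f_up_aerosol):
    """TOA shortwave direct radiative forcing from upwelling fluxes (W/m^2).
    Negative = cooling: aerosols reflect extra sunlight back to space."""
    return f_up_clean - f_up_aerosol

def forcing_efficiency(adrf, aod):
    """Aerosol forcing efficiency: forcing per unit aerosol optical depth."""
    return adrf / aod

# invented example: aerosols raise reflected SW from 100 to 112 W/m^2
f = adrf_toa(100.0, 112.0)        # -12.0 W/m^2 -> cooling
eff = forcing_efficiency(f, 0.5)  # -24.0 W/m^2 per unit AOD
```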

Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART

Procedia PDF Downloads 41
209 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom

Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr

Abstract:

The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made of several panels assembled together by wooden dowels and secured with plant ropes, and the entire box is covered with a red pigment. This study aims to use analytical techniques to identify and gain a deeper understanding of the box components. Moreover, the authors were significantly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. The identification of wood species is included in this study. Visual observation and assessment were done to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joinery techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier transform infrared spectroscopy (FTIR) were used to identify the wood species, insect body remains, red pigment, plant fibers and previous conservation adhesives; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text.
After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.

Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation

Procedia PDF Downloads 262
208 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and utilization of resources for the customers. One of the difficulties in this path is the use of, interface with, and links between software, hardware and other IT technologies to develop and optimize processes in various business fields, such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also certainly intended to demonstrate a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters within smart houses in smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage and the need to maximize utilization of the required energy. In particular, the consumption of energy in smart houses is important and considerable in the economic balance and energy management of a smart city, as it yields a significant increase in energy saving and a reduction in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters and other major elements, by interfacing between software and hardware devices as well as IT technologies.
A second objective is to enhance energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within a smart city, in combination with control over the energy consumption of the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence over the model. The resulting analysis of this model can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, due to the expense and expected shortage of natural resources in the near future, the paucity of developed research in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the construction and design phases.
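One simple instance of the comparison step is a Welch t statistic on energy-use samples, sketched below. The monthly kWh figures are invented, and the study's actual ten variables and dataset are not reproduced; a full analysis would also compute degrees of freedom and a p-value.

```python
import math
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two sample means with
    unequal variances (here: monthly energy use in kWh)."""
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)  # sample variances
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# invented monthly consumption samples (kWh)
smart = [310, 295, 330, 305, 280, 300]
trad  = [420, 455, 430, 440, 415, 460]
t = welch_t(smart, trad)  # strongly negative: smart homes use less energy
```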

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 103
207 Seroepidemiological Study of Toxoplasma gondii Infection in Women of Child-Bearing Age in Communities in Osun State, Nigeria

Authors: Olarinde Olaniran, Oluyomi A. Sowemimo

Abstract:

Toxoplasmosis is frequently misdiagnosed or underdiagnosed, and it is the third most common cause of hospitalization due to food-borne infection. Intra-uterine infection with Toxoplasma gondii due to active parasitaemia during pregnancy can cause severe and often fatal cerebral damage, abortion, and stillbirth of a fetus. The aim of the study was to investigate the prevalence of T. gondii infection in women of childbearing age in selected communities of Osun State, with a view to determining the risk factors that predispose to T. gondii infection. Five (5) ml of blood was collected by venipuncture into a plain blood collection tube by a medical laboratory scientist. Serum samples were separated by centrifuging the blood samples at 3000 rpm for 5 min. The sera were collected into Eppendorf tubes and stored at -20°C until analysis for the presence of IgG and IgM antibodies against T. gondii by a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Demeditec Diagnostics GmbH, Germany), conducted according to the manufacturer's instructions. The optical densities of the wells were measured by a photometer at a wavelength of 450 nm. Data collected were analysed using appropriate computer software. The overall seroprevalence of T. gondii among the women of child-bearing age in the seven selected communities in Osun State was 76.3%. Of the 76.3% positive for Toxoplasma gondii infection, 70.0% were positive for anti-T. gondii IgG, 32.3% for IgM, and 26.7% for both IgG and IgM. The prevalence of T. gondii was lowest (58.9%) among women from Ile-Ife, a peri-urban community, and highest (100%) in women residing in Alajue, a rural community. The prevalence of infection was significantly higher (P < 0.001) among Muslim women (87.5%) than among Christian women (70.8%). The highest prevalence (86.3%) was recorded in women with primary education, while the lowest (61.2%) was recorded in women with tertiary education (p = 0.016).
The highest prevalence (79.7%) was recorded in women residing in rural areas, and the lowest (70.1%) in women residing in peri-urban areas (p = 0.025). The prevalence of T. gondii infection was highest (81.4%) in women with one miscarriage, while the prevalence was lowest in women with no miscarriages (75.9%). The age of the women (p = 0.042), Islamic religion (p = 0.001), the residence of the women (p = 0.001) and water source were all positively associated with T. gondii infection. The study concluded that there was a high seroprevalence of T. gondii among women of child-bearing age in the study area. Hence, there is a need for health education to create awareness of the disease and its transmission among women of reproductive age in general, and pregnant women in particular, to reduce the risk of T. gondii infection in pregnancy.
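The OD-based classification step behind these prevalence figures can be sketched as follows. The cutoff logic and the 10% equivocal band are generic ELISA conventions assumed for illustration, not the Demeditec kit's actual rule.

```python
def serostatus(od450, cutoff):
    """Classify an ELISA well from its OD450 relative to the kit cutoff.
    The +/-10% equivocal band is an illustrative convention."""
    index = od450 / cutoff
    if index >= 1.1:
        return "positive"
    if index <= 0.9:
        return "negative"
    return "equivocal"

def seroprevalence(statuses):
    """Percent of samples classified positive."""
    pos = sum(s == "positive" for s in statuses)
    return 100 * pos / len(statuses)

cohort = ["positive", "negative", "positive", "equivocal"]
prev = seroprevalence(cohort)  # 50.0
```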

Keywords: seroepidemiology, Toxoplasma gondii, women, child-bearing age, communities, Ile-Ife, Nigeria

Procedia PDF Downloads 168
206 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation.
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision, i.e., accurately detected defects rather than false positives. Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of existing defects that were detected. Review speed was significantly correlated with false negatives: with an increase in review speed, accuracy declined. On the other hand, review speed correlated with subjective performance assessments: reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates, in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.
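The correlation analyses above (NASA TLX subscales vs. review speed and accuracy measures) come down to Pearson coefficients between per-session variables. The sketch below shows the computation on made-up session data, not the study's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session data: review speed (km/hour) and false negatives
speed = [4.0, 5.5, 6.0, 7.2, 8.1, 9.0]
false_negatives = [1, 2, 2, 4, 5, 6]
print(f"r(speed, false negatives) = {pearson_r(speed, false_negatives):.2f}")
```

A strong positive coefficient here would mirror the reported finding that faster reviews miss more defects.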

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 111
205 A Stepwise Approach for Piezoresistive Microcantilever Biosensor Optimization

Authors: Amal E. Ahmed, Levent Trabzon

Abstract:

Due to the low concentration of analytes in biological samples, the use of Biological Microelectromechanical System (Bio-MEMS) biosensors for biomolecule detection results in a minuscule output signal that is not good enough for practical applications. In response to this, a need has arisen for an optimized biosensor capable of giving a high output signal in response to the detection of few analytes in the sample; the ultimate goal is being able to convert the attachment of a single biomolecule into a measurable quantity. For this purpose, MEMS microcantilever-based biosensors emerged as a promising sensing solution because they are simple, cheap, very sensitive and, more importantly, do not need optical labeling of the analytes (label-free). Among the different microcantilever transducing techniques, piezoresistive microcantilever biosensors became more prominent because they work well in liquid environments and have an integrated readout system. However, the design of piezoresistive microcantilevers is not a straightforward problem, due to the coupling between design parameters, constraints, process conditions, and performance. It was found that the parameters that can be optimized to enhance the sensitivity of piezoresistive microcantilever-based sensors are: cantilever dimensions, cantilever material, cantilever shape, piezoresistor material, piezoresistor doping level, piezoresistor dimensions, piezoresistor position, and the stress concentration region (SCR) shape and position. After a systematic analysis of the effect of each design and process parameter on the sensitivity, a step-wise optimization approach was developed in which almost all these parameters were varied one at a time, fixing the others, to get the maximum possible sensitivity at the end. At each step, the goal was to optimize the parameter such that it maximizes and concentrates the stress in the piezoresistor region for the same applied force, thus achieving higher sensitivity.
Using this approach, an optimized sensor was obtained with 73.5 times higher electrical sensitivity (ΔR/R) than the starting sensor. In addition, this piezoresistive microcantilever biosensor is more sensitive than similar sensors previously reported in the open literature. The mechanical sensitivity of the final sensor is −1.5×10⁻⁸ (Ω/Ω)/pN, which means that for each 1 pN (corresponding to roughly 10⁻¹⁰ g) of biomolecules attached to this biosensor, the relative resistance of the piezoresistor decreases by 1.5×10⁻⁸. Throughout this work, COMSOL Multiphysics 5.0, a commercial Finite Element Analysis (FEA) tool, was used to simulate the sensor performance.
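The step-wise, one-parameter-at-a-time procedure described above can be sketched as a coordinate sweep: each design parameter is varied over its candidate values while the others are held at their current best, keeping the value that maximizes a sensitivity model. The objective below is a toy surrogate with hypothetical parameter names, standing in for the finite-element evaluation used in the study.

```python
def stepwise_optimize(objective, start, candidates):
    """Vary one parameter at a time, fixing the others at their current best,
    and keep the candidate value that maximizes the objective."""
    best = dict(start)
    for name, values in candidates.items():
        scores = {}
        for value in values:
            trial = dict(best)
            trial[name] = value
            scores[value] = objective(trial)
        best[name] = max(scores, key=scores.get)
    return best, objective(best)

# Toy surrogate for electrical sensitivity (arbitrary units); the real study
# evaluated each candidate design in COMSOL instead of a closed-form model.
def sensitivity(p):
    return p["length_um"] * p["doping_factor"] / p["thickness_um"] ** 2

start = {"length_um": 100, "thickness_um": 2.0, "doping_factor": 1.0}
grid = {
    "length_um": [100, 200, 300],
    "thickness_um": [0.5, 1.0, 2.0],
    "doping_factor": [0.5, 1.0, 1.5],
}
best, score = stepwise_optimize(sensitivity, start, grid)
print(best, score)
```

Note that such a coordinate sweep finds the true optimum only when the parameters are weakly coupled; for strongly coupled parameters the sweep order can matter, which is why the abstract stresses a systematic analysis of each parameter's effect first.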

Keywords: biosensor, microcantilever, piezoresistive, stress concentration region (SCR)

Procedia PDF Downloads 560
204 A Bottom-Up Approach for the Synthesis of Highly Ordered Fullerene-Intercalated Graphene Hybrids

Authors: A. Kouloumpis, P. Zygouri, G. Potsi, K. Spyrou, D. Gournis

Abstract:

Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and behavior, suitable for applications in, among others, gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing, and biology. Towards this aim, here we describe a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer technique, for the production of fullerene-intercalated graphene hybrid materials. This new method uses graphene nanosheets as a template for the grafting of various fullerene C60 molecules (pure C60, bromo-fullerenes C60Br24, and fullerols C60(OH)24) in a bi-dimensional array, and allows for perfect layer-by-layer growth with control at the molecular level. Our film preparation approach involves a bottom-up layer-by-layer process that includes the formation of a hybrid organo-graphene Langmuir film hosting fullerene molecules within its interlayer spacing. A dilute aqueous solution of chemically oxidized graphene (GO) was used as the subphase in the Langmuir-Blodgett deposition system, while an appropriate amino surfactant (that binds covalently to the GO) was applied for the formation of hybridized organo-GO. After the horizontal lift of a hydrophobic substrate, a surface modification of the GO platelets was performed by bringing the surface of the transferred Langmuir film into contact with a second amino surfactant solution (capable of interacting strongly with the fullerene derivatives). In the final step, the hybrid organo-graphene film was lowered into the solution of the appropriate fullerene derivative. Multilayer films were constructed by repeating this procedure. Hybrid fullerene-based thin films deposited on various hydrophobic substrates were characterized by X-ray diffraction (XRD) and X-ray reflectivity (XRR), FTIR and Raman spectroscopies, atomic force microscopy, and optical measurements. Acknowledgments:
This research has been co‐financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF)‐Research Funding Program: THALES. Investing in knowledge society through the European Social Fund (no. 377285).

Keywords: hybrids, graphene oxide, fullerenes, Langmuir-Blodgett, intercalated structures

Procedia PDF Downloads 316
203 Cockpit Integration and Piloted Assessment of an Upset Detection and Recovery System

Authors: Hafid Smaili, Wilfred Rouwhorst, Paul Frost

Abstract:

Trends in recent accident and incident cases worldwide show that state-of-the-art automation and operations, for current and future demanding operational environments, do not provide the desired level of operational safety under crew peak workload conditions, specifically in complex situations such as loss-of-control in-flight (LOC-I). Today, the short-term focus is on preparing crews to recognise and handle LOC-I situations through upset recovery training. This paper describes the cockpit integration aspects and piloted assessment of both a manually assisted and an automatic upset detection and recovery system that has been developed and demonstrated within the European Advanced Cockpit for Reduction Of StreSs and workload (ACROSS) programme. The proposed system continuously monitors the aircraft state, intervenes when the aircraft enters an upset, and either provides manually pilot-assisted guidance or takes over full control of the aircraft to recover from the upset. In order to mitigate the high physical and psychological impact of aircraft upset events, the system provides new cockpit functionalities to support the pilot in recovering from any upset, both manually assisted and automatically. A piloted simulator assessment was conducted in October-November 2015 with ten pilots in a representative large civil fly-by-wire transport aircraft, evaluating the tested upset detection and recovery system configurations in terms of pilot workload reduction, situational awareness, and safe interaction with the manually assisted or automated modes. The piloted simulator evaluation showed that the functionalities of the system are able to support pilots during an upset. The experiment showed that pilots are willing to rely on the guidance provided by the system during an upset. It is, however, important for pilots to see and understand what the aircraft is doing and trying to do, especially in automatic modes.
Comparing the manually assisted and the automatic recovery modes, the pilots' opinion was that an automatic recovery reduces the workload so that they could perform a proper screening of the primary flight display. The results further show that the manually assisted recoveries, with recovery guidance cues on the cockpit primary flight display, reduced workload for severe upsets compared to today's situation. The level of situation awareness was improved for automatic upset recoveries in which the pilot could monitor what the system was trying to accomplish, compared to automatic recovery modes without any guidance. An improvement in situation awareness was also noticeable with the manually assisted upset recovery functionalities as compared to the current non-assisted recovery procedures. This study shows that automatic upset detection and recovery functionalities are likely to have a positive impact on operational safety by means of reduced workload, improved situation awareness, and crew stress reduction. It is thus believed that future developments for upset recovery guidance and loss-of-control prevention should focus on automatic recovery solutions.

Keywords: aircraft accidents, automatic flight control, loss-of-control, upset recovery

Procedia PDF Downloads 201
202 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs, in order to determine the bacterial susceptibility (this second culturing stage also takes 24 h). Other approaches, such as genotyping methods, the E-test, and automated systems, were also developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, new bioinformatics analyses combined with IR spectroscopy become a powerful technique, which enables the detection of structural changes associated with antibiotic resistance.
The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The urinary tract infection (UTI) E. coli samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
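As a minimal illustration of spectrum-based classification into sensitive and resistant classes, the sketch below trains a nearest-centroid classifier on synthetic "spectra"; the actual study used measured FTIR spectra and more advanced multivariate methods, so this is only a toy stand-in for the overall workflow.

```python
import random

random.seed(0)

def make_spectrum(shift, n_points=50):
    """Synthetic 'absorbance spectrum': a baseline plus a class-dependent shift."""
    return [0.5 + shift + random.gauss(0, 0.02) for _ in range(n_points)]

def centroid(spectra):
    """Mean spectrum of a list of equal-length spectra."""
    n = len(spectra)
    return [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]

def classify(spectrum, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

train = {
    "sensitive": [make_spectrum(0.00) for _ in range(20)],
    "resistant": [make_spectrum(0.10) for _ in range(20)],
}
centroids = {label: centroid(spectra) for label, spectra in train.items()}

held_out = make_spectrum(0.10)  # a held-out 'resistant' sample
print(classify(held_out, centroids))
```

Real spectra are far less separable than this toy example, which is why the study pairs the spectroscopy with multivariate analysis and reports success rates rather than perfect separation.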

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 165
201 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a permanent three-gray-scale artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by least-squares regression, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range are then LH = G × DNH + B and LL = B, respectively, where DNH = 2ⁿ − 1 (n is the quantization bit depth of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹·m⁻²·µm⁻¹, and the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹.
Moreover, in order to validate the result, a comparison of the measured radiance with radiative-transfer-code predictions is performed over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. It is noted that the relative error of the calibration is within 6.6%.
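The calibration procedure above (fit L = G × DN + B, then derive the dynamic range and linearity) can be sketched with an ordinary least-squares fit. The DN/radiance pairs below are synthetic, generated from the reported coefficients rather than taken from the campaign data, and the 12-bit quantization depth is an assumption (it is consistent with the reported high point of 30.5 W·sr⁻¹·m⁻²·µm⁻¹).

```python
def fit_line(dns, radiances):
    """Least-squares fit of L = G * DN + B; returns (G, B, r), where r is the
    correlation coefficient used as the response-linearity measure."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(radiances) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dns, radiances))
    sxx = sum((x - mx) ** 2 for x in dns)
    syy = sum((y - my) ** 2 for y in radiances)
    g = sxy / sxx
    b = my - g * mx
    r = sxy / (sxx * syy) ** 0.5
    return g, b, r

# Synthetic DN/radiance pairs generated from the reported G and B
dns = [500, 1200, 2100, 3000, 3800]
radiances = [0.0083 * dn - 3.5 for dn in dns]  # W·sr⁻¹·m⁻²·µm⁻¹

g, b, r = fit_line(dns, radiances)
n_bits = 12                   # assumed quantization bit depth
dn_high = 2 ** n_bits - 1     # DNH = 2^n − 1
print(f"G={g:.4f}, B={b:.1f}, linearity r={r:.4f}")
print(f"dynamic range: low={b:.1f}, high={g * dn_high + b:.1f}")
```

With n = 12 the high point comes out near 30.5 W·sr⁻¹·m⁻²·µm⁻¹, matching the value quoted in the abstract.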

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 263
200 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. First, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into account past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
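The forecasting setup described above (predict the next 24 hours of demand from past demand plus predicted temperature) reduces to building supervised input/target windows from the time series before any recurrent model is trained. The sketch below shows only this windowing step, on made-up hourly series, leaving the LSTM model itself to a deep learning library; the window lengths are assumptions, not the paper's configuration.

```python
def make_windows(demand, temperature_forecast, history=48, horizon=24):
    """Build (input, target) pairs: each input is the past `history` demand
    values plus the next `horizon` forecast temperatures; the target is the
    next `horizon` demand values."""
    samples = []
    for t in range(history, len(demand) - horizon + 1):
        x = demand[t - history:t] + temperature_forecast[t:t + horizon]
        y = demand[t:t + horizon]
        samples.append((x, y))
    return samples

# Made-up hourly series (one week) with a simple daily cycle, for illustration
demand = [50 + (h % 24) for h in range(168)]
temperature = [10 + (h % 24) / 2 for h in range(168)]

samples = make_windows(demand, temperature)
x0, y0 = samples[0]
print(len(samples), len(x0), len(y0))
```

Each `(x, y)` pair would then be fed to the recurrent network; the accuracy decay over the 24-hour horizon noted in the abstract is measured by scoring each of the `horizon` target positions separately.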

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 130
199 Green-Synthesized β-Cyclodextrin Membranes for Humidity Sensors

Authors: Zeineb Baatout, Safa Teka, Nejmeddine Jaballah, Nawfel Sakly, Xiaonan Sun, Mustapha Majdoub

Abstract:

Currently, the economic interest in the development of bio-based materials makes biomass one of the most attractive areas for science development. We are interested in β-cyclodextrin (β-CD), one of the popular bio-sourced macromolecules, produced from starch via enzymatic conversion. It is a cyclic oligosaccharide formed by the association of seven glucose units. It presents a rigid, conical, amphiphilic structure with a hydrophilic exterior, allowing it to be water-soluble, and a hydrophobic interior enabling the formation of inclusion complexes, which supports its application in the elaboration of electrochemical and optical sensors. Nevertheless, the water solubility of β-CD makes its use as a sensitive layer limited and difficult, due to its instability in aqueous media. To overcome this limitation, we chose to proceed by modifying the hydroxyl groups to obtain hydrophobic derivatives, which lead to water-stable sensing layers. Hence, a series of benzylated β-CDs was synthesized in basic aqueous media in one pot. This work reports the synthesis of a new family of substituted amphiphilic β-CDs using a green methodology. The obtained β-CDs showed different degrees of substitution (DS), between 0.85 and 2.03. These organic macromolecular materials were soluble in common volatile organic solvents, and their structures were investigated by NMR and FT-IR spectroscopies and MALDI-TOF mass spectrometry. Thermal analysis showed a correlation between the thermal properties of these derivatives and the benzylation degree. The surface properties of thin films based on the benzylated β-CDs were characterized by contact angle measurements and atomic force microscopy (AFM). These organic materials were investigated as sensitive layers, deposited on a quartz crystal microbalance (QCM) gravimetric transducer, for humidity sensing at room temperature. The results showed that the performances of the prepared sensors are greatly influenced by the benzylation degree of the β-CD.
The partially modified β-CD (DS = 1) shows a linear response with the best sensitivity, good reproducibility, low hysteresis, and fast response (15 s) and recovery (17 s) times over relative humidity (RH) levels between 11% and 98% at room temperature.
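For the QCM gravimetric transducer used above, the standard Sauerbrey relation links the resonance frequency shift to the adsorbed mass per unit area. The sketch below computes the mass sensitivity of an assumed 5 MHz AT-cut crystal; the abstract does not state the crystal's fundamental frequency, so this value is an assumption for illustration.

```python
import math

def sauerbrey_sensitivity(f0_hz):
    """Sauerbrey mass sensitivity C = 2*f0^2 / sqrt(rho_q * mu_q) in Hz·cm²/g,
    with quartz density rho_q (g/cm³) and shear modulus mu_q (g/(cm·s²))."""
    rho_q = 2.648      # density of quartz, g/cm^3
    mu_q = 2.947e11    # shear modulus of AT-cut quartz, g/(cm*s^2)
    return 2 * f0_hz ** 2 / math.sqrt(rho_q * mu_q)

f0 = 5e6  # assumed 5 MHz fundamental frequency
c = sauerbrey_sensitivity(f0)  # Hz·cm²/g
print(f"~{c * 1e-6:.1f} Hz frequency shift per µg/cm² of adsorbed mass")
```

Because the sensitivity scales with f0², the choice of crystal frequency directly sets how small a humidity-induced mass uptake the sensing layer can resolve.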

Keywords: β-cyclodextrin, green synthesis, humidity sensor, quartz crystal microbalance

Procedia PDF Downloads 262