Search results for: resolution down converter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1690

100 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta

Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi

Abstract:

Monitoring of reservoir properties prior to well placement and production is a requirement for optimisation and efficient oil and gas production. This is usually done using well-log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, which incorporates numerous 3-D seismic surveys of the same field acquired with the same parameters and portrays the transient changes in the reservoir due to production effects over time, could be utilised because it generates better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log and geologic data to the monitoring of reservoirs in the EK field of the Niger Delta. It aimed at locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² belonging to the early (18 Ma) Miocene. Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analysed to determine the seismic structures, horizons, well-to-seismic tie (WST), and wavelets. Well logs and production history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis and inversion, alongside geological data, were undertaken using Petrel, Shell-nDi, Techlog and Jason software. Well-to-seismic ties, formation evaluation and saturation monitoring using petrophysical and geological data and software were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the amount of change in the reservoir was defined by the differences in acoustic impedance (AI) inversions of the base and the monitor seismic. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion. The estimated rock properties were used to produce AI maps. 
The structural analysis showed the dominance of NW-SE trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. The analysed wavelets revealed consistent amplitude and phase for the WST; hence, a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around well EK 39, with good yield properties. The comparison between the base AI and the current monitor AI, together with the generated AI maps, revealed zones of untapped hydrocarbons and assisted in determining fluid movement. The inverted sections through EK 27 and EK 39 (within 3101 m - 3695 m) indicated depletion in the reservoirs. The extent of the present non-uniform gas-oil contact and oil-water contact movements was from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterisation, well development and the location of deeper and bypassed hydrocarbon reservoirs.
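The core 4-D step described here, differencing the base and monitor acoustic-impedance volumes to flag zones of change, can be sketched as follows (a minimal illustration with toy arrays; the function name and threshold are hypothetical, not from the study):

```python
import numpy as np

def ai_difference_map(ai_base, ai_monitor, threshold):
    """Return the time-lapse acoustic-impedance change and a mask of
    cells whose change exceeds the threshold (candidate zones of
    fluid movement or depletion)."""
    delta_ai = ai_monitor - ai_base          # 4-D (time-lapse) difference
    changed = np.abs(delta_ai) > threshold   # flag significant changes
    return delta_ai, changed

# toy 2x3 impedance grids (arbitrary units)
base = np.array([[6.0, 6.2, 6.1], [6.3, 6.0, 6.4]])
monitor = np.array([[6.0, 6.9, 6.1], [6.3, 5.2, 6.4]])
delta, mask = ai_difference_map(base, monitor, threshold=0.5)
print(mask.sum())  # 2 cells flagged
```

In the study the thresholding and mapping were done with commercial software (Petrel, Jason); the sketch only shows the arithmetic of the base-minus-monitor comparison.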

Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger delta basin

Procedia PDF Downloads 98
99 Relationship between Thumb Length and Pointing Performance on Portable Terminal with Touch-Sensitive Screen

Authors: Takahiro Nishimura, Kouki Doi, Hiroshi Fujimoto

Abstract:

Touch-sensitive screens that serve as both displays and input devices have been adopted in many portable terminals, such as smartphones and personal media players, and the market for touch-sensitive screens has expanded greatly. One of the advantages of touch-sensitive screens is flexibility in graphical user interface (GUI) design, and it is imperative to design an appropriate GUI to realize an easy-to-use interface. Moreover, it is important to evaluate the relationship between pointing performance and GUI design. There is much knowledge regarding easy-to-use GUI designs for portable terminals with touch-sensitive screens, but most of it has focused on GUI design approaches for women or children with small hands. In contrast, GUI design approaches for users with large hands have not received sufficient attention. In this study, to obtain knowledge that contributes to the establishment of individualized easy-to-use GUI design guidelines, we conducted experiments to investigate the relationship between thumb length and pointing performance on portable terminals with touch-sensitive screens. Fourteen college students participated in the experiment and were divided into two groups based on thumb length. Specifically, based on a Japanese anthropometric database, participants with thumbs longer than 64.2 mm were placed in the L (Long) group, and those with thumbs longer than 57.4 mm but shorter than 64.2 mm in the A (Average) group. They took part in this study under the authorization of Waseda University’s ‘Ethics Review Committee on Research with Human Subjects’. We created an application for the experimental task and implemented it on a portable terminal with a projected capacitive touch-sensitive screen (iPod touch, 4th generation). The display measured 3.5 inches with a 960 × 640-pixel resolution at 326 ppi (pixels per inch). This terminal was selected as the experimental device because of its wide use and market share. 
The operational procedure of the application is as follows. First, the participants placed their thumb on the start position. Then, one cross-shaped target out of a 10 × 7 array of 70 positions appeared at random. The participants pointed at the target with their thumb as accurately and as quickly as possible, then returned their thumb to the start position and waited. The operation ended when this procedure had been repeated until each of the 70 targets had been pointed at once. We adopted absolute error, variable error, and pointing time as evaluation indices to investigate pointing performance when using the portable terminal. The results showed that pointing performance varied with thumb length. In particular, on the lower right side of the screen, the performance of the L group, with long thumbs, was low. Further, we present an approach for designing easy-to-use button GUIs for users with long thumbs. The contributions of this study include revealing the relationship between pointing performance and a user's thumb length when using a portable terminal, in terms of accuracy, precision, and speed of pointing. We hope that these findings contribute to easy-to-use GUI designs for users with large hands.
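The two spatial indices can be illustrated with a short sketch. The definitions below (mean distance from the target for absolute error, dispersion about the endpoint centroid for variable error) are common operationalizations and are assumed here, since the abstract does not give formulas:

```python
import numpy as np

def pointing_errors(endpoints, target):
    """Absolute error: mean Euclidean distance of touch endpoints from
    the target. Variable error: mean distance of endpoints from their
    own centroid (a measure of precision)."""
    pts = np.asarray(endpoints, dtype=float)
    tgt = np.asarray(target, dtype=float)
    abs_err = np.mean(np.linalg.norm(pts - tgt, axis=1))
    centroid = pts.mean(axis=0)
    var_err = np.mean(np.linalg.norm(pts - centroid, axis=1))
    return abs_err, var_err

taps = [(101, 52), (99, 48), (103, 50)]   # pixel coordinates of three taps
ae, ve = pointing_errors(taps, target=(100, 50))
```

In this toy case the taps are biased slightly right of the target, so the absolute error exceeds the variable error.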

Keywords: pointing performance, portable terminal, thumb length, touch-sensitive screen

Procedia PDF Downloads 138
98 Need for Elucidation of Palaeoclimatic Variability in the High Himalayan Mountains: A Multiproxy Approach

Authors: Sheikh Nawaz Ali, Pratima Pandey, P. Morthekai, Jyotsna Dubey, Md. Firoze Quamar

Abstract:

High mountain glaciers are among the most sensitive recorders of climate change, because they respond to the combined effect of snowfall and temperature. The Himalayan glaciers have been studied at a good pace during the last decade. However, owing to its large ecological diversity and geographical variety, a major part of the Indian Himalaya remains uninvestigated, and hence the palaeoclimatic patterns, as well as the chronology of past glaciations in particular, remain controversial for the entire Indian Himalayan transect. Although the Himalayan glaciers are nourished by two important climatic systems, viz. the southwest summer monsoon and the mid-latitude westerlies, the relative influence of these systems is yet to be understood. Nevertheless, the existing chronology (mostly exposure ages) indicates that, irrespective of geographical position, glaciers seem to have grown during phases of enhanced Indian summer monsoon (ISM). The Himalayan mountain glaciers are referred to as the third pole or water tower of Asia, as they form a huge reservoir of fresh water supplies for the Asian countries. Mountain glaciers are sensitive probes of the local climate, and thus they present both an opportunity and a challenge to interpret climates of the past as well as to predict future changes. The principal objective of all palaeoclimatic studies is to develop predictive models/scenarios. However, it has been found that the glacial chronologies bracket only the major phases of climatic events, and other climatic proxies are sparse in the Himalaya. This is the reason that compilations of data on rapid climatic change during the Holocene show major gaps in this region. Sedimentation in proglacial lakes, conversely, is more continuous and hence can be used to reconstruct a more complete record of past climatic variability that is modulated by the changing ice volume of the valley glacier. 
The Himalayan region has numerous proglacial lacustrine deposits formed during the late Quaternary period. However, only a few such deposits have been studied so far. Therefore, it is high time that efforts were made to systematically map the moraines located in different climatic zones, reconstruct the local and regional moraine stratigraphy, and use multiple dating techniques to bracket the events of glaciation. Besides this, emphasis must be given to carrying out multiproxy studies on the lacustrine sediments, which will provide high-resolution palaeoclimatic data for the alpine region of the Himalaya. Although the Himalayan glaciers fluctuated in accordance with changing climatic conditions (natural forcing), it is too early to arrive at any conclusion. It is crucial to generate multiproxy data sets covering wider geographical and ecological domains, taking into consideration the multiple parameters that directly or indirectly influence glacier mass balance as well as the local climate of a region.

Keywords: glacial chronology, palaeoclimate, multiproxy, Himalaya

Procedia PDF Downloads 233
97 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with non-linearity in data from the natural environment; therefore, accurate estimation is difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world, since ANNs have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and, therefore, of crop yield estimation. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time as a ground-truthing campaign was performed for chlorophyll estimation using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. 
For training the MLP, 61.7% of the data were used; 28.3% were used for validation; and the remaining 10% were used to evaluate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggested that retrieval of crop chlorophyll content from high spatial resolution satellite imagery using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
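Two of the listed indices are simple band ratios; a minimal sketch, assuming Landsat 8 OLI surface-reflectance bands (B3 = green, B4 = red, B5 = NIR) and toy reflectance values (not data from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: substitutes the green band for the red band."""
    return (nir - green) / (nir + green)

# toy surface-reflectance values for one healthy-vegetation pixel
nir, red, green = 0.45, 0.10, 0.12
print(round(ndvi(nir, red), 3))    # 0.636
print(round(gndvi(nir, green), 3)) # 0.579
```

The CARI/MCARI/TCARI family involves additional red-edge terms; in the study all indices were computed in ERDAS Imagine and the index values then served as inputs to the MLP.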

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 120
96 Technology in Commercial Law Enforcement: Tanzania, Canada, and Singapore Comparatively

Authors: Katarina Revocati Mteule

Abstract:

The background of this research arises from global demands for fair business opportunities. As one of the responses to these demands, nations embarked on reforms of commercial laws. In the 1990s, Tanzania resorted to economic transformation through liberalization to attract more investment, including reform of commercial law enforcement. This research scrutinizes the effectiveness of the reforms in Tanzania, in comparison with Canada and Singapore, and the role of technology. The methodology is doctrinal legal research mixed with international comparative legal research. It involves comparative analysis of library, online, and internet resources, as well as case laws and statutory laws. Tanzania, Canada and Singapore are sampled as comparators based on their distinct levels of economic development. The criteria of analysis include the nature of the reforms, the type of technology, the technological infrastructure and the technical competence of human resources in each country. As the world progresses towards reforms in commercial laws, improvements in law, policy, and regulatory frameworks are paramount. Specifically, commercial laws are essential in contract enforcement and dispute resolution, and how they cope with modern technologies is a concern. Harnessing the best technology is necessary to cope with modernity in world businesses. In line with this, Tanzania is improving its business environment, including law enforcement mechanisms that are supportive of investment. Reforms such as specialized commercial law enforcement, coupled with alternative dispute resolution methods such as arbitration, mediation, and reconciliation, are emphasized. Court technology is one of the reform tools given high priority. This research evaluates the progress and the effectiveness of the reforms in commercial laws towards a friendly business environment in Tanzania, in comparison with Canada and Singapore. 
The experience of Tanzania is compared with that of Canada and Singapore to see what each country can improve to enhance quick and fair enforcement of commercial law. The research proposes necessary global standards for procedures and national laws to offer a business-friendly environment and the use of appropriate technology. Solutions are proposed for tackling the challenges of delays in enforcing commercial laws, such as case management, funding, legal and procedural hindrances, laxity among staff, and abuse of court process among litigants, all in line with modern technology. It is a finding of the research that proper use of technology has managed to reduce case backlogs and the time taken to resolve a commercial dispute; to increase court integrity by minimizing the human contacts in commercial law enforcement that may lead to solicitation of favors; and to save parties' time through online services. Among the three countries, each one faces a distinct challenge owing to its level of poverty and remoteness from online services. How solutions are found in one country is a lesson to another. To conclude, this paper suggests solutions for improving commercial law enforcement mechanisms in line with modern technology. The call for technological transformation is essential for the enforcement of commercial laws.

Keywords: commercial law, enforcement, technology

Procedia PDF Downloads 33
95 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, owing to the notion of observability: any feature smaller than the actual resolution (physical or numerical), i.e., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated and show the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms will generate smaller scales which sharpen the interface, causing discontinuities. 
Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on the numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are particularly examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
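The filtering of the convective velocity by an observable scale can be illustrated with a Helmholtz-type low-pass filter on a 1D periodic domain. This is a sketch under the assumption that the observable filter takes this common form with filter scale alpha; the abstract itself does not specify the filter:

```python
import numpy as np

def helmholtz_filter(u, alpha, L=2*np.pi):
    """Low-pass 'observable' filter: solve (1 - alpha^2 d^2/dx^2) ubar = u
    on a periodic domain via FFT. alpha plays the role of the observable
    scale (about one grid length in practice)."""
    n = u.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)   # angular wavenumbers
    uhat = np.fft.fft(u)
    ubar = np.fft.ifft(uhat / (1 + (alpha*k)**2)).real
    return ubar

x = np.linspace(0, 2*np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3*np.sin(20*x)   # smooth mode plus a small-scale mode
ubar = helmholtz_filter(u, alpha=0.2)
```

Each Fourier mode k is attenuated by 1/(1 + (alpha*k)^2), so the small-scale mode (k = 20) is damped by a factor of 17 while the smooth mode (k = 1) is nearly untouched, which is the qualitative behavior the observable regularization relies on.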

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 472
94 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water use by agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), control of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured around a survey of about 100 recent research studies, analyzing varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for controlling changes. 
The innovation of this paper lies in categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for, and/or the magnitudes of, the errors reported by recent studies employing different approaches in the three proposed parts. Additionally, as an overall conclusion, we attempt to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 43
93 Sugar-Induced Stabilization Effect of Protein Structure

Authors: Mitsuhiro Hirai, Satoshi Ajito, Nobutaka Shimizu, Noriyuki Igarashi, Hiroki Iwase, Shinichi Takata

Abstract:

Sugars and polyols are known to be bioprotectants, preventing, for example, protein denaturation and enzyme deactivation, and they are widely used as nontoxic additives in various industrial and medical products. The mechanism of their protective action has been explained by specific binding between biological components and additives, changes in solvent viscosity, and changes in surface tension and free energy upon transfer of those components into additive solutions. On the other hand, some organisms with tolerance of extreme environments produce stress proteins and/or accumulate sugars in their cells, a phenomenon called cryptobiosis. In particular, trehalose has been drawing attention in relation to cryptobiosis under external stresses such as high or low temperature, drying, osmotic pressure, and so on. The function of trehalose in cryptobiosis has been explained by the restriction of intra- and/or inter-molecular movement through vitrification, or by the replacement of water molecules by trehalose. Previous results suggest that the structure of, and interaction between, sugar and water are key determinants for understanding cryptobiosis. Recently, we have shown direct evidence that protein hydration (solvation) and structural stability against chemical and thermal denaturation significantly depend on the sugar species and on glycerol. Sugar and glycerol molecules tend to be preferentially, or weakly, excluded from the protein surface and preserve the native protein hydration shell. Due to this protective action on the protein hydration shell, the protein structure is stabilized against chemical (guanidinium chloride) and thermal denaturation. The protective action depends on the sugar species. To understand the above trend and the differences in detail, it is essential to clarify the characteristics of solutions containing these additives. 
In this study, using a wide-angle X-ray scattering technique covering a wide spatial region (~3-120 Å), we have clarified the structures of sugar solutions at concentrations from 5% w/w to 65% w/w. The sugars measured in the present study were monosaccharides (glucose, fructose, mannose) and disaccharides (sucrose, trehalose, maltose). From the scattering data observed over a wide spatial range, we succeeded in obtaining information on the internal structure of the individual sugar molecules and on the correlations between them. Every sugar gradually shortened the average inter-molecular distance as the concentration increased. The inter-molecular interaction between sugar molecules essentially showed an exclusive tendency for every sugar, which appeared as the presence of a repulsive correlation hole. This trend was weaker for trehalose compared with the other sugars. The intermolecular distance and the spread of the individual molecules clearly depended on the sugar species. We discuss the relation between the characteristics of a sugar solution and its protective action on biological materials.
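The forward calculation that links a set of molecular positions to an orientation-averaged scattering curve is the Debye formula, I(q) = sum over pairs of f_i f_j sin(q r_ij)/(q r_ij). The sketch below is a generic illustration for point scatterers with equal form factors, not the analysis code used in the study:

```python
import numpy as np

def debye_intensity(q, positions, f=1.0):
    """Orientation-averaged Debye scattering intensity for point
    scatterers with a common form factor f."""
    pos = np.asarray(positions, float)
    # pairwise distance matrix r_ij
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    qr = np.multiply.outer(np.asarray(q, float), d)
    # sin(qr)/qr with the qr -> 0 limit handled explicitly
    sinc = np.where(qr == 0, 1.0, np.sin(qr) / np.where(qr == 0, 1.0, qr))
    return f * f * sinc.sum(axis=(-2, -1))

# a pair of scatterers at unit distance: I(q) = 2 + 2*sin(q)/q
I_pair = debye_intensity(np.array([1.0]), [[0.0, 0, 0], [1.0, 0, 0]])
```

Inter-molecular correlations such as the repulsive correlation hole appear in the measured curve as deviations of I(q) from the single-molecule (form factor) contribution.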

Keywords: hydration, protein, sugar, X-ray scattering

Procedia PDF Downloads 118
92 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to inform the reader briefly about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI aids in chest radiology include the following. It detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from non-tension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localizes vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation. 
It provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool in daily radiology practice. It is assumed that the continuing progress of computerized systems and the improvements in software algorithms will render AI the second hand of the radiologist.
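As one concrete example of the density evaluation mentioned above, emphysema is commonly quantified as the percentage of lung voxels below -950 HU (often called %LAA-950); a minimal sketch with a toy HU array and lung mask, not a clinical implementation:

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels below the attenuation threshold
    (%LAA-950, a common density-based emphysema score)."""
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

# toy 2x3 CT slice in HU; mask excludes one non-lung voxel
hu = np.array([[-980, -960, -800], [-700, -955, -400]])
mask = np.array([[True, True, True], [True, True, False]])
print(emphysema_index(hu, mask))  # 60.0
```

The same pattern (mask, threshold, percentage) underlies the reported tissue percentages within defined HU ranges.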

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 43
91 Howard Mold Count of Tomato Pulp Commercialized in the State of São Paulo, Brazil

Authors: M. B. Atui, A. M. Silva, M. A. M. Marciano, M. I. Fioravanti, V. A. Franco, L. B. Chasin, A. R. Ferreira, M. D. Nogueira

Abstract:

Fungi attack large amounts of fruit, and fruit that has suffered an injury on its surface is more susceptible to their growth, as fungi have pectinolytic enzymes that destroy the edible portion, forming an amorphous and soft dough. The spores can reach the plant by wind, rain and insects, and the fruit may carry on its surface, besides contaminants from the fruit trees, land and water, a flora composed mainly of yeasts and molds. Other contamination can occur through the equipment used in the harvest, the use of contaminated boxes and water for fruit washing, and storage in dirty places. Hyphae in tomato products indicate the use of contaminated raw material or unsuitable hygiene conditions during processing. Although fungi are inactivated in the heat processing step, their hyphae remain in the final product, and their detection and quantification is an indicator of the quality of the raw material. The Howard method, which counts fungal mycelia in industrialized pulps, evaluates the amount of decayed fruit in the raw material. The Brazilian legislation governing processed and packaged products sets a limit of 40% of positive fields in tomato pulps. The aim of this study was to evaluate the quality of the tomato pulp sold in greater São Paulo through monitoring during the four seasons of the year. Throughout 2010, 110 samples were examined: 21 taken in spring, 31 in summer, 31 in fall and 27 in winter, all from different lots and trademarks. Samples were picked up in several stores located in the city of São Paulo. The Howard method recommended by the AOAC (19th ed., 2011), method 965.41, was used. One hundred percent of the samples contained fungal mycelia. The average count of fungal mycelia per season was 23%, 28%, 8.2% and 9.9% in spring, summer, fall and winter, respectively. Regarding the spring samples, of the 21 samples analyzed, 14.3% were above the limit proposed by the legislation. 
As for the samples from fall and winter, all were in accordance with the legislation, and the average mycelial filament count did not exceed 20%, which can be explained by the low temperatures during this time of the year. The samples acquired in summer and spring showed a high percentage of fungal mycelium in the final product, related to the high temperatures in these seasons. Considering that the limit of 40% of positive fields is accepted by the Brazilian legislation (RDC nº 14/2014), 3 spring samples (14%) and 6 summer samples (19%) were over this limit and subject to legal penalties. According to the gathered data, 82% of manufacturers of this product manage to keep acceptable levels of fungal mycelia in their product. In conclusion, only 9.2% of the samples were above the limits established by Resolution RDC 14/2014, showing that the limit of 40% is feasible and can be met by the industries of this segment. The mycelial filament count by the Howard method is an important tool in microscopic analysis, since it measures the quality of the raw material used in the production of tomato products.
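The Howard count itself is simple arithmetic: the percentage of microscope fields containing mold filaments, compared against the 40% legal limit. A minimal sketch (the 25-field read below is an illustrative assumption; laboratories may read more fields per sample):

```python
def howard_positive_percentage(positive_fields, total_fields=25):
    """Percentage of microscope fields containing mold filaments
    (Howard mold count)."""
    return 100.0 * positive_fields / total_fields

# a sample with 9 positive fields out of 25 read
pct = howard_positive_percentage(9)
print(pct, pct <= 40)  # 36.0 True  (within the Brazilian 40% limit)
```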

Keywords: fungi, Howard method, tomato pulps

Procedia PDF Downloads 355
90 Virtual Platform for Joint Amplitude Measurement Based on MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to high-level competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, offering an efficient compromise between cost and technical considerations. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system presented a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1=2.12° and estimation error RMSE 2=2.28°), it can be observed that during arm motion in either sense the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion.
A damped response of the virtual platform can be seen in other images, where error analysis shows that at maximum amplitude an underestimation is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These characteristics, achieved by efficiently using state-of-the-art accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginalized populations where technologies and specialists are not available but which are reachable through basic communication networks.
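The RMSE figures quoted for the robotic arm trials are the standard root-mean-square error between estimated and reference joint angles. A minimal sketch (the angle values below are illustrative, not the study's data):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference joint angles (degrees)."""
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimated, reference)) / len(reference)
    )

# Example: a trace that tracks the reference well except near the turning point,
# mirroring the sense-inversion error described in the abstract.
reference = [0.0, 30.0, 60.0, 90.0, 60.0, 30.0]
estimated = [0.0, 30.5, 60.2, 85.0, 59.5, 30.1]
error = rmse(estimated, reference)
```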

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 232
89 Remote Radiation Mapping Based on UAV Formation

Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov

Abstract:

High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous multicopter-based UAV swarm in 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is well suited to small multicopter UAVs because of its compact size and ease of interfacing with the UAV’s onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight capability, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector that gives the swarm its source-seeking behavior. Also, a spinning formation is studied for both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
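The gradient-vector estimation at the core of both formations can be sketched as a least-squares fit of a locally linear intensity field to the readings at the formation vertices. The positions, count values, and function name below are hypothetical stand-ins for the real sensor data:

```python
import numpy as np

def estimate_gradient(positions, readings):
    """Least-squares fit of readings ≈ c0 + g·p over the formation.

    positions: (n, 3) array of UAV positions; readings: (n,) count rates.
    Returns the fitted gradient vector g, whose direction gives the bulk
    heading toward increasing intensity.
    """
    positions = np.asarray(positions, float)
    A = np.hstack([np.ones((len(positions), 1)), positions])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(readings, float), rcond=None)
    return coeffs[1:]

# Tetrahedron formation sampling a field c(p) = 5 + x + 2y (hypothetical units).
tetra = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
counts = [5.0, 6.0, 7.0, 5.0]
g = estimate_gradient(tetra, counts)
```

With four non-coplanar vertices the fit is exactly determined, which is one motivation for the tetrahedron over a planar formation.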

Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation

Procedia PDF Downloads 54
88 Negotiating Communication Options for Deaf-Disabled Children

Authors: Steven J. Singer, Julianna F. Kamenakis, Allison R. Shapiro, Kimberly M. Cacciato

Abstract:

Communication and language are topics frequently studied among deaf children. However, there is limited research that focuses specifically on the communication and language experiences of Deaf-Disabled children. In this ethnography, researchers investigated the language experiences of six sets of parents with Deaf-Disabled children who chose American Sign Language (ASL) as the preferred mode of communication for their child. Specifically, the researchers were interested in the factors that influenced the parents’ decisions regarding their child’s communication options, educational placements, and social experiences. Data collection in this research included 18 hours of semi-structured interviews, 20 hours of participant observations, over 150 pages of reflexive journals and field notes, and a 2-hour focus group. The team conducted constant comparison qualitative analysis using NVivo software and an inductive coding procedure. The four researchers each read the data several times until they were able to chunk it into broad categories about communication and social influences. The team compared the various categories they developed, selecting ones that were consistent among researchers and redefining categories that differed. Continuing to use open inductive coding, the research team refined the categories until they were able to develop distinct themes. Two team members developed each theme through a process of independent coding, comparison, discussion, and resolution. The research team developed three themes: 1) early medical needs provided time for the parents to explore various communication options for their Deaf-Disabled child, 2) without intervention from medical professionals or educators, ASL emerged as a prioritized mode of communication for the family, 3) atypical gender roles affected familial communication dynamics. 
While managing the significant health issues of their Deaf-Disabled child at birth, families and medical professionals were so fixated on tending to the medical needs of the child that the typical pressures of determining a mode of communication were deprioritized. This allowed the families to meticulously research various methods of communication, resulting in an informed, rational, and well-considered decision to use ASL as the primary mode of communication with their Deaf-Disabled child. It was evident that having a Deaf-Disabled child meant an increased amount of labor and responsibilities for parents. This led to a shift in the roles of the family members. During the child’s development, the mother transformed from fulfilling the stereotypical roles of nurturer and administrator to that of administrator and champion. The mother facilitated medical proceedings and educational arrangements while the father became the caretaker and nurturer of their Deaf-Disabled child in addition to the traditional role of earning the family’s primary income. Ultimately, this research led to a deeper understanding of the critical role that time plays in parents’ decision-making process regarding communication methods with their Deaf-Disabled child.

Keywords: American Sign Language, deaf-disabled, ethnography, sociolinguistics

Procedia PDF Downloads 92
87 Exploring the Impact of Eye Movement Desensitization and Reprocessing (EMDR) and Mindfulness for Processing Trauma and Facilitating Healing During Ayahuasca Ceremonies

Authors: J. Hash, J. Converse, L. Gibson

Abstract:

Plant medicines are of growing interest for addressing mental health concerns. Ayahuasca, a traditional plant-based medicine, has established itself as a powerful way of processing trauma and precipitating healing and mood stabilization. Eye Movement Desensitization and Reprocessing (EMDR) is another treatment modality that aids in the rapid processing and resolution of trauma. We investigated group EMDR therapy, G-TEP, as a preparatory practice before Ayahuasca ceremonies to determine whether the combination of these modalities supports participants in letting go of past experiences that negatively impact mental health, thereby accentuating the healing of the plant medicine. We surveyed 96 participants (51 experimental G-TEP, 45 control grounding prior to their ceremony; age M=38.6, SD=9.1; F=57, M=34; white=39, Hispanic/Latinx=23, multiracial=11, Asian/Pacific Islander=10, other=7) in a pre-post, mixed-methods design. Participants were surveyed for demographic characteristics, symptoms of PTSD and cPTSD (International Trauma Questionnaire, ITQ), depression (Beck Depression Inventory, BDI), and stress (Perceived Stress Scale, PSS) before the ceremony and at the end of the ceremony weekend. Open-ended questions also inquired about their expectations of the ceremony and, afterwards, its results. No baseline differences existed between the control and experimental participants. Overall, participants reported a decrease in meeting the threshold for PTSD symptoms (p<0.01); surprisingly, the control group reported significantly fewer thresholds met for symptoms of affective dysregulation, χ²(1)=6.776, p<.01, negative self-concept, χ²(1)=7.122, p<.01, and disturbance in relationships, χ²(1)=9.804, p<.01, on subscales of the ITQ compared to the experimental group. All participants also experienced a significant decrease in scores on the BDI, t(94)=8.995, p<.001, and PSS, t(91)=6.892, p<.001.
Similar to the patterns seen for PTSD symptoms, the control group reported significantly lower scores on the BDI, t(65.115)=-2.587, p<.01, and a trend toward lower PSS scores, t(90)=-1.775, p=.079 (significant with a one-sided test at p<.05), compared to the experimental group following the ceremony. Qualitative interviews among participants revealed a potential explanation for these relatively higher levels of depression and stress in the experimental group following the ceremony. Many participants reported needing more time to process their experience in order to understand the effects of the Ayahuasca medicine. Others reported a sense of hopefulness and an understanding of the sources of their trauma and of the steps necessary to heal moving forward. This suggests increased introspection and openness to processing trauma, making them more receptive to their emotions. The integration process of an Ayahuasca ceremony is a weeks- to months-long process that was not accessible at this stage of the research, yet it is integral to understanding the full effects of the Ayahuasca medicine after a ceremony closes. Our future research aims to assess participants weeks into their integration process to determine the effectiveness of EMDR and whether the higher levels of depression and stress reflect an initial reaction to greater awareness of trauma and receptivity to healing.
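The fractional degrees of freedom reported above, t(65.115), suggest Welch-type independent-samples comparisons for the BDI. A minimal sketch of the statistic itself, with made-up score lists rather than study data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (e.g., control vs. experimental)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical post-ceremony BDI scores; a negative t means the first group scored lower.
control = [8.0, 10.0, 7.0, 9.0, 11.0]
experimental = [12.0, 14.0, 11.0, 15.0, 13.0]
t_stat = welch_t(control, experimental)
```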

Keywords: ayahuasca, EMDR, PTSD, mental health

Procedia PDF Downloads 36
86 Effect of Ion Irradiation on the Microstructure and Properties of Chromium Coatings on Zircaloy-4 Substrate

Authors: Alexia Wu, Joel Ribis, Jean-Christophe Brachet, Emmanuel Clouet, Benoit Arnal, Elodie Rouesne, Stéphane Urvoy, Justine Roubaud, Yves Serruys, Frederic Lepretre

Abstract:

To enhance the safety of Light Water Reactors, accident tolerant fuel (ATF) cladding materials are under development. In the framework of the CEA-AREVA-EDF collaborative program on ATF cladding materials, CEA has engaged in specific studies on chromium-coated zirconium alloys. Especially in Loss-of-Coolant-Accident situations, chromium-coated claddings have shown some additional 'coping' time before full embrittlement of the oxidized cladding, when compared to uncoated references, both tested in a steam environment up to 1300°C. Nevertheless, the behavior of chromium coatings and the stability of the Zr-Cr interface under neutron irradiation remain unknown. Two main points are addressed: 1. Bulk Cr behavior under irradiation: due to its BCC crystallographic structure, Cr is prone to a ductile-to-brittle transition at quite high temperature. Irradiation could be responsible for a significant additional DBTT shift towards higher temperatures. 2. Zircaloy/Cr interface behavior under irradiation: preliminary TEM examinations of un-irradiated samples revealed a singular Zircaloy-4/Cr interface with nanometric intermetallic phase layers. Such particular interfaces raise the question of how they would behave under irradiation, since intermetallic zirconium phases are known to be more or less stable under irradiation. Another concern is a potential enhancement of chromium diffusion into the zirconium-alpha based substrate. The purpose of this study is then to determine the behavior of such coatings after ion irradiation, used as a surrogate for neutron irradiation. Ion irradiations were performed at the Jannus-Saclay facility (France): 20 MeV Kr⁸⁺ ions at 400°C with a flux of 2.8×10¹¹ ions·cm⁻²·s⁻¹ were used to irradiate chromium coatings 1-2 µm thick on Zircaloy-4 sheet substrates. At the interface, the calculated damage is close to 10 dpa (SRIM, Quick Calculation Damage mode). Thin foil samples were prepared by FIB for both as-received and irradiated coated samples.
Transmission Electron Microscopy (TEM) and in-situ tensile tests in a Scanning Electron Microscope are being used to characterize the un-irradiated and irradiated materials. High-resolution TEM highlights the great complexity of the interface before irradiation, since it is formed of an alternation of intermetallic phases, C14 and C15. The interfaces formed by these intermetallic phases with chromium and zirconium show semi-coherency. Chemical analysis performed before irradiation shows some iron enrichment at the interface. The bulk microstructure and properties of the chromium coating are also studied before and after irradiation. Ongoing in-situ tensile tests focus on the capacity of chromium coatings to sustain some plastic deformation when tested up to 350°C. The stability of the Cr/Zr interface is demonstrated after ion irradiation up to 10 dpa. This observation constitutes the first result after irradiation on these new coated cladding materials.

Keywords: accident tolerant fuel, HRTEM, interface, ion-irradiation

Procedia PDF Downloads 337
85 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing stage also takes 24 h), in order to determine bacterial susceptibility. Other approaches, such as genotyping methods, the E-test, and automated methods, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy has become a powerful technique, enabling the detection of structural changes associated with resistivity.
The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
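The classification step pairs measured spectra with a multivariate model. As a heavily simplified sketch, with synthetic spectra, a hypothetical "resistance band", and a nearest-centroid rule standing in for the study's more advanced multivariate analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spectra(n, resistant):
    """Synthetic stand-ins for FTIR absorbance spectra (200 channels of noise);
    'resistant' spectra carry a small extra band in a hypothetical window."""
    spectra = rng.normal(0.0, 0.05, size=(n, 200))
    if resistant:
        spectra[:, 80:90] += 0.5
    return spectra

def fit_centroids(sensitive, resistant):
    """Mean spectrum of each class serves as its centroid."""
    return sensitive.mean(axis=0), resistant.mean(axis=0)

def classify(centroids, spectrum):
    """Assign a spectrum to the class with the nearest centroid."""
    c_sens, c_res = centroids
    d_sens = np.linalg.norm(spectrum - c_sens)
    d_res = np.linalg.norm(spectrum - c_res)
    return "resistant" if d_res < d_sens else "sensitive"

centroids = fit_centroids(make_spectra(50, False), make_spectra(50, True))
```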

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 152
84 Leveraging Multimodal Neuroimaging Techniques to Address in vivo Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging, and echo-planar imaging sequences for the analysis of volumetric, tractography, and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA, increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC.
The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA, increased diffusivity measures in cortico-cerebellar WM tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity, and mostly decreased functional connectivity in RRMS patients, emphasize the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity to leverage multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify and predict patients’ disease course.

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 118
83 Single Cell RNA Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Single-cell genomic analytical systems have proved to be a platform for isolating bulk cells into selected single cells for genomic, proteomic, and related metabolomic studies. This is enabling systematic investigations of the level of heterogeneity in diverse and wide pools of cell populations. Single-cell technologies, embracing techniques such as high-parameter flow cytometry, single-cell sequencing, and high-resolution imaging, are playing vital roles in these investigations of messenger ribonucleic acid (mRNA) molecules and related gene expression in tracking the nature and course of disease conditions. This entails targeted molecular investigations on unit cells that help us understand cell behavior and expression, which can be examined for their implications on the health state of patients. One of the vital strengths of single-cell RNA sequencing (scRNA-seq) is its capacity to detect deranged or abnormal cell populations present within homogeneously perceived pooled cells, which would have evaded cursory screening of the pooled cell populations of biological samples obtained during diagnostic procedures. Although it analyzes one transcriptome at a time, scRNA-seq permits comparison of the transcriptomes of individual cells, which can be evaluated for gene expression patterns that reveal areas of heterogeneity with pharmaceutical drug discovery and clinical treatment applications. It is vital to work rigorously through the tools of investigation, from the wet lab to bioinformatics and computational analyses. In the precise steps for scRNA-seq, it is critical to perform thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the picking of quality mRNA molecules for subsequent amplification and sequencing (e.g., using a polymerase chain reaction machine).
Interestingly, scRNA-seq can be deployed to analyze various types of biological samples such as embryos, nervous systems, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them into cohorts that enable re-modeling of treatment regimens based on the stratified presentations. In immunotherapy, it can furnish specialist clinician-immunologists with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attribute of scRNA-seq can help reduce the cost of treatment for patients, thus attracting more patients who would otherwise have been discouraged from seeking quality clinical consultation due to perceived high cost. This is a positive paradigm shift for priming patients’ attitudes towards seeking treatment.
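A core computational step implied above, normalizing per-cell counts and ranking genes by dispersion to expose heterogeneity across single cells, can be sketched as follows (the toy count matrix and function names are illustrative, not from any real dataset or the study):

```python
import numpy as np

def normalize_counts(counts, target=1e4):
    """Scale each cell's counts to a common library size, then log-transform."""
    counts = np.asarray(counts, float)
    scaled = counts / counts.sum(axis=1, keepdims=True) * target
    return np.log1p(scaled)

def top_variable_genes(log_norm, k=1):
    """Rank genes by variance-to-mean dispersion across cells; high-dispersion
    genes mark heterogeneity that pooled (bulk) analysis would average away."""
    dispersion = log_norm.var(axis=0) / (log_norm.mean(axis=0) + 1e-9)
    return np.argsort(dispersion)[::-1][:k]

# Toy count matrix: 4 cells x 3 genes; gene 0 differs strongly between cells.
counts = [[100, 10, 10], [1, 10, 10], [100, 10, 10], [1, 10, 10]]
hvg = top_variable_genes(normalize_counts(counts), k=1)
```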

Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq

Procedia PDF Downloads 145
82 Isolation and Structural Elucidation of 20-Hydroxyecdysone from Vitex doniana Sweet Stem Bark

Authors: Mustapha A. Tijjani, Fanna I. Abdulrahman, Irfan Z. Khan, Umar K. Sandabe, Cong Li

Abstract:

The air-dried V. doniana sample was, after collection and identification, extracted with ethanol and further partitioned with chloroform, ethyl acetate, and n-butanol. The ethanolic extract (11.9 g) was fractionated on a silica gel accelerated column chromatography using solvents such as n-hexane, ethyl acetate, and methanol. Eluent fractions (150 ml aliquots) were collected and monitored by thin layer chromatography, and fractions with similar Rf values from the same solvent system were pooled together. Phytochemical tests of all fractions were performed using standard procedures. Complete elution yielded 48 fractions (150 ml/fraction), which were pooled into 24 fractions based on Rf values; these were further recombined into 12 fractions on the basis of Rf values and coded Vd1 to Vd12. Vd8 was further eluted with ethyl acetate and methanol and gave fourteen sub-fractions, Vd8-a to Vd8-m. Fraction Vd8-a (56 mg) gave a white crystalline compound coded V1. It was further checked on TLC, observed under an ultraviolet lamp, and found to give a single spot with an Rf value of 0.433. The melting point, determined using a Gallenkamp capillary melting point apparatus, was 241-243°C (uncorrected). Characterization of the isolated compound V1 was done using FT-infrared spectroscopy, ¹H NMR, ¹³C NMR (1D and 2D), and HRESI-MS. The IR spectrum of compound V1 shows prominent peaks corresponding to the O-H stretch (3365 cm⁻¹) and C=O (1652 cm⁻¹), among others. This spectrum suggests that the functional moieties in compound V1 include carbonyl and hydroxyl groups. The ¹H NMR (400 MHz) spectrum of compound V1 in DMSO-d6 displayed five singlet signals at δ 0.72 (3H, s, H-18), 0.79 (3H, s, H-19), 1.03 (3H, s, H-21), 1.04 (3H, s, H-26), and 1.06 (3H, s, H-27), each integrating for three protons, indicating five methyl groups present in the compound.
It further showed a broad singlet at δ 5.58, integrating for 1H, due to an olefinic H-atom adjacent to the carbonyl carbon atom. Three signals at δ 3.10 (d, J = 9.0 Hz, H-22), 3.59 (m, 1H, H-2a), and 3.72 (m, 1H, H-3e), each integrating for one proton, are due to oxymethine protons, indicating that three oxymethine H-atoms are present in the compound. All these signals are characteristic of ecdysteroid skeletons. The ¹³C NMR spectrum showed the presence of 27 carbon atoms, suggesting a steroid skeleton. The DEPT-135 experiment showed the presence of five CH3, eight CH2, and seven CH groups, and seven quaternary C-atoms. The molecular formula was established as C27H44O7 by high-resolution electrospray ionization mass spectrometry (HRESI-MS) in positive ion mode, m/z 481.3179. The mass spectrum shows peaks at 463, 445, and 427, corresponding to successive losses of one, two, and three water molecules, characteristic of the ecdysterone skeleton reported in the literature. Based on the spectral analysis (¹H NMR, ¹³C NMR, DEPT, HMQC, IR, HRESI-MS), compound V1 is thus concluded to have an ecdysteroid skeleton and conforms with 2β,3β,14α,20R,22R,25-hexahydroxy-5β-cholest-7-ene-6-one, i.e., 2,3,14,20,22,25-hexahydroxycholest-7-ene-6-one, commonly known as 20-hydroxyecdysone.
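The HRESI-MS assignment can be checked arithmetically: the monoisotopic mass of C27H44O7 plus a proton should sit near the observed m/z 481.3179, and successive water losses should give the 463/445/427 series. A sketch using standard monoisotopic atomic masses (the masses are textbook values, not data from the study):

```python
# Standard monoisotopic atomic masses (u); the observed m/z is from the abstract.
C, H, O = 12.0, 1.007825, 15.994915
PROTON, WATER = 1.007276, 2 * H + O

M = 27 * C + 44 * H + 7 * O            # monoisotopic mass of C27H44O7, ~480.3087
mh = M + PROTON                         # [M+H]+ ~481.3160, vs. observed 481.3179
water_losses = [round(mh - k * WATER) for k in (1, 2, 3)]  # nominal 463, 445, 427
```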

Keywords: vitex, phytochemical, purification, isolation, chromatography, spectroscopy

Procedia PDF Downloads 326
81 Flux-Gate vs. Anisotropic Magnetoresistance Magnetic Sensor Characteristics in Closed-Loop Operation

Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis

Abstract:

The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magnetoresistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed, and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise, and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. The system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop architecture Flux-Gate sensor (calibrated and certified) was used for comparison purposes. Three experimental setups were constructed for this work, used for DC magnetic field measurements, AC magnetic field measurements, and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply provided the operating current to the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted in the same cubic Helmholtz coil setup in order to examine the effective bandwidth not only of the proposed system but also of the Flux-Gate sensor.
A voltage-controlled current source driven by a function generator has been utilized for the Helmholtz coil excitation, and the result was observed on an oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed inside alone, and its response was captured on the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the Flux-Gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the Flux-Gate sensor. All relevant numerical data are presented in the paper.
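Linearity figures like those quoted above can be reproduced from raw sweep data with a short script. The sketch below is illustrative only: the response curve is an invented quadratic, not the measured AMR data, and it simply fits a best-fit line to a field sweep and reports the maximum deviation as a percentage of the full-scale output span.

```python
import numpy as np

def max_linearity_deviation(field, output):
    """Maximum deviation (%) of a sensor response from its best-fit line,
    expressed relative to the full-scale output span."""
    slope, intercept = np.polyfit(field, output, 1)   # ideal linear response
    residuals = output - (slope * field + intercept)
    return 100.0 * np.max(np.abs(residuals)) / (output.max() - output.min())

# Illustrative (made-up) response of a sensor to a swept DC field
field = np.linspace(-100, 100, 21)           # applied field, arbitrary units
output = 0.05 * field + 0.0004 * field**2    # small quadratic non-linearity, V
print(round(max_linearity_deviation(field, output), 2))
```

The same routine applied to the recorded closed-loop and open-loop sweeps would yield the 0.36% and 2% figures reported above.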

Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement

Procedia PDF Downloads 401
80 Impact of Insect-Feeding and Fire-Heating Wounding on Wood Properties of Lodgepole Pine

Authors: Estelle Arbellay, Lori D. Daniels, Shawn D. Mansfield, Alice S. Chang

Abstract:

Mountain pine beetle (MPB) outbreaks are currently devastating lodgepole pine forests in western North America, which are also widely disturbed by frequent wildfires. Both MPB and fire can leave scars on lodgepole pine trees, thereby diminishing their commercial value and possibly compromising their utilization in solid wood products. In order to fully exploit the affected resource, it is crucial to understand how wounding from these two disturbance agents impacts wood properties. Moreover, previous research on lodgepole pine has focused solely on sound wood and stained wood resulting from the MPB-transmitted blue-stain fungi. By means of a quantitative multi-proxy approach, we tested the hypotheses that (i) wounding (of either MPB or fire origin) caused significant changes in wood properties of lodgepole pine and that (ii) MPB-induced wound effects could differ from those induced by fire in type and magnitude. Pith-to-bark strips were extracted from 30 MPB scars and 30 fire scars. Strips were cut immediately adjacent to the wound margin and encompassed 12 rings from normal wood formed prior to wounding and 12 rings from wound wood formed after wounding. Wood properties evaluated within this 24-year window included ring width, relative wood density, cellulose crystallinity, fibre dimensions, and carbon and nitrogen concentrations. Methods used to measure these proxies at a (sub-)annual resolution included X-ray densitometry, X-ray diffraction, fibre quality analysis, and elemental analysis. Results showed a substantial growth release in wound wood compared to normal wood, as both earlywood and latewood width increased over a decade following wounding. Wound wood was also shown to have a significantly different latewood density than normal wood 4 years after wounding. Latewood density decreased in MPB scars, while the opposite was true in fire scars. By contrast, earlywood density presented only minor variations following wounding.
Cellulose crystallinity decreased in wound wood compared to normal wood, being especially diminished in MPB scars the first year after wounding. Fibre dimensions also decreased following wounding. However, carbon and nitrogen concentrations did not substantially differ between wound wood and normal wood. Nevertheless, insect-feeding and fire-heating wounding were shown to significantly alter most wood properties of lodgepole pine, as demonstrated by the existence of several morphological anomalies in wound wood. MPB and fire generally elicited similar anomalies, with the major exception of latewood density. In addition to providing quantitative criteria for differentiating between biotic (MPB) and abiotic (fire) disturbances, this study provides the wood industry with fundamental information on the physiological response of lodgepole pine to wounding in order to evaluate the utilization of scarred trees in solid wood products.

Keywords: elemental analysis, fibre quality analysis, lodgepole pine, wood properties, wounding, X-ray densitometry, X-ray diffraction

Procedia PDF Downloads 297
79 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, which separates the scintillator film so that it can be transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after traveling over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 x10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
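The quoted collection efficiency can be sanity-checked from the other numbers in the abstract. The sketch below assumes one collected electron per detected photon (unity PD quantum efficiency), which is a simplification:

```python
# Back-of-the-envelope check of the photon collection efficiency using
# the light yield and alpha energy quoted above.
light_yield = 240_000          # photons per MeV (quoted for the QD medium)
alpha_energy_mev = 5.5         # alpha particle energy, MeV
collected_electrons = 8.4e4    # average collected charge, electrons

emitted_photons = light_yield * alpha_energy_mev   # ~1.32e6 photons per alpha
efficiency = collected_electrons / emitted_photons
print(f"{efficiency:.1%}")     # on the order of the ~7% quoted above
```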

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 235
78 The Artificial Intelligence-Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with a lot of social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social issues. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy the benefits that a society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why a lot of social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and necessary insights. By leveraging AI’s data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people’s lives. We can do the right work for the right people and at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world’s poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists, Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean, used AI to spot global poverty zones – identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi.
In an article published by Stanford News, “Stanford researchers use dark of night and machine learning,” Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region. “The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime].” This is one example of how AI can be used by social work professionals to reach regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to Nature’s study, “The role of artificial intelligence in achieving the Sustainable Development Goals,” published in 2020. The report also notes that AI can help achieve 79 percent of the United Nations’ (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let’s put the spotlight on how AI and social work can complement each other.
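The two-step idea described above – learning to predict nighttime-light intensity from daytime imagery, then treating low predicted light as a poverty proxy – can be sketched in a few lines. Everything below uses synthetic stand-in data, not the Stanford team's actual model or features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-village feature vectors extracted from daytime
# imagery, and the nighttime-light intensity observed for the same villages.
daytime_features = rng.normal(size=(500, 16))
true_weights = rng.normal(size=16)
nightlights = daytime_features @ true_weights + rng.normal(scale=0.1, size=500)

# Step 1: learn to predict nighttime lights from daytime features
# (a least-squares stand-in for the CNN used in the actual study).
w, *_ = np.linalg.lstsq(daytime_features, nightlights, rcond=None)
predicted_lights = daytime_features @ w

# Step 2: flag the dimmest quartile of villages as candidate poverty
# zones for follow-up survey work.
threshold = np.quantile(predicted_lights, 0.25)
flagged = predicted_lights < threshold
print(flagged.sum())
```

The key design point is that nightlights act as a cheap, globally available intermediate label, so no expensive household survey is needed to train the first step.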

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 77
77 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation near the upper limit of its habitat is a sensitive indicator of biota reaction to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from measurements on the ground level and ultra-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow obtaining probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed the distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was equal to 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed in the course of recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodlands, and sparse stand, and second, expansion into mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch specimen biometric parameters (including height, diameter at root collar and at 1.3 meters, and maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model including two layers with three and two neurons, respectively.
The model allows quantitative assessment of a specimen's age based on height and maximum crown projection values. Tree height and crown diameters can be quantitatively assessed using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundaries in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
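A Weibull-based probabilistic assignment of the kind described above can be sketched as a small Bayes-style classifier over the three age intervals. The shape/scale parameters and class priors below are hypothetical placeholders, not the values fitted in the study:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull probability density at x (> 0)."""
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-(x / scale) ** shape)

# Per-class Weibull parameters for maximum crown projection (m) and class
# priors -- illustrative placeholders only.
AGE_CLASSES = {               # (shape k, scale lambda, prior)
    "1-10 yr":  (2.0, 0.6, 0.19),
    "11-40 yr": (2.5, 1.5, 0.58),
    ">40 yr":   (3.0, 2.8, 0.23),
}

def age_probabilities(crown_diameter_m):
    """Posterior probability of each age interval given a crown diameter."""
    scores = {name: prior * weibull_pdf(crown_diameter_m, k, lam)
              for name, (k, lam, prior) in AGE_CLASSES.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

probs = age_probabilities(1.2)   # a tree with a 1.2 m crown projection
print(max(probs, key=probs.get))
```

With real fitted parameters, the same structure yields the per-tree interval probabilities used to build the age-generation maps.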

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 33
76 Mesalazine-Induced Myopericarditis in a Professional Athlete

Authors: Tristan R. Fraser, Christopher D. Steadman, Christopher J. Boos

Abstract:

Myopericarditis is an inflammation syndrome characterised by clinical diagnostic criteria for pericarditis, such as chest pain, combined with evidence of myocardial involvement, such as elevation of biomarkers of myocardial damage, e.g., troponins. It can rarely be a complication of therapeutics used for dysregulated immune-mediated diseases such as inflammatory bowel disease (IBD), for example, mesalazine. The infrequency of mesalazine-induced myopericarditis adds to the challenge of its recognition. Rapid diagnosis and the early introduction of treatment are crucial. This case report follows a 24-year-old professional footballer with a past medical history of ulcerative colitis, recently started on mesalazine for disease control. Three weeks after mesalazine was initiated, he was admitted with fever, shortness of breath, and chest pain worse whilst supine and on deep inspiration, as well as an elevated venous blood cardiac troponin T level (cTnT, 288 ng/L; normal: <13 ng/L). Myocarditis was confirmed on initial inpatient cardiac MRI, which revealed florid myocarditis with preserved left ventricular systolic function and an ejection fraction of 67%. This was a longitudinal case study following the progress of a single individual with myopericarditis across four acute hospital admissions over nine weeks, with admissions ranging from two to five days. Parameters examined included clinical signs and symptoms, serum troponin, transthoracic echocardiogram, and cardiac MRI. Serial measurements of cardiac function, including cardiac MRI and transthoracic echocardiogram, showed progressive deterioration of cardiac function whilst mesalazine was continued. Prior to cessation of mesalazine, transthoracic echocardiography revealed a small global pericardial effusion of <1 cm and worsening left ventricular systolic function with an ejection fraction of 45%.
After recognition of mesalazine as a potential cause and consequent cessation of the drug, symptoms resolved, with cardiac MRI performed as an outpatient showing resolution of myocardial oedema. The patient plans to make a return to competitive sport. Patients suffering from myopericarditis are advised to refrain from competitive sport for at least six months in order to reduce the risk of cardiac remodelling and sudden cardiac death. Additional considerations must be taken for individuals for whom competitive sport is an essential component of their livelihood, such as professional athletes. Myopericarditis is an uncommon but potentially serious medical condition with a wide variety of aetiologies, including viral, autoimmune, and drug-related causes. Management is mainly supportive and relies on prompt recognition and removal of the aetiological process. Mesalazine-induced myopericarditis is a rare condition; as such, increasing awareness of mesalazine as a precipitant of myopericarditis is vital for optimising the management of these patients.

Keywords: myopericarditis, mesalazine, inflammatory bowel disease, professional athlete

Procedia PDF Downloads 111
75 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides adequate accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is offered by the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data - raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) locally increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the obtained results was carried out with Geographic Information System tools. The result extends the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, as well as on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. At a given location, the greater the depth, the smaller the trend of sea level change. Studies show that combining data sets from different sources and with different accuracies can affect the quality of sea surface and seafloor topography models.
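The gridding step discussed above – interpolating sparse ship soundings onto a regular raster – can be sketched as follows. Coordinates and depths here are synthetic; a real workflow would read multibeam data (e.g. MB-System exports) and validate the result against the GEBCO raster:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Synthetic stand-ins for sparse survey-track soundings
lon = rng.uniform(-70, -60, 300)
lat = rng.uniform(35, 45, 300)
depth = -3000 + 500 * np.sin(lon / 3) * np.cos(lat / 3)   # synthetic seafloor, m

# Regular target grid for the raster bathymetric model
grid_lon, grid_lat = np.meshgrid(np.linspace(-70, -60, 50),
                                 np.linspace(35, 45, 50))

# Linear interpolation; cells outside the data's convex hull stay NaN,
# marking the gaps between survey tracks that remain unexplored.
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
print(grid_depth.shape)
```

Swapping `method` between `"nearest"`, `"linear"` and `"cubic"` is one simple way to compare interpolation algorithms, as the study's evaluation step requires.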

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
74 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions

Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding

Abstract:

By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the ‘caging’ technique provides a number of advantages as it can be intracellular, rapid and controlled in a quantitative manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed by using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, the combination of rapid freeze-quench (RFQ) followed by EPR analysis provides the ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The new mixed solution is then sprayed into a cryogenic liquid or onto a cold surface, and the frozen sample is then collected and packed into an EPR tube for analysis.
The earliest RFQ instrument consisted of a hydraulic ram unit as a drive unit with direct spraying of the sample into a cryogenic liquid (nitrogen, isopentane or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers in order to reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface that is frozen via thermal conductivity with a cryogenic liquid. In our work, we are developing a novel RFQ instrument which combines the freeze-quench technology with flashing capabilities to enable the studies of both thermally-activated and light-activated biological reactions. This instrument also uses a new rotating plate design based on magnetic couplings and removes the need for mechanical motorised rotation, which can otherwise be problematic at cryogenic temperatures.

Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals

Procedia PDF Downloads 187
73 Alkaloid Levels in Experimental Lines of Ryegrass in Southern Chile

Authors: Leonardo Parra, Manuel Chacón-Fuentes, Andrés Quiroz

Abstract:

One of the most important factors in beef and dairy production, in Chile as well as worldwide, is the correct choice of cultivars or mixtures of forage grasses and legumes to ensure high yields and quality of grassland. However, a great problem is the persistence of the grasses as a result of the action of different hypogeous as well as epigean pests. The complex of insect pests associated with grassland includes white grubs (Hylamorpha elegans, Phytoloema herrmanni), blackworm (Dalaca pallens) and Argentine stem weevil (Listronotus bonariensis). In Chile, the principal strategy utilized for controlling these pests is chemical control through the use of synthetic insecticides; however, the underground feeding habits of larvae and the flight activity of adults make this method uneconomic. Furthermore, due to problems including environmental degradation, development of resistance and chemical residues, there is worldwide interest in the use of alternative, environmentally friendly pest control methods. In this sense, in recent years there has been increasing interest in determining the role of endophytic fungi in controlling epigean and hypogeous pests. Endophytes of ryegrass (Lolium perenne) establish a biotrophic relationship with the host, defined as mutualistic symbiosis. The plant-fungi association produces a “cocktail of alkaloids”, where peramine is the main toxic substance present in endophyte-infected ryegrass and is responsible for the reduction of damage by L. bonariensis. In the last decade, few studies have been developed on the effectiveness of new endophyte-carrying ryegrass cultivars in controlling insect pests. Therefore, the aim of this research is to evaluate the content of alkaloids, such as peramine and lolitrem B, present in new experimental lines of ryegrass that are feasible for use in grasslands of southern Chile.
For this, during 2016, ryegrass plants of six experimental lines and two commercial cultivars sown at the Instituto de Investigaciones Agropecuarias Carillanca (Vilcún, Chile) were collected and subjected to a process of chemical extraction to identify and quantify the presence of peramine and lolitrem B by high-performance liquid chromatography (HPLC). The results indicated that the experimental lines EL-1 and EL-3 had higher contents of peramine (0.25 and 0.43 ppm, respectively) than of lolitrem B (0.061 and 0.19 ppm, respectively). Furthermore, higher contents of lolitrem B were detected in EL-4 and the commercial cultivar Alto (positive control), with 0.08 and 0.17 ppm, respectively. Peramine and lolitrem B were not detected in the cultivar Jumbo (negative control). These results suggest that EL-3 would have potential as a future cultivar because it has a high content of peramine, the alkaloid responsible for controlling insect pests. However, its actual role against the complex of insects attacking ryegrass grasslands should still be evaluated. The information obtained in this research could be used to improve control strategies against hypogeous and epigean pests of grassland in southern Chile and also to reduce the use of synthetic pesticides.
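The HPLC quantification step follows the standard external-calibration pattern: build a linear calibration from alkaloid standards, then convert a sample's peak area to a concentration. The sketch below uses invented peak areas and standard concentrations, not the study's chromatographic data:

```python
import numpy as np

# Hypothetical calibration standards for peramine (concentration vs.
# detector peak area) -- illustrative values only.
std_conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # ppm
std_area = np.array([12.0, 24.5, 49.0, 98.5, 196.0])  # peak area, a.u.

# Linear calibration: area = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)

# Quantify an unknown sample from its measured peak area
sample_area = 105.0
sample_conc = (sample_area - intercept) / slope
print(round(sample_conc, 2))   # concentration in ppm
```

The lolitrem B channel would use its own standards and calibration line, since detector response differs between alkaloids.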

Keywords: HPLC, Lolitrem B, peramine, pest

Procedia PDF Downloads 215
72 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation

Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang

Abstract:

Wafer probing tests play an important role in semiconductor manufacturing procedures in accordance with the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during wafer probing is regarded as an essential issue in identifying the known good die. The probe card can be integrated with multiple probe needles, which are classified as vertical, cantilever and micro-electro-mechanical systems type probes. Among all potential probe types, the vertical probe has several advantages, including maintainability, high probe density and feasibility for high-speed wafer testing. In the present study, the mechanical response of the wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated by experiment and a transient dynamic simulation approach. Because the deformation mechanism of the vertical cobra probe is determined by both bending and buckling mechanisms, the stable correlation between contact forces and overdrive (OD) length must be carefully verified. Moreover, a decent OD length with corresponding contact force contributes to piercing the native oxide layer of the Al pad while preventing probing-test-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and simulation work. In the wafer probing test configuration, the contact phenomenon between the probe needle and the tested object introduces large deformation and twisting of the mesh gridding, causing subsequent numerical divergence issues. For this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation work to conquer the aforementioned issue.
The analytic results revealed a slight difference between simulation and measurement when the OD is set to 40 μm, while the simulated scratch depths of the Al pad are almost identical to the measured ones under higher OD lengths up to 70 μm. This phenomenon can be attributed to the unstable contact of the probe at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable once the scratch depth exceeds 30% of the pad thickness. The splash of the Al pad is observed by the AFM, and the splashed Al debris accumulates on a specific side; this phenomenon is successfully simulated in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, and the corresponding scratch depths on the Al pad are 31.4% and 47.1% of the Al pad thickness, respectively. The investigation approach demonstrated in this study contributes to analyzing the mechanical response of the wafer probing test configuration under large strain conditions and to assessing the geometric designs and material selections of probe needles to meet the requirements of high-resolution and high-speed wafer-level probing tests for thinned wafer applications.
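The scratch-depth percentages quoted above can be converted back to absolute depths against the 1.4 μm pad thickness with a trivial check:

```python
# Consistency check of the scratch-depth fractions quoted above against
# the 1.4 um Al pad thickness used in the study.
PAD_THICKNESS_UM = 1.4

def scratch_depth_um(fraction_of_pad):
    """Convert a scratch depth given as a fraction of pad thickness to um."""
    return fraction_of_pad * PAD_THICKNESS_UM

for od_um, frac in [(45, 0.314), (70, 0.471)]:
    print(f"OD {od_um} um -> scratch depth ~{scratch_depth_um(frac):.2f} um")
```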

Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation

Procedia PDF Downloads 27
71 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation

Authors: Robert Rorie, Lisa Fern

Abstract:

The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear, and, as a result, successfully reduces the frequency of collision hazards. 
The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots’ ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.

Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems

Procedia PDF Downloads 244