Search results for: anaplastic large cell lymphoma
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10370

1250 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma

Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston

Abstract:

Introduction: We have observed that many patients with Primary Ciliary Dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal: antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In PCD airways, however, antigens such as Dermatophagoides are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase (SOD) activity (manuscript under revision). H₂O₂, present in high concentrations in the PCD airway, injures the airway. NO is oxidized rather than being exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to asthma. Methods: We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is no ICD-10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate analysis of this code, we queried patients who had ICD-10 codes for both bronchiectasis and situs inversus totalis in INPCR. We also studied a validation cohort using the IBM Explorys® database (over 80 million individuals). Analyses were adjusted for age, sex, and race using a 1 PCD : 3 controls matching method in INPCR and multivariable logistic regression in the IBM Explorys® database. Results: The prevalence of asthma ICD-10 codes in subjects with code Q34.8 was 67% vs. 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys®, the OR [95% CI] for having asthma if a patient also had ICD-10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09].
For situs inversus alone, the OR [95% CI] was 4.42 [4.14; 4.71]; for bronchiectasis alone, the OR [95% CI] was 10.68 [10.56; 10.79]. For both bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing to asthma in addition to oxidative and nitrosative stress and airway injury. Here, we show that, by several different population-based metrics and using two large databases, patients with PCD appear to have a three- to 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
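
Odds ratios of the kind reported above can be computed from a 2×2 contingency table with the standard Wald method. A minimal sketch in Python; the counts below are hypothetical, not the study's EMR data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(670, 330, 190, 810)
print(f"OR = {or_:.2f} [{lo:.2f}; {hi:.2f}]")
```

The narrow confidence intervals reported in the abstract reflect the very large denominators available in EMR databases.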

Keywords: antigen, PCD, asthma, nitric oxide

Procedia PDF Downloads 102
1249 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production reduces CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose significant problems in meeting load demand safely and cost-effectively over time. Significant benefits in terms of “grid system applications”, “end-use applications” and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small- to medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for preliminary system sizing and off-design modeling has been developed. Since the electric power absorbed during the charging phase has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operation, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid.
The developed tool provides useful information for appropriately sizing the compression system and managing it in the most effective way. Various cases characterized by different system requirements are analysed, and the results are presented and discussed in detail.
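
As an illustration of the preliminary sizing step, the shaft power of an intercooled multi-stage compressor can be estimated assuming equal stage pressure ratios and intercooling back to inlet temperature. This is a simplified textbook sketch, not the paper's methodology, and all numbers are illustrative:

```python
import math

def compression_power(m_dot, p_in, p_out, n_stages, T_in=293.15,
                      k=1.4, R=287.0, eta_is=0.8):
    """Shaft power [W] of an n-stage intercooled air compressor,
    assuming equal stage pressure ratios and intercooling back to T_in.
    m_dot [kg/s], pressures [Pa], eta_is = isentropic efficiency."""
    beta_stage = (p_out / p_in) ** (1.0 / n_stages)
    cp = k * R / (k - 1.0)  # specific heat at constant pressure
    # Isentropic temperature rise per stage, corrected by efficiency
    dT = T_in * (beta_stage ** ((k - 1.0) / k) - 1.0) / eta_is
    return n_stages * m_dot * cp * dT

# e.g. 0.5 kg/s from 1 bar to 80 bar in 4 stages
P = compression_power(0.5, 1e5, 80e5, 4)
print(f"{P / 1e3:.0f} kW")
```

Adding stages with intercooling lowers the total work for the same overall pressure ratio, which is one reason multi-stage reciprocating machines suit the variable-pressure charging duty described above.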

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 226
1248 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images to generate a more accurate and trustworthy surface model of Mars. The MOLA data were interpolated using the kriging technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters through least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation models provided two- to three-meter accuracy. The best results were obtained using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the accuracy required by the ASPRS large-scale mapping standards. However, a well-distributed set of GCPs is key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
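
The DLT model mentioned above maps ground coordinates (X, Y, Z) to image coordinates (u, v) through eleven parameters, which can be estimated from six or more GCPs by linear least squares. A minimal sketch (not the authors' implementation; synthetic data only):

```python
import numpy as np

def fit_dlt(xyz, uv):
    """Estimate the 11 DLT parameters L1..L11 mapping 3D ground points
    to 2D image coordinates by linear least squares:
      u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
      v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)"""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L

def apply_dlt(L, xyz):
    """Project one 3D point through the fitted DLT parameters."""
    X, Y, Z = xyz
    w = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / w,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / w)
```

With exact data the fit reproduces the projection; with real GCPs the residuals at independent check points give the RMSE figures reported above.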

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 56
1247 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Case Study of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study assesses the classification performance of two satellite datasets and evaluates the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savanna, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods: it eliminates the need to download individual satellite images and provides access to an extensive archive of remote sensing data, enabling efficient model training. The study achieved a commendable overall accuracy (OA) of 84% to 85% even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features such as slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented here not only enables the rapid creation of precise land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
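
The supervised RF classification step can be reproduced offline. The sketch below uses scikit-learn on synthetic "spectral band" samples in place of the GEE training data, so the OA and Kappa values it prints are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
classes = ["forest", "savanna", "cropland", "settlement", "water"]
# One synthetic 6-band spectral signature per class, plus noise
centers = rng.uniform(0.0, 0.5, size=(len(classes), 6))
X = np.vstack([c + rng.normal(0, 0.02, size=(200, 6)) for c in centers])
y = np.repeat(np.arange(len(classes)), 200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"OA = {accuracy_score(y_te, pred):.2f}, "
      f"Kappa = {cohen_kappa_score(y_te, pred):.2f}")
```

In the actual GEE workflow the feature matrix would hold per-pixel band reflectances (plus spectral indices and terrain metrics), sampled at labelled reference points.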

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 62
1246 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are matched with a bank’s loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that different types of data tend to complement each other for better performance.
Traditional data types that banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, like a deterioration in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment, because of their near-universal data coverage and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.
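
A comparative benchmark of machine learning methods for default prediction can be set up roughly as follows. The features here are synthetic stand-ins for digital-footprint variables (generated with an imbalanced default rate), so the AUC values are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for footprint features; ~10% defaults (class 1)
X, y = make_classification(n_samples=2000, n_features=15,
                           n_informative=6, weights=[0.9, 0.1],
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    # ROC AUC is preferred over accuracy on imbalanced default data
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```

On real footprint data one would additionally engineer features from raw event logs (shopping frequency, session times, profile attributes) before fitting.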

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 158
1245 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation

Authors: Yang Cao

Abstract:

Analyzing the oral production of Chinese-speaking learners of English as a second language (L2), we find a large variety of verb inflections: why does it seem so hard for these learners to use consistently correct past morphology in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this non-target-like performance to the absence of a [±past] feature in their L1, Chinese, arguing that for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) holds that all features are in fact acquirable for late L2 learners, but that mapping difficulties from features to forms make it hard for them to realize consistent past morphology on the surface. However, most studies are limited to verb morphology in finite clauses, and few have attempted to examine these learners’ performance in non-finite clauses. Additionally, it has been suggested that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading (SPR) task, the current study analyzes the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to the misuse of tense morphology in both finite and non-finite clauses and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian because of its systematic morphology in both present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has selected both the [±finite] and [±past] features, will also be involved. By comparing and contrasting the performance of the three language groups, the study further examines and discusses the two theories, FFFH and MSIH.
The preliminary hypotheses are: (a) native Russian speakers are expected to spend longer reading verb forms that violate the grammar; (b) Chinese participants are expected to be sensitive, at least, to the misuse of inflected verbs in non-finite clauses, although no sensitivity to the misuse of infinitives in finite clauses may be found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners are able to tell the finite/non-finite distinction; and (c) having selected [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.
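
The interaction predicted in hypothesis (b) can be tested on reading times roughly as follows; the RT values below are simulated for illustration (full SPR analyses would typically use mixed-effects models rather than this simple contrast):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60  # hypothetical number of participants
# Simulated mean RTs (ms) at the critical region, one per participant
rt = {
    ("finite", "gram"): rng.normal(420, 40, n),
    ("finite", "ungram"): rng.normal(425, 40, n),      # no sensitivity
    ("nonfinite", "gram"): rng.normal(420, 40, n),
    ("nonfinite", "ungram"): rng.normal(480, 40, n),   # slowdown
}
# Grammaticality effect within each clause type
eff_fin = rt[("finite", "ungram")] - rt[("finite", "gram")]
eff_non = rt[("nonfinite", "ungram")] - rt[("nonfinite", "gram")]
# A reliable difference between the two effects = the predicted
# finiteness x grammaticality interaction
t, p = stats.ttest_ind(eff_non, eff_fin)
print(f"interaction: t = {t:.2f}, p = {p:.4f}")
```

A larger grammaticality slowdown in non-finite than in finite clauses would pattern with hypothesis (b).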

Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, Second Language Acquisition, self-paced reading task, verb inflections

Procedia PDF Downloads 106
1244 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and the model simulations are validated using Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The hydrodynamic model can also evaluate the effect of the wind stress exerted on the lake surface, which impacts the lake water level.
The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
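
The linear outflow control law can be illustrated with a toy daily water-balance integration for the mean lake level; all coefficients below are illustrative, not the calibrated model values:

```python
A = 68_800e6   # lake surface area [m^2], from the abstract
dt = 86_400.0  # time step: one day [s]

def step_level(h, rain, evap, q_in, k=1.0e4, h_ref=0.0):
    """Advance mean water level h [m] by one day.
    rain/evap in m/day over the lake, q_in river inflow [m^3/s];
    outflow follows a linear control law q_out = k * (h - h_ref)."""
    q_out = max(k * (h - h_ref), 0.0)
    return h + rain - evap + (q_in - q_out) * dt / A

h = 1.0
for day in range(365):
    # rain and evaporation nearly balance, as noted in the abstract
    h = step_level(h, rain=0.004, evap=0.004, q_in=1200.0)
print(f"level after one year: {h:.3f} m")
```

With rain and evaporation balanced, the level relaxes exponentially toward the equilibrium q_in / k, which is the qualitative behaviour the abstract ascribes to the single-outflow control law.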

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 225
1243 Effect of Automatic Self Transcending Meditation on Perceived Stress and Sleep Quality in Adults

Authors: Divya Kanchibhotla, Shashank Kulkarni, Shweta Singh

Abstract:

Chronic stress and poor sleep quality reduce mental health and increase the risk of developing depression and anxiety. There is increasing evidence for the utility of meditation as an adjunct clinical intervention for conditions like depression and anxiety. The present study explores the impact of Sahaj Samadhi Meditation (SSM), a category of Automatic Self Transcending Meditation (ASTM), on perceived stress and sleep quality in adults. The study design was a single-group pre-post assessment. The Perceived Stress Scale (PSS) and the Pittsburgh Sleep Quality Index (PSQI) were used. Fifty-two participants completed the PSS and 60 participants completed the PSQI at the beginning of the program (day 0), after two weeks (day 16), and at two months (day 60). Significant pre-post differences in perceived stress level between Day 0 and Day 16 (p < 0.01; Cohen's d = 0.46) and between Day 0 and Day 60 (p < 0.01; Cohen's d = 0.76) clearly demonstrated that, by practicing SSM, participants experienced a reduction in perceived stress. The effect size observed on the 16th day was small to medium, while on the 60th day a medium to large effect size was observed. In addition, significant pre-post differences in sleep quality between Day 0 and Day 16 and between Day 0 and Day 60 (p < 0.05) clearly demonstrated that, by practicing SSM, participants experienced improvement in sleep quality. Compared with the Day 0 assessment, participants demonstrated significant improvement in quality of sleep on Day 16 and Day 60. The effect size observed on the 16th day was small, while on the 60th day a small to medium effect size was observed.
In the current study, we found that after practicing SSM for two months, participants reported a reduction in perceived stress: they felt more confident about their ability to handle personal problems, were better able to cope with all the things they had to do, felt on top of things, and felt less anger. Participants also reported that their overall sleep quality improved: they took less time to fall asleep, had fewer sleep disturbances, and experienced less daytime dysfunction due to sleep deprivation. The present study provides clear evidence of the efficacy and safety of non-pharmacological interventions such as SSM in reducing stress and improving sleep quality. Thus, ASTM may be considered a useful intervention for reducing psychological distress in healthy, non-clinical populations, and it can be an alternative remedy for treating poor sleep and decreasing the use of harmful sedatives.
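
For reference, pre-post effect sizes of the kind reported above correspond to Cohen's d computed from the score changes. A minimal sketch using one common convention (dividing by the SD of the change scores); the PSS values below are hypothetical, not the study's data:

```python
import math

def paired_cohens_d(pre, post):
    """Cohen's d for a pre-post design, using the SD of the change
    scores (one common convention; others divide by the pooled SD)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd

# Hypothetical PSS scores for illustration (lower = less stress)
pre = [24, 20, 27, 22, 25, 19, 23, 26]
post = [18, 17, 22, 19, 20, 16, 19, 21]
print(f"d = {paired_cohens_d(pre, post):.2f}")
```

A negative d here indicates a drop in PSS scores, i.e. reduced perceived stress; the magnitude is then read against the usual small/medium/large benchmarks (0.2/0.5/0.8).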

Keywords: automatic self transcending meditation, Sahaj Samadhi meditation, sleep, stress

Procedia PDF Downloads 133
1242 Population Dynamics of Cyprinid Fish Species (Mahseer: Tor Species) and Its Conservation in Yamuna River of Garhwal Region, India

Authors: Davendra Singh Malik

Abstract:

India is one of the mega-biodiversity countries in the world, contributing about 11.72% of global fish diversity. The Yamuna river, the longest tributary of the Ganga river ecosystem, provides a natural habitat for the existing fish diversity of the Himalayan region of the Indian subcontinent. Several hydropower dams and barrages have been constructed at different locations on the major rivers of the Garhwal region. These dams pose a major ecological threat to existing freshwater ecosystems, altering water flows, interrupting ecological connectivity, and fragmenting the habitats of native riverine fish species. Mahseer fishes (Indian carp) of the genus Tor are large cyprinids endemic to continental Asia, popularly known as ‘game or sport fishes’; they continue to be decimated by the fragmentation of natural habitats due to damming of the riverine water flow and are categorized as threatened fishes of India. A total of 24 freshwater fish species were recorded from the Yamuna river. The present catch data revealed that mahseer fishes (Tor tor and Tor putitora) contributed about 32.5%, 25.6%, and 18.2% of the catch in the upper, middle, and lower riverine stretches of the Yamuna river, respectively. The 360–450 mm length range dominated the mahseer catch composition. The CPUE (catch per unit effort) of mahseer also indicated a sharp decline in fish biomass and changes in growth pattern, sex ratio, and maturity stages. Only 12.5–14.8% of female mahseer brooders showed mature phases during the breeding months. The fecundity of mature female brooders ranged from 2,500 to 4,500 ova during the breeding months. The present status of the mahseer fishery reflects overexploitation in the Yamuna river. The mahseer population is shrinking continuously in the downstream reaches of the Yamuna river due to the cumulative effects of various ecological stresses.
A mahseer conservation programme has been implemented as 'in situ fish conservation' to enhance the viable population size of mahseer species and restore the lost genetic diversity of mahseer germplasm in the Yamuna river of the Garhwal Himalayan region.

Keywords: conservation practice, population dynamics, tor fish species, Yamuna River

Procedia PDF Downloads 254
1241 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks and video streaming, and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to decreased insertion losses (IL), and achieves effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
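
Encircled flux itself is simply the cumulative fraction of near-field power inside a given radius. A minimal sketch, using an assumed Gaussian launch profile rather than measured data (the 4.5 µm and 19 µm radii below are commonly cited EF control radii for 50 µm core fiber; the spot size is an assumption):

```python
import math

def encircled_flux(radii, intensity, r_limit):
    """Fraction of total near-field power within r_limit.
    radii must be increasing; trapezoid rule on 2*pi*r*I(r)."""
    def integrate(rs, Is):
        return sum(0.5 * (2*math.pi*r0*i0 + 2*math.pi*r1*i1) * (r1 - r0)
                   for r0, r1, i0, i1 in zip(rs, rs[1:], Is, Is[1:]))
    k = sum(1 for r in radii if r <= r_limit)
    return integrate(radii[:k], intensity[:k]) / integrate(radii, intensity)

# Illustrative Gaussian near-field on a 50/125 um MM fiber core
radii = [i * 0.25 for i in range(201)]            # 0..50 um
w = 12.0                                          # assumed spot size [um]
profile = [math.exp(-2 * (r / w) ** 2) for r in radii]
print(f"EF within 4.5 um: {encircled_flux(radii, profile, 4.5):.2f}")
print(f"EF within 19 um:  {encircled_flux(radii, profile, 19.0):.2f}")
```

Connector shift or eccentricity changes the measured profile and hence shifts these EF values, which is the effect the defect experiments above quantify.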

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 375
1240 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources

Authors: Alana E. Jansen

Abstract:

With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, the findings have been conflicting, particularly when composition is examined alongside other team factors. To broaden the theoretical perspectives on this topic, and potentially explain some of the inconsistencies left open by various models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework for examining how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. The study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, combining Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 completed interviews. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity.
Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources that vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. The findings may help explain some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding why only some teams achieve high-performance status.

Keywords: diversity, high-performing teams, job demands and resources, team effectiveness

Procedia PDF Downloads 186
1239 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin

Authors: Mohamed A. Saleem

Abstract:

Using mostly seismic data, this study aims to show some examples of igneous intrusions found in parts of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35ᵒ E and 19.35ᵒ E and latitudes 27.8ᵒ N and 28.0ᵒ N. Based on a variety of criteria that are usually used as marks of igneous intrusions, twelve intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets (or what is described as a dike-like connection), stratigraphic violation, saucer shape, and roughness. Because they lie between the hosting layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry between some of these sills. Each sill has been given a name, such as S-1, S-2, ..., S-12, simply to distinguish the sills from one another. To avoid repetition, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characteristics that are not common are described for each sill individually. Sills S-1, S-2, and S-3 are approximately parallel to one another, their shape being governed by the syncline structure of their host layers. The faults that dominate the pre-Upper Cretaceous strata have a significant impact on the sills, causing their discontinuity, while the upper layers form anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence.
A dramatic escalation of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences were deposited before the sills intruded and that the faults occurred more recently. The pre-Upper Cretaceous unit hosts sills S-1, S-2, ..., S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Above sills S-1, S-2, and S-3, which are the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively. The absence of such marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.

Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data

Procedia PDF Downloads 112
1238 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based Automated Mineralogy (AM) has become the standard tool for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often at a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator at the SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping, saving an X-ray spectrum for each pixel or segment. This approach allows the user to browse elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of existing data for the presence of previously unconsidered elements is possible without repeating the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because servers offer larger storage capacity than local drives and allow multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models.
The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
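As a rough illustration of what downstream processing of such a tabular export might look like, the sketch below aggregates a particle-by-particle table into a modal (area-%) mineralogy summary. The column names and CSV layout here are assumptions for illustration only, not the actual TIMA export schema.

```python
import csv
import io
from collections import defaultdict

# Hypothetical particle-by-particle export: one row per grain, with a mineral
# name and a grain area in square micrometres. These column names are
# illustrative assumptions, not the real TIMA format.
EXPORT = """particle_id,mineral,area_um2
1,Quartz,120.5
1,Chalcopyrite,30.0
2,Quartz,200.0
2,Pyrite,50.0
"""

def modal_mineralogy(csv_text):
    """Aggregate grain areas per mineral into an area-% (modal) composition."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["mineral"]] += float(row["area_um2"])
    grand = sum(totals.values())
    return {mineral: 100.0 * area / grand for mineral, area in totals.items()}

modal = modal_mineralogy(EXPORT)
```

A script along these lines could feed the plant performance dashboards or geometallurgical databases mentioned above, once adapted to the real exported fields.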

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 108
1237 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds that are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is derived from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation and the fitness of their producers. However, in spite of this ecological importance, degradation as a part of the fate of phenazines has so far received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full-length 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant yield of cells from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹) and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation.
As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which catalyzed the conversion of phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after two days of incubation and was then transformed to aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 222
1236 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a strong ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
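Steps (i) and (ii) can be illustrated with a toy sketch: building a k-mer vocabulary and representing a read as a vector over it. The paper learns dense embeddings; the normalized bag-of-k-mers count vector below is a simple stand-in, and the k-mer length of 3 is an assumption chosen only to keep the vocabulary small.

```python
from collections import Counter
from itertools import product

K = 3  # k-mer length; an illustrative choice, not necessarily the paper's
VOCAB = ["".join(p) for p in product("ACGT", repeat=K)]  # all 64 3-mers
INDEX = {kmer: i for i, kmer in enumerate(VOCAB)}

def read_vector(read):
    """Represent a DNA read as a normalized bag-of-k-mers vector.

    metagenome2vec learns dense k-mer embeddings; raw counts stand in
    for them here purely to show the vocabulary/representation idea."""
    counts = Counter(read[i:i + K] for i in range(len(read) - K + 1))
    total = sum(counts.values())
    vec = [0.0] * len(VOCAB)
    for kmer, c in counts.items():
        if kmer in INDEX:  # skip k-mers containing ambiguous bases (N, etc.)
            vec[INDEX[kmer]] = c / total
    return vec

v = read_vector("ACGTACGTAC")
```

In the full pipeline, such read representations would then be pooled per genome and fed to the attention-based multiple instance learning classifier of step (iv).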

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 124
1235 BLS-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health

Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz

Abstract:

COVID-19 has been a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. Nariño Department, in the southwest of the country, is notable for lying on the border with Ecuador and constantly faces population movement that affects the transmission of infections between the two countries. In Nariño, early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has shaped the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with biological flexibility, classified as Risk Group 3 agents, can take advantage of epidemiological opportunities, generating the need to increase clinical diagnostic capacity, particularly in border regions. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by harmonizing methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems together with a finely controlled electrical system, and PCR systems as new diagnostic tools, which increases laboratory capacity. Protection in the BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza virus, respiratory syncytial virus, and malaria parasites, among others.
The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, with capacity to receive about 60 samples daily. It is expected that this project will be aligned with the health policies of neighboring countries to increase research capacity.

Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia

Procedia PDF Downloads 90
1234 Using a Phenomenological Approach to Explore the Experiences of Nursing Students in Coping with Their Emotional Responses in Caring for End-Of-Life Patients

Authors: Yun Chan Lee

Abstract:

Background: End-of-life care is a significant part of nursing practice, and student nurses are likely to meet dying patients in many placement areas. It is therefore important to understand the emotional responses and coping strategies of student nurses in order for nursing education systems to have some appreciation of how nursing students might be supported in the future. Methodology: This research used a qualitative phenomenological approach. Six student nurses undertaking a degree-level adult nursing course were interviewed. Their responses to questions were analyzed using interpretative phenomenological analysis. Findings: The findings identified three main themes. First, the common experience of 'unpreparedness'. While a very small number of participants felt that this was unavoidable and that 'no preparation is possible', the majority felt that they were unprepared because of 'insufficient input' from the university and as a result of wider 'social taboos' around death and dying. The second theme showed that emotions were affected by 'the personal connection to the patient', with the important sub-themes of 'the evoking of memories', 'involvement in care' and 'sense of responsibility'. The third theme, the coping strategies used by students, seemed to fall into two broad areas: those 'internal' to the student and those 'external'. The internal coping strategies comprised 'detachment', 'faith', 'rationalization' and 'reflective skills'. Regarding the external coping strategies, 'clinical staff' and 'the importance of family and friends' were the main forms of external support accessed. Implications: It is clear that student nurses are affected emotionally by caring for dying patients, and many of them feel apprehension even before they begin their placements, but very often this is unspoken. Those anxieties become more pronounced during the placements and continue after them.
This has implications for when support is offered and possibly for its duration. Another significant finding of the study is that participants often highlighted their wish to speak to qualified nurses after their experiences of being involved in end-of-life care, especially when they had been present at the time of death. Many of the students reported that qualified nurses were not available to them, which seemed to be due to a number of reasons. Because the qualified nurses were not available, students had to turn to family members and friends to talk to. Consequently, the implication of this study is the need not only to educate student nurses but also to educate qualified mentors on the importance of providing emotional support to students.

Keywords: nursing students, coping strategies, end-of-life care, emotional responses

Procedia PDF Downloads 161
1233 Biogas Production from Kitchen Waste for a Household Sustainability

Authors: Vuiswa Lucia Sethunya, Tonderayi Matambo, Diane Hildebrandt

Abstract:

South Africa's informal settlements produce tonnes of kitchen waste (KW) per year, which is dumped into landfills. These landfill sites are normally located in close proximity to the households of poor communities; as a result, young children from these communities end up playing on the landfill sites, which poses health hazards because of the methane, carbon dioxide and sulphur gases produced there. To reduce the large amount of organic material being deposited into landfills, and to provide a cleaner environment for the community and especially the children, an energy conversion process, anaerobic digestion of the organic waste to produce biogas, was implemented. In this study, the digestion of various kitchen wastes was investigated in order to understand and develop a system suitable for household use to produce biogas for cooking. Three sets of waste of different nutritional compositions, as acquired from the waste streams of a household, were digested at mesophilic temperature (35 °C). These sets of KW were co-digested with cow dung (CW) at different ratios to observe the microbial behaviour and the system's stability in a laboratory-scale system. Gas chromatography-flame ionization detector analyses were performed to identify and quantify the organic compounds in the liquid samples from co-digested and mono-digested food waste. Acetic acid, propionic acid, butyric acid and valeric acid were the fatty acids studied. Acetic acid (1.98 g/L), propionic acid (0.75 g/L) and butyric acid (2.16 g/L) were the most prevalent fatty acids. The results obtained from the organic acid analysis suggest that KW can be an innovative substitute for animal manure in biogas production.
The shorter degradation period in which the microbes break down the organic compounds to produce fatty acids during the anaerobic digestion of KW also makes it a better feedstock during periods of high energy demand. The C/N ratio analysis showed that, of the three waste streams, the first stream, containing vegetables (55%), fruits (16%), meat (25%) and pap (4%), yielded the most methane-rich biogas, 317 mL/g of volatile solids (VS), at a C/N ratio of 21.06. Overall, this shows that a household will require a heterogeneous composition of nutrient-based waste to be fed into the digester to achieve the best biogas yield to sustain its cooking needs.
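The C/N ratio of a mixed feed like the first stream can be estimated as a mass-weighted blend of its components. The sketch below shows the calculation; the per-component carbon and nitrogen fractions are illustrative assumptions chosen for the example, not measurements from this study, so the resulting ratio will not reproduce the reported 21.06.

```python
def blend_cn_ratio(components):
    """Estimate the C/N ratio of a mixed feedstock as a mass-weighted blend.

    components: list of (mass_fraction, carbon_frac, nitrogen_frac) tuples,
    with carbon/nitrogen fractions per unit dry mass. The numbers used
    below are illustrative assumptions, not data from this study."""
    total_c = sum(f * c for f, c, n in components)
    total_n = sum(f * n for f, c, n in components)
    return total_c / total_n

# Hypothetical composition of the first stream: vegetables, fruits, meat, pap.
feed = [
    (0.55, 0.40, 0.025),  # vegetables
    (0.16, 0.45, 0.012),  # fruits
    (0.25, 0.50, 0.080),  # meat (protein-rich, so nitrogen-heavy)
    (0.04, 0.44, 0.008),  # pap (maize porridge)
]
cn = blend_cn_ratio(feed)
```

In practice the component fractions would come from elemental (CHN) analysis of each waste type, after which the blend could be tuned toward the optimal C/N window for methanogenesis.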

Keywords: anaerobic digestion, biogas, kitchen waste, household

Procedia PDF Downloads 198
1232 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches

Authors: Mariam Matiashvili

Abstract:

Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires extraordinary syntactic-pragmatic structures: arguments that add credibility to a statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing what elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently. In this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. The research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of the argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze the argumentative structures: Lexical Classification and Analysis, which identifies lexical items that are relevant in the process of creating argumentative texts and builds a lexicon of argumentation (presenting groups of words gathered from a semantic point of view); Grammatical Analysis and Classification, meaning grammatical analysis of the words and phrases identified on the basis of the argumentation lexicon; and Argumentation Schemes, which describes and identifies the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is 'Argument from Analogy', the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. Linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
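The lexical classification step can be sketched as simple lexicon lookup: a sentence is flagged for a cue category when it contains items from that category's word list. The toy lexicon below uses English stand-ins purely for illustration; the lexicon described above is Georgian and far richer, and these cue sets are assumptions, not the paper's actual lexicon.

```python
# Toy cue lexicon (English stand-ins for the Georgian lexicon described
# in the paper; the category names follow the paper's claim/support/attack
# division, but the word lists themselves are illustrative assumptions).
CUE_LEXICON = {
    "claim":   {"must", "should", "clearly", "certainly"},
    "support": {"because", "since", "therefore", "consequently"},
    "attack":  {"however", "but", "although", "nevertheless"},
}

def tag_sentence(sentence):
    """Return the argumentative cue categories whose lexicon items appear
    in the sentence -- a crude stand-in for the lexical classification step."""
    words = {w.strip(".,!?;:").lower() for w in sentence.split()}
    return sorted(cat for cat, cues in CUE_LEXICON.items() if words & cues)

tags = tag_sentence("We must act now, because the evidence is overwhelming.")
```

A real system would combine such lexical cues with the grammatical analysis and argumentation-scheme matching described above, rather than relying on keyword lookup alone.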

Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon

Procedia PDF Downloads 69
1231 Preparation and CO2 Permeation Properties of Carbonate-Ceramic Dual-Phase Membranes

Authors: H. Ishii, S. Araki, H. Yamamoto

Abstract:

In recent years, carbon dioxide (CO2) separation technology has been required both to reduce the emission of global warming gases and to make efficient use of fossil fuels. Since CO2 accounts for the largest share of greenhouse gas emissions, it is considered to have the greatest influence on global warming. Therefore, CO2 separation technologies with high efficiency and low cost need to be established. In this study, we focused on membrane separation, as compared with conventional separation techniques such as distillation or cryogenic separation, and prepared carbonate-ceramic dual-phase membranes to separate CO2 at high temperature. As porous ceramic substrates, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+σ, La0.6Sr0.4Ti0.3Fe0.7O3 and Ca0.8Sr0.2Ti0.7Fe0.3O3-α (PLNCG, LSTF and CSTF) were examined. PLNCG, LSTF and CSTF have perovskite-type structures, which offer high stability and high oxygen ion diffusivity and become ion-conducting when doped with another metal ion. The PLNCG, LSTF and CSTF powders were prepared by a solid-phase process using the appropriate carbonates or oxides. To prepare the porous substrates, these powders were mixed with carbon black (20 wt%) and a few drops of polyvinyl alcohol (5 wt%) aqueous solution. The powder mixture was packed into a stainless steel mold (13 mm) and uniaxially pressed into disk shape under a pressure of 20 MPa for 1 minute. The PLNCG, LSTF and CSTF disks were calcined in air for 6 h at 1473, 1573 and 1473 K, respectively. The carbonate mixture (Li2CO3/Na2CO3/K2CO3: 42.5/32.5/25 in mole percent) was placed inside a crucible and heated to 793 K, and the porous substrates were infiltrated with the molten carbonate mixture at 793 K. The crystalline structures of the fresh membranes, and of the membranes after infiltration with the molten carbonate mixture, were determined by X-ray diffraction (XRD).
We confirmed that the crystal structures of PLNCG and CSTF changed slightly after infiltration with the molten carbonate mixture. CO2 permeation experiments with the PLNCG-carbonate, LSTF-carbonate and CSTF-carbonate membranes were carried out at 773-1173 K. A gas mixture of CO2 (20 mol%) and He was introduced at a flow rate of 50 ml/min to one side of the membrane, and the permeated CO2 was swept by N2 (50 ml/min). We thereby confirmed the effect of the ceramic material and of temperature on CO2 permeation at high temperature.
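In a sweep-gas experiment of this kind, membrane performance is typically reported as a permeance: the permeated CO2 molar flow divided by membrane area and CO2 partial-pressure difference. The sketch below shows that calculation; every numerical input (permeate CO2 fraction, partial pressures) is an illustrative assumption, not a result from this study.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def co2_permeance(sweep_flow_ml_min, co2_frac_permeate, area_m2,
                  p_feed_co2_pa, p_perm_co2_pa, T=273.15):
    """CO2 permeance in mol m^-2 s^-1 Pa^-1 from a sweep-gas measurement.

    The permeated CO2 molar flow is inferred from the sweep flow and the
    CO2 mole fraction measured in the permeate stream. All example numbers
    below are illustrative assumptions, not data from this study."""
    # Sweep flow converted to mol/s, taking ml/min at 101325 Pa as ideal gas.
    sweep_mol_s = (sweep_flow_ml_min * 1e-6 / 60) * 101325 / (R * T)
    co2_mol_s = sweep_mol_s * co2_frac_permeate / (1 - co2_frac_permeate)
    return co2_mol_s / (area_m2 * (p_feed_co2_pa - p_perm_co2_pa))

# Illustrative case: 50 ml/min N2 sweep, 0.5% CO2 measured in the permeate,
# a 13 mm disk, 20 kPa feed-side CO2 partial pressure, ~0.1 kPa permeate side.
area = 3.1416 * (0.013 / 2) ** 2  # disk area in m^2
P = co2_permeance(50, 0.005, area, 20000, 100)
```

With real gas-chromatograph readings of the permeate composition at each temperature, the same formula would give the 773-1173 K permeance curves for the three membranes.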

Keywords: membrane, perovskite structure, dual-phase, carbonate

Procedia PDF Downloads 365
1230 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) concluded four years of work by the European Commission in this area in the European Union. Considering the far-reaching changes the GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies had to prepare for the new regulation by 25 May 2018. The idea that represents a new way of looking at data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very particular regulations specifying technical and organisational security measures; the Polish implementing rules, for example, even specify how long a password should be. Under the new approach, from May 2018, controllers and processors will be obliged to apply security measures adequate to the level of risk associated with the specific data processing. Risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. The GDPR does not prescribe which security measures should be applied; the recitals give only examples, such as anonymization or encryption. It is the controller's decision what type of security measures to consider sufficient, and the controller will be responsible if these measures are insufficient or if the identification of the risk level is incorrect. The regulation indicates a few levels of risk. Recital 76 mentions risk and high risk, but some lawyers argue that there is one more category: low risk/no risk. Low-risk/no-risk data processing is a situation in which the processing is unlikely to result in a risk to the rights and freedoms of natural persons.
The GDPR lists types of data processing for which the controller does not have to evaluate the level of risk because the processing has already been classified as 'high risk', e.g. large-scale processing of special categories of data, or processing using new technologies. The methodology will include analysis of legal regulations, e.g. the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is the key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on incorrect data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 251
1229 Wireless Integrated Switched Oscillator Impulse Generator with Application in Wireless Passive Electric Field Sensors

Authors: S. Mohammadzamani, B. Kordi

Abstract:

Wireless electric field sensors are in high demand in a number of applications that require measuring electric fields, such as investigations of high-power systems and the testing of high-voltage apparatus. Passive wireless electric field sensors are the most desirable since they do not require a source of power and are interrogated wirelessly. A passive wireless electric field sensor has been designed and fabricated by our research group. In the sensor's wireless interrogation system, a wireless radio frequency impulse generator needs to be employed. A compact wireless impulse generator composed of an integrated resonant switched oscillator (SWO) and a pulse-radiating antenna has been designed and fabricated in this research. The fundamentals of switched oscillators were introduced by C. E. Baum. A switched oscillator consists of a low-impedance transmission line charged by a DC source through a large impedance at the desired frequencies, terminated with a high-impedance antenna at one end and a fast closing switch at the other. Once the line is charged, the switch closes and short-circuits the transmission line, so that a fast transient wave is generated and travels along the line. Because of the mismatch between the antenna and the transmission line, only a part of the fast transient wave is radiated, and the remainder reflects back. At the other end of the transmission line is the closed switch; consequently, a second reflection with reversed sign propagates towards the antenna, and the wave continues back and forth. Hence, at the terminal of the antenna there is a series of positive and negative pulses with descending amplitude. In this research, a single-ended quarter-wavelength switched oscillator has been designed and simulated at 800 MHz.
The simulation results show that the designed switched oscillator generates pulses of decreasing amplitude at a frequency of 800 MHz, with a maximum amplitude of 10 V and a bandwidth of about 10 MHz at the antenna end. The switched oscillator has been fabricated using a 6 cm long coaxial-cable transmission line charged by a DC source, with an 8 cm monopole as the pulse-radiating antenna. A 90 V gas discharge switch has been employed as the fast closing switch. The fabricated switched oscillator sends a series of pulses of decreasing amplitude at a frequency of 790 MHz, with a maximum amplitude of 0.3 V at a distance of 30 cm.
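The physical length of a quarter-wavelength resonator follows directly from the target frequency and the cable's velocity factor, as the short sketch below shows. The velocity factor of 0.66 (typical of solid-polyethylene coaxial cable) is an assumption here, since the cable type is not specified above.

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_length(freq_hz, velocity_factor):
    """Physical length of a quarter-wavelength transmission-line resonator.

    velocity_factor is the cable's wave speed relative to c; 0.66 is a
    typical value for solid-PE coax and is assumed here for illustration."""
    return velocity_factor * C_VACUUM / (4.0 * freq_hz)

L = quarter_wave_length(800e6, 0.66)  # length in metres for 800 MHz
```

With these assumed values, the result is about 6.2 cm, which is consistent with the 6 cm coaxial line used in the fabricated oscillator.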

Keywords: electric field measurement, impulse radiating antenna, switched oscillator, wireless impulse generator

Procedia PDF Downloads 180
1228 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed, undirected network. This paper presents an improved control protocol in three respects. First, for the purpose of improving both tracking and synchronization performance, this paper presents a distributed leader-following method. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by using control parameter optimization, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as single-integrator and double-integrator dynamics are unrealistic. Moreover, previous algorithms require the acceleration information of the leader to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which dispenses with the leader's acceleration. The presented scheme optimizes synchronization performance as well as providing satisfactory robustness. Furthermore, while existing algorithms can obtain a stable synchronous system, the obtained stable system may encounter disturbances that destroy the synchronization. Focusing on this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results have shown that all followers asymptotically converge to a consistent state when one follower fails to follow the virtual leader during a large enough disturbance, which illustrates the good synchronization control accuracy of the approach.
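The basic leader-following consensus protocol that such schemes build on can be sketched in a few lines. The discrete-time, single-integrator version below is a deliberate simplification (the paper itself notes that single-integrator models are unrealistic for motors); the graph, pinning gains, and step gain are illustrative assumptions.

```python
def leader_following_step(speeds, leader, adjacency, pin_gains, gain=0.2):
    """One discrete step of the basic leader-following consensus protocol:
        x_i <- x_i + gain * ( sum_j a_ij (x_j - x_i) + b_i (leader - x_i) )
    adjacency[i][j] = a_ij on an undirected network; pin_gains[i] = b_i is
    nonzero only for agents that can observe the leader directly."""
    n = len(speeds)
    new = []
    for i in range(n):
        u = sum(adjacency[i][j] * (speeds[j] - speeds[i]) for j in range(n))
        u += pin_gains[i] * (leader - speeds[i])
        new.append(speeds[i] + gain * u)
    return new

# Three motors on a line graph; only motor 0 is pinned to the leader speed.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
b = [1.0, 0.0, 0.0]
x = [0.0, 5.0, 12.0]        # initial motor speeds
for _ in range(200):
    x = leader_following_step(x, 10.0, A, b)
```

Because the pinned graph is connected, all three speeds converge to the leader value of 10; the observer-based, fault-tolerant scheme described above replaces the direct state feedback here with estimated speeds and adds switching to ride out disturbances.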

Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 123
1227 Altering Surface Properties of Magnetic Nanoparticles with Single-Step Surface Modification with Various Surface Active Agents

Authors: Krupali Mehta, Sandip Bhatt, Umesh Trivedi, Bhavesh Bharatiya, Mukesh Ranjan, Atindra D. Shukla

Abstract:

Owing to dominant surface forces and large-scale surface interactions, nanoscale particles are difficult to keep suspended in various media. Magnetic nanoparticles of iron oxide offer a great deal of promise due to their ease of preparation, reasonable magnetic properties, low cost and environmental compatibility. We intend to modify the surface of magnetic Fe₂O₃ nanoparticles with selected surface-modifying agents using simple and effective single-step chemical reactions in order to enhance the dispersibility of the magnetic nanoparticles in non-polar media. Magnetic particles were prepared by hydrolysis of Fe²⁺/Fe³⁺ chlorides and their subsequent oxidation in aqueous medium. The dried particles were then treated separately with octadecyl quaternary ammonium silane (Terrasil™), stearic acid and the gallic acid ester of stearyl alcohol in ethanol, yielding S-2 to S-4, respectively; the untreated Fe₂O₃ was designated S-1. The surface-modified nanoparticles were then analysed by Dynamic Light Scattering (DLS), Fourier Transform Infrared spectroscopy (FTIR), X-Ray Diffraction (XRD), Thermogravimetric Analysis (TGA) and Scanning Electron Microscopy with Energy-Dispersive X-Ray analysis (SEM-EDAX). Characterization reveals particle sizes averaging 20-50 nm, both with and without modification. The crystallite size in all cases remained ~7.0 nm, with the diffractogram matching the Fe₂O₃ crystal structure. FT-IR suggested the presence of surfactants on the nanoparticles' surface, also confirmed by SEM-EDAX, where elemental mapping proved their presence. TGA indicated weight losses in S-2 to S-4 from 300 °C onwards, suggesting the presence of organic moieties. The hydrophobic character of the modified surfaces was confirmed by contact angle analysis: all modified nanoparticles showed strongly hydrophobic behaviour, with average contact angles of ~129° for S-2, ~139.5° for S-3 and ~151° for S-4.
These results indicate that the surface-modified particles are strongly water-repellent and readily dispersible in non-polar media, making them ideal candidates for suspension in oil-based fluids, polymer matrices, etc. We are pursuing detailed suspension/sedimentation studies of these particles in various oils to test this conjecture.

Keywords: iron nanoparticles, modification, hydrophobic, dispersion

Procedia PDF Downloads 140
1226 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool, standardized by the International Organization for Standardization (ISO), that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies apply the LCA methodology within the site remediation field to compare decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could guide the selection of the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions alongside environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data were collected from previous LCA studies of site remediation technologies; this step identified knowledge gaps and limitations in project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared are bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The LCA study is being modelled in the software SimaPro. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they vary across sites. The results will then be summarized, and an interactive Excel-based tool will be developed to help select the most sustainable site remediation technology.
Preliminary LCA results show improved sustainability for each decontamination technology compared to the no-remediation option at a gasoline-contaminated site. Sensitivity analyses are now being completed on site-specific parameters, including soil type and transportation distances, to determine how the environmental impacts vary across other contaminated gasoline locations. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Using these results, the sustainability tool created to assist in selecting the overall best option will be refined.
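The abstract does not specify how the tool combines environmental, economic, and social impacts into a single recommendation; one common approach such a tool might take is a normalized weighted-sum multi-criteria score. The categories, weights, and impact values below are invented for illustration only:

```python
# Hypothetical sketch of a weighted multi-criteria ranking for remediation
# technologies. Impact values and weights are illustrative assumptions,
# not data from the study (lower impact = better).

def rank_technologies(impacts, weights):
    """impacts: {tech: {category: value}}; returns techs ranked best-first
    by a max-normalized weighted sum of their impact categories."""
    cats = weights.keys()
    # Normalize each category by its worst (largest) value across options.
    maxima = {c: max(t[c] for t in impacts.values()) for c in cats}
    score = lambda tech: sum(
        weights[c] * impacts[tech][c] / maxima[c] for c in cats
    )
    return sorted(impacts, key=score)

impacts = {
    "bioremediation": {"environmental": 20, "economic": 40, "social": 30},
    "excavation":     {"environmental": 60, "economic": 70, "social": 50},
    "no_action":      {"environmental": 90, "economic": 10, "social": 80},
}
weights = {"environmental": 0.5, "economic": 0.3, "social": 0.2}
print(rank_technologies(impacts, weights))
```

With these assumed numbers, bioremediation ranks first; in practice the weights themselves would be a sensitivity-analysis input.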

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 57
1225 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria

Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa

Abstract:

In Nigeria, the Federal Government is working to ensure that everyone has access to food that is nutritionally adequate and safe. In the southwest of Nigeria, however, notably in Ondo State, valuable minerals such as oil and gas, bitumen, kaolin, limestone, talc, columbite, tin, gold, coal, and phosphate are abundant. As a result of this mineral presence, some regions of Ondo State are associated with elevated levels of natural radioactivity. In this work, baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing probable radiological health impacts to be predicted. To this effect, maize (Zea mays), yam (Dioscorea alata) and cassava (Manihot esculenta) tubers were collected from farmlands in the State because they supply the majority of the population's dietary needs. Ondo State was divided into eight zones to provide comprehensive coverage of the study region. The maize, yam, and cassava samples were dried at room temperature until they reached constant weight. They were then pulverized, homogenized, packed as 250 g portions in 1-liter Marinelli beakers, and kept for 28 days to achieve secular equilibrium. The activity concentrations of Radium-226 (Ra-226), Thorium-232 (Th-232), and Potassium-40 (K-40) in the food samples were determined using gamma-ray spectrometry. First, the high-purity germanium detector was calibrated using standard radioactive sources. The gamma counting, which lasted 36,000 s per sample, was carried out at the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentrations of Ra-226, Th-232 and K-40 in yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg and 48.84 ± 3.14 Bq/kg, respectively. The radionuclide content of maize gave mean values of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232 and 41.11 ± 2.16 Bq/kg for K-40.
The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232 and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values exceed the annual dose guideline of 0.35 µSv/y for the general public. The values obtained in this work therefore indicate radiological contamination of some foodstuffs consumed in parts of Ondo State. We recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these are important contributors to internal human exposure through ingestion, inhalation, or open wounds.
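The committed effective dose from ingestion is conventionally computed as E = Σᵢ Cᵢ · I · eᵢ, summing over radionuclides, where Cᵢ is the measured activity concentration (Bq/kg), I the annual intake of the foodstuff (kg/y), and eᵢ the ingestion dose coefficient (Sv/Bq). A minimal sketch of this calculation, using ICRP-72-style adult coefficients and an assumed intake (not the consumption data used in the study, so the result does not reproduce the reported zone values):

```python
# Sketch of the committed-effective-dose estimate E = sum(C_i * I * e_i).
# Dose coefficients are ICRP Publication 72 adult ingestion values;
# the intake rate is an illustrative assumption.

DOSE_COEFF_SV_PER_BQ = {
    "Ra-226": 2.8e-7,
    "Th-232": 2.3e-7,
    "K-40":   6.2e-9,
}

def committed_effective_dose(activity_bq_per_kg, intake_kg_per_year):
    """Annual committed effective dose in microsieverts (µSv/y)."""
    dose_sv = sum(
        activity_bq_per_kg[nuc] * intake_kg_per_year * DOSE_COEFF_SV_PER_BQ[nuc]
        for nuc in activity_bq_per_kg
    )
    return dose_sv * 1e6  # Sv -> µSv

# Mean activities measured for yam (from the abstract), assumed 0.5 kg/y intake:
yam = {"Ra-226": 1.91, "Th-232": 2.34, "K-40": 48.84}
print(f"{committed_effective_dose(yam, intake_kg_per_year=0.5):.2f} µSv/y")
```

Note that despite K-40's much higher activity, Ra-226 and Th-232 dominate the dose because their ingestion coefficients are roughly 40 times larger.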

Keywords: contamination, environment, radioactivity, radionuclides

Procedia PDF Downloads 103
1224 Application of Zeolite Nanoparticles in Biomedical Optics

Authors: Vladimir Hovhannisyan, Chen Yuan Dong

Abstract:

Recently, nanoparticles (NPs) have been introduced in biomedicine as effective agents for cancer-targeted drug delivery and noninvasive tissue imaging. The most important requirements for these agents are non-toxicity, biocompatibility and stability. In view of these criteria, zeolite (ZL) NPs may be considered excellent candidates for biomedical applications. ZLs are crystalline aluminosilicates consisting of oxygen-sharing SiO₄ and AlO₄ tetrahedral groups united by common vertices in a three-dimensional framework, containing pores with diameters from 0.3 to 1.2 nm. Generally, the behavior and physical properties of ZLs are studied by SEM, X-ray spectroscopy, and AFM, whereas optical spectroscopic and microscopic approaches are not effective enough because of strong scattering in common ZL bulk materials and powders. This light scattering can be reduced by using ZL NPs. ZL NPs have a large external surface area, high dispersibility in both aqueous and organic solutions, high photo- and thermal stability, and an exceptional ability to adsorb various molecules and atoms in their nanopores. In this report, using multiphoton microscopy and nonlinear spectroscopy, we investigate the nonlinear optical properties of clinoptilolite-type ZL micro- and nanoparticles with average diameters of 2200 nm and 240 nm, respectively. Multiphoton imaging is performed with a laser scanning microscope system (LSM 510 META, Zeiss, Germany) coupled to a femtosecond titanium:sapphire laser (repetition rate 80 MHz, pulse duration 120 fs, wavelength 720-820 nm) (Tsunami, Spectra-Physics, CA). Two Zeiss Plan-Neofluar objectives (air immersion 20×/NA 0.5 and water immersion 40×/NA 1.2) are used for imaging. For detection of the nonlinear response, we use two detection channels with 380-400 nm and 435-700 nm spectral bandwidths.
We demonstrate that ZL micro- and nanoparticles produce a nonlinear optical response under near-infrared femtosecond laser excitation. The interaction of hypericin, chlorin e6 and other dyes with ZL NPs, and their photodynamic activity, is investigated. In particular, multiphoton imaging shows that individual ZL NPs adsorb Zn-tetraporphyrin molecules but do not adsorb fluorescein molecules. In addition, the nonlinear spectral properties of ZL NPs in native biotissues are studied. Nonlinear microscopy and spectroscopy may open new perspectives in the research and application of ZL NPs in biomedicine, and these results may help introduce novel approaches into the clinical environment.

Keywords: multiphoton microscopy, nanoparticles, nonlinear optics, zeolite

Procedia PDF Downloads 414
1223 Metal Contamination in an E-Waste Recycling Community in Northeastern Thailand

Authors: Aubrey Langeland, Richard Neitzel, Kowit Nambunmee

Abstract:

Electronic waste, ‘e-waste’, refers generally to discarded electronics and electrical equipment, from cell phones and laptops to wires, batteries and appliances. While e-waste represents a transformative source of income in low- and middle-income countries, informal e-waste workers use rudimentary methods to recover materials, simultaneously releasing harmful chemicals into the environment and creating a health hazard for themselves and surrounding communities. Valuable materials such as precious metals, copper, aluminum, ferrous metals, plastics and components are recycled from e-waste. However, e-waste also contains toxicants of great concern to human and environmental health, including persistent organic pollutants such as polychlorinated biphenyls (PCBs) and some polybrominated diphenyl ethers (PBDEs), as well as heavy metals. The current study seeks to evaluate the environmental contamination resulting from informal e-waste recycling in a predominantly agricultural community in northeastern Thailand. To accomplish this objective, five types of environmental samples were collected between July 2016 and July 2017 and analyzed for concentrations of eight metals commonly associated with e-waste recycling. Rice samples from the community were collected after harvest and analyzed using inductively coupled plasma mass spectrometry (ICP-MS) and graphite furnace atomic absorption spectroscopy (GF-AAS). Soil samples were collected and analyzed using methods similar to those used for the rice samples. Surface water samples were collected and analyzed for three heavy metals using absorption colorimetry. Environmental air samples were collected using a sampling pump and matched-weight PVC filters, then analyzed using Inductively Coupled Argon Plasma-Atomic Emission Spectroscopy (ICAP-AES). Finally, surface wipe samples were collected from surfaces in homes where e-waste recycling activities occur and were analyzed using ICAP-AES.
Preliminary results [1] indicate that some rice samples have concentrations of lead and cadmium significantly higher than limits set by the United States Department of Agriculture (USDA) and the World Health Organization (WHO). Similarly, some soil samples show levels of copper, lead and cadmium more than twice the maximum permissible levels set by the USDA and WHO, and significantly higher than in other areas of Thailand. Surface water samples indicate that e-waste recycling activities, particularly the burning of e-waste products, increase levels of cadmium, lead and copper in nearby surface waters. This is of particular concern given that many of the surface waters tested are used to irrigate crops. Surface wipe samples revealed concentrations of metals commonly associated with e-waste, suggesting a danger of metal ingestion during cooking and other activities; such surface contamination is of particular concern for child health. Finally, air sampling showed that the burning of e-waste presents a serious health hazard to workers and the environment through inhalation and deposition [2]. Our research suggests a need for improved e-waste recycling methods that allow workers to continue this valuable revenue stream in a sustainable fashion that protects both human and environmental health. [1] Statistical analysis to be finished in October 2017 due to follow-up field studies occurring in July and August 2017. [2] Complete analytic results still pending.
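The exceedance screening described above reduces to comparing each measured concentration against its regulatory maximum and reporting the ratio. A minimal sketch follows; the limits used here are illustrative Codex-style values for rice (Cd 0.4 mg/kg, Pb 0.2 mg/kg), not the exact USDA/WHO limits applied in the study, and the sample values are invented:

```python
# Sketch of regulatory-limit screening for metals in a food sample.
# Limits are assumed, Codex-style maxima for rice; not the study's values.

LIMITS_MG_PER_KG = {"Cd": 0.4, "Pb": 0.2}

def flag_exceedances(sample_mg_per_kg, limits=LIMITS_MG_PER_KG):
    """Return {metal: measured/limit ratio} for metals above their limit.
    Metals without a defined limit are skipped."""
    return {
        metal: round(conc / limits[metal], 2)
        for metal, conc in sample_mg_per_kg.items()
        if metal in limits and conc > limits[metal]
    }

# Hypothetical rice sample (mg/kg dry weight):
sample = {"Cd": 0.9, "Pb": 0.15, "Cu": 3.1}
print(flag_exceedances(sample))  # only Cd exceeds its assumed limit
```

Reporting the ratio rather than a boolean preserves the "more than twice the maximum permissible level" comparisons made in the text.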

Keywords: e-waste, environmental contamination, informal recycling, metals

Procedia PDF Downloads 361
1222 Prediction of Fluid Induced Deformation using Cavity Expansion Theory

Authors: Jithin S. Kumar, Ramesh Kannan Kandasami

Abstract:

Geomaterials are generally porous due to the presence of discrete particles and interconnected voids. This porosity plays a critical role in many engineering applications, such as CO₂ sequestration, wellbore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern changes in porosity that in turn affect the permeability and stiffness of the medium. Injecting fluid into a geomaterial results in permeation, which exhibits small or negligible deformation of the soil skeleton, followed by cavity expansion, fingering, or fracturing (different forms of instability) under large deformation, especially when the flow rate exceeds the medium's ability to permeate the fluid. The complexity of this problem increases because the geomaterial behaves as both a solid and a fluid under certain conditions. It is therefore important to understand this multiphysics problem, in which, in addition to permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. Analytical models generally use Darcy/diffusion equations to capture fluid flow during permeation, while elastic-plastic models (Mohr-Coulomb and Modified Cam-Clay) are used to predict solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid entering the medium; very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton. The porosity changes during fluid injection and the coupled elastic-plastic deformation therefore remain poorly understood.
In this study, permeation and instabilities such as cavity and finger/fracture formation will be quantified extensively through experiments on a novel setup, supplemented by image processing techniques. This experimental study will characterize fluid flow and soil deformation under different boundary conditions. Further, a refined coupled semi-analytical model will be developed to capture the physics governing the deformation of geomaterials during fluid injection.
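As a minimal illustration of the permeation regime discussed above (a generic Darcy-flow estimate, not the study's coupled model), steady spherical Darcy flow from a small injection cavity of radius r_w into a far field at radius r_inf gives Q = 4πk(p_w − p_inf) / [μ(1/r_w − 1/r_inf)]; when the imposed injection rate exceeds this permeation capacity, pressure builds at the cavity wall and instabilities can follow. All parameter values below are assumptions:

```python
# Steady spherical Darcy flow from an injection cavity: a crude estimate of
# how much fluid the medium can permeate before cavity pressure must rise.
# All parameter values are illustrative assumptions.
import math

def injection_rate(k, mu, p_w, p_inf, r_w, r_inf):
    """Steady spherical Darcy flow rate (m^3/s) sustained by a cavity of
    radius r_w at pressure p_w, with far-field pressure p_inf at r_inf."""
    return 4 * math.pi * k * (p_w - p_inf) / (mu * (1 / r_w - 1 / r_inf))

Q = injection_rate(
    k=1e-12,   # permeability, m^2 (fine sand, assumed)
    mu=1e-3,   # fluid viscosity, Pa.s (water-like)
    p_w=5e5,   # cavity pressure, Pa
    p_inf=1e5, # far-field pore pressure, Pa
    r_w=0.01,  # cavity radius, m
    r_inf=1.0, # far-field radius, m
)
print(f"Q = {Q:.3e} m^3/s")
```

Because 1/r_w dominates 1/r_inf, the result is nearly insensitive to the far-field radius; the cavity size and permeability control the permeation capacity.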

Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model

Procedia PDF Downloads 73
1221 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of their advantages over edge-emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit beams with high power (from a few milliwatts to watts or even tens of watts) that scales with the emission area, while maintaining single-mode operation even for large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band using a honeycomb lattice. Nanowire-based PCSELs have the advantage that they can be grown on silicon platforms without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up further applications, including eye-safe sensing, lidar and long-haul optical communications. In this work, we first analyze how the lattice constant, nanowire diameter, nanowire height and hexagon side length of the honeycomb pattern can be changed to shift the operating wavelength of honeycomb-based PCSELs to the C band. Then, as a step toward an electrically pumped device, we present finite-difference time-domain (FDTD) simulation results with metal contacts on the nanowires. Results for different metals are presented in order to choose the metal that gives the device the best quality factor. The metals under consideration are those that form good ohmic contacts to p-type InGaAs, with low contact resistivity and good adhesion to the semiconductor: tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of metal thickness on the quality factor of the device for a given nanowire height.
We also investigate how the nanowire height affects the quality factor for a fixed metal thickness. Finally, the main steps in fabricating a practical device are discussed.
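The connection between lattice constant and operating wavelength can be estimated from the Γ-point band-edge condition; for a triangular Bravais lattice (which underlies the honeycomb pattern) this is a ≈ 2λ / (√3 · n_eff). This is only a first-pass scaling estimate, not the FDTD design procedure of the work, and the effective index used below is an assumed value:

```python
# Rough Gamma-point scaling of lattice constant with wavelength for a
# triangular (honeycomb Bravais) lattice: a = 2*lambda / (sqrt(3)*n_eff).
# n_eff = 2.6 is an assumed effective mode index, not a value from the work.
import math

def gamma_point_lattice_constant(wavelength_nm, n_eff):
    """Lattice constant (nm) for Gamma-point band-edge operation."""
    return 2 * wavelength_nm / (math.sqrt(3) * n_eff)

a_o = gamma_point_lattice_constant(1310, n_eff=2.6)  # O band
a_c = gamma_point_lattice_constant(1550, n_eff=2.6)  # C band
print(f"a(O band) ~ {a_o:.0f} nm, a(C band) ~ {a_c:.0f} nm")
```

Under this condition the lattice constant scales linearly with wavelength at fixed n_eff, so moving from the O band to the C band implies stretching the pattern by about 18%; in practice nanowire diameter and height also shift n_eff, which is why the full parameter sweep in the work is needed.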

Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers

Procedia PDF Downloads 12