Search results for: adaptive filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1756

166 Anti-Arthritic Effect of a Herbal Diet Formula Comprising Fruits of Rosa Multiflora and Flowers of Lonicera Japonica

Authors: Brian Chi Yan Cheng, Hui Guo, Tao Su, Xiu‐qiong Fu, Ting Li, Zhi‐ling Yu

Abstract:

Rheumatoid arthritis (RA) affects around 1% of the global population. Yet, there is still no cure for RA. Toll-like receptor 4 (TLR4) signalling has been found to be involved in the pathogenesis of RA, making it a potential therapeutic target for RA treatment. A herbal formula (RL) consisting of the fruits of Rosa multiflora (Eijitsu rose) and the flowers of Lonicera japonica (Japanese honeysuckle) has been used to treat various inflammatory disorders for more than a thousand years. Both are rich sources of nutrients and bioactive phytochemicals, which can be used in producing different food products and supplements. In this study, we evaluated the anti-arthritic effect of RL on collagen-induced arthritis (CIA) in rats and investigated the involvement of TLR4 signalling in the mode of action of RL. Anti-arthritic efficacy was evaluated using CIA rats induced by bovine type II collagen. The treatment groups were treated with RL (82.5, 165, and 330 mg/kg bw per day, p.o.) or the positive control indomethacin (0.25 mg/kg bw per day, p.o.) for 35 days. Clinical signs (hind paw volume and arthritis severity scores), changes in serum inflammatory mediators, pro-/antioxidant status, and histological and radiographic changes of the joints were investigated. Spleens and peritoneal macrophages were used to determine the effects of RL on innate and adaptive immune responses in CIA rats. The involvement of TLR4 signalling pathways in the anti-arthritic effect of RL was examined in cartilage tissue of CIA rats, murine RAW264.7 macrophages and human THP-1 monocytic cells. The severity of arthritis in the CIA rats was significantly attenuated by RL. Antioxidant status, histological score and radiographic score were efficiently improved by RL. RL also dose-dependently inhibited pro-inflammatory cytokines in the serum of CIA rats. 
RL significantly inhibited the production of various pro-inflammatory mediators and the expression and/or activity of components of the TLR4 signalling pathways in animal tissue and cell lines. RL thus possesses an anti-arthritic effect on collagen-induced arthritis in rats. The therapeutic effect of RL may be related to its inhibition of pro-inflammatory cytokines in serum. Inhibition of the TAK1/NF-κB and TAK1/MAPK pathways participates in the anti-arthritic effects of RL. This provides a pharmacological justification for the dietary use of RL in the control of various arthritic diseases. Further investigation should be done to develop RL into anti-arthritic food products and/or supplements.

Keywords: japanese honeysuckle, rheumatoid arthritis, rosa multiflora, rosehip

Procedia PDF Downloads 416
165 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder

Authors: Jessica Scher Lisa, Eric Shyman

Abstract:

Autism spectrum disorder (ASD) is a neuro-developmental disorder that is characterized by persistent deficits in social communication and social interaction (i.e. deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e. stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyperreactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main area this research has focused upon is visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper offers an analysis of findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of weak central coherence theory in favor of a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. 
In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appear to be key in both visual and lexicosemantic processing. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which lends support to the link between the striate region and the atypical organization of the lexicosemantic system in ASD.

Keywords: autism spectrum disorder, neurology, visual processing, weak coherence

Procedia PDF Downloads 105
164 Motivation of Doctors and its Impact on the Quality of Working Life

Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin

Abstract:

At the present stage of societal progress, health care is an integral part of both the economic and the social system; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is its highly qualified health professionals, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. A doctor's professionalism is a collection of many components, with an essential role played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation as an expression of basic human needs that have passed through the 'filter' of the worldview and values learned by the individual in the process of socialization, prompting certain actions designed to achieve an expected result. From this point of view, a number of researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); 2. the need for independence (the ability to make one's own choices in contentious situations arising in the course of carrying out specialist functions); 3. the need for belonging (in the case of health care workers, to the profession and, accordingly, to the doctor's high status in the eyes of the public). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is maximizing their own profit. In the case of health professionals, this duality of motivational structure creates an additional contrast with the public image of the ideal physician: usually an altruistically minded person who thinks not primarily of their own benefit but of helping others. 
In this context, the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 polled more than 200 medical students starting their courses; the primary motivation for choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the respondents' opinions may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be built on motivational foundations laid during training in higher education.

Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure

Procedia PDF Downloads 337
163 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations. 
One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction by the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
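The surrogate-plus-robustness idea in the first two steps above can be sketched in a few lines. The snippet below is a minimal one-dimensional illustration, not the authors' implementation: the black-box response, the parameter bounds, and the polynomial degree are all assumptions made for the example, and NumPy's Chebyshev utilities stand in for the adaptive-sparse PCE machinery.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical black-box response (stand-in for a simulated chassis indicator).
def black_box(x):
    return np.sin(2 * x) + 0.3 * x**2

# Step 1: initial design of experiments over the uncertain-but-bounded parameter.
lo, hi = -1.0, 1.0
x_doe = np.linspace(lo, hi, 15)
y_doe = black_box(x_doe)

# Fit a Chebyshev polynomial surrogate (the "response surface").
coeffs = C.chebfit(x_doe, y_doe, deg=6)

# Step 2: use the cheap surrogate to estimate the response's uncertainty
# interval over the bounded parameter range; the spread serves as a
# robustness objective (smaller spread = more robust).
x_dense = np.linspace(lo, hi, 1001)
y_hat = C.chebval(x_dense, coeffs)
interval = (y_hat.min(), y_hat.max())
robustness = interval[1] - interval[0]
```

In the actual study the response would be a durability or comfort indicator evaluated by simulation over several uncertain parameters, with the expansion built adaptively and sparsely rather than by a fixed-degree least-squares fit.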

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 136
162 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction, and hence the present research focuses on fluid-structure interaction. First, the effect of free vibration of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm and a thickness of 2 mm. For the same panel, the effects of stiffener cross-sectional area and stiffener location are studied. The stiffener spacing is varied along both the chordwise and spanwise directions. Then, for that optimal location, the ideal stiffener length is identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity has also been investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity, the deformation goes on increasing, but the stiffness of the system tries to dampen the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and the panel loses its equilibrium. This is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out, and the modal frequency values remain the same for element lengths below 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement. 
The maximum flutter velocity achieved for chordwise placement is 204 m/s, while for a spanwise arrangement it is augmented to 963 m/s for stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord on either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed for stiffener lengths varying from 50% to 60% of the span. The maximum flutter velocity achieved is above Mach 3. It is also observed that for a stiffened panel, the full effect of the stiffener can be achieved only when the stiffener end is clamped. Stiffeners with a Z cross-section incremented the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel.

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-section shape

Procedia PDF Downloads 272
161 Management and Genetic Characterization of Local Sheep Breeds for Better Productive and Adaptive Traits

Authors: Sonia Bedhiaf-Romdhani

Abstract:

The sheep (Ovis aries) was domesticated approximately 11,000 years before present in the Fertile Crescent from the Asian mouflon (Ovis orientalis). Northern African (NA) sheep, some 7,000 years old, represent a remarkable diversity of populations reared under traditional and low-input farming systems (LIFS) over millennia. The majority of small ruminants in developing countries are found in low-input production systems, and the resilience of local communities in rural areas is often linked to the wellbeing of small ruminants. Regardless of the rich biodiversity encountered among sheep ecotypes, there are four main sheep breeds in the country, with the Barbarine (a fat-tailed breed) and the Queue Fine de l’Ouest (a thin-tailed breed) accounting for 61.6 and 35.4 percent, respectively. The Phoenicians introduced the Barbarine sheep from the steppes of Central Asia in the Carthaginian period, 3,000 years ago. The Queue Fine de l’Ouest is a thin-tailed meat breed heavily concentrated in the western and central semi-arid regions. The Noire de Thibar, a composite black-coated breed producing mutton and fine wool, is found in the northern sub-humid region; it has been on the verge of extinction because of its higher nutritional requirements and intolerance of the prevailing harsher conditions. The D'Man breed, originating from Morocco, is mainly located in the southern oases of the extremely arid ecosystem. A genetic investigation of Tunisian sheep breeds using a genome-wide scan of approximately 50,000 SNPs was performed. Genetic analysis of the relationships between breeds highlighted the differentiation of the Noire de Thibar breed from the other local breeds, reflecting past introgression of a European gene pool. The Queue Fine de l’Ouest breed showed genetic heterogeneity and was close to the Barbarine. The D'Man breed shared considerable gene flow with the thin-tailed Queue Fine de l'Ouest breed. 
Native small ruminant breeds are capable of being efficiently productive if the essential ingredients and coherent breeding schemes are implemented and followed. Assessing the status of genetic variability of native sheep breeds could provide important clues for researchers and policy makers to devise better strategies for the conservation and management of genetic resources.

Keywords: sheep, farming systems, diversity, SNPs

Procedia PDF Downloads 127
160 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity of their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. 
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 17
159 Mapping the Urban Catalytic Trajectory for 'Convention and Exhibition' Projects: A Case of India International Convention and Expo Centre, New Delhi

Authors: Bhavana Gulaty, Arshia Chaudhri

Abstract:

Great civic projects contribute integrally to a city, and every city undergoes a recurring cycle of urban transformations and regeneration through their insertion. The M.I.C.E. (Meetings, Incentives, Conventions and Exhibitions) industry is the forbearer of one category of such catalytic civic projects. Through a specific focus on M.I.C.E. destinations, this paper illustrates the multifarious dimensions in which urban catalysts impact the city. S.P.U.R. (Seed. Profile. Urbane. Reflections), the theoretical framework of this paper, aims to unearth these dimensions in the realm of the COEX (Convention & Exhibition) biosphere. The ‘COEX Biosphere’ is the filter for such catalysts, these being ecosystems unto themselves. Like a ripple in water, the impact of these strategic interventions focusing on art, culture, trade, and promotion expands right from the trigger: from the immediate context to the region, and subsequently to the global scale. These ripples are known to bring about significant economic, social, political and network changes. The COEX inventory in the Asian context has one prominent addition: the proposed India International Convention and Exhibition Centre (IICC) at New Delhi. It is envisioned to be the largest facility in Asia and would position India on the global M.I.C.E. map. With the first phase of the project scheduled to open for use at the end of 2019, this flagship project of the Government of India is projected to cater to a peak daily footfall of 320,000 visitors and estimated to generate 500,000 jobs. While the economic benefits are yet to manifest in real time and ‘Good design is good business’ holds true, for the urban transformation to be meaningful, the benefits have to go beyond just a balance sheet for the city’s exchequer. This aspect has been found relatively unexplored in research on these developments. The methodology for investigation comprises two steps. 
The first is establishing an inventory of the global success stories and associated benefits of COEX projects over the past decade. The rationale for capping the timeframe is the significant paradigm shift observed in their recent conceptualization; for instance, the ‘Innovation Districts’ conceptualised in the city of Albuquerque that converge into the global economy. The second step entails a comparative benchmarking of the projected transformations by IICC through a toolkit of parameters. This is posited to yield a matrix that can form the test bed for mapping the catalytic trajectory for projects in the pipeline globally. As a ready reckoner, it purports to be a catalyst to substantiate decision making at the planning stage itself for future projects in similar contexts.

Keywords: catalysts, COEX, M.I.C.E., urban transformations

Procedia PDF Downloads 134
158 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring stagnant wealth into the global market. However, most efforts to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent's environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. 
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
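The agent-environment loop described above can be sketched as a minimal trading environment with a continuous action space (target portfolio weights). This is an illustrative reconstruction, not the authors' code: the price series, the proportional transaction cost, and the log-return reward are assumptions, and the observation omits the technical indicators and news sentiment the paper adds to the state.

```python
import numpy as np

class PortfolioEnv:
    """Minimal sketch of a multi-asset trading environment with a
    continuous action space. Prices, costs, and reward are illustrative."""

    def __init__(self, prices, cost=0.001):
        self.prices = np.asarray(prices, dtype=float)  # shape (T, n_assets)
        self.cost = cost                               # proportional transaction cost
        self.reset()

    def reset(self):
        self.t = 0
        self.weights = np.zeros(self.prices.shape[1])  # start fully in cash
        self.value = 1.0
        return self._observe()

    def _observe(self):
        # A fuller state would append technical indicators and sentiment;
        # here the observation is just current prices and holdings.
        return np.concatenate([self.prices[self.t], self.weights])

    def step(self, action):
        # Continuous action: target weights, normalized onto the simplex.
        target = np.clip(action, 0.0, None)
        if target.sum() > 0:
            target = target / target.sum()
        turnover = np.abs(target - self.weights).sum()
        self.value *= 1.0 - self.cost * turnover       # pay transaction costs
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        growth = 1.0 + float(target @ ret)             # portfolio return
        self.value *= growth
        self.weights = target
        self.t += 1
        reward = np.log(growth)                        # log-return reward
        done = self.t >= len(self.prices) - 1
        return self._observe(), reward, done
```

A TD3 agent (e.g. from a standard DRL library) would then be trained by repeatedly calling reset() and step() on such an environment, with the continuous action re-allocating the portfolio at every step.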

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 158
157 Isolation of Nitrosoguanidine Induced NaCl Tolerant Mutant of Spirulina platensis with Improved Growth and Phycocyanin Production

Authors: Apurva Gupta, Surendra Singh

Abstract:

Spirulina spp., as a promising source of many commercially valuable products, is grown photoautotrophically in open ponds and raceways on a large scale. However, its economic exploitation in open systems has been limited by the lack of multiple stress-tolerant strains. The present study aims to isolate a stable stress-tolerant mutant of Spirulina platensis with improved growth rate and enhanced potential to produce its commercially valuable bioactive compounds. N-methyl-N'-nitro-N-nitrosoguanidine (NTG) at 250 μg/mL (the concentration permitting 1% survival) was employed for chemical mutagenesis to generate random mutants, which were screened against NaCl. In a preliminary experiment, wild type S. platensis was treated with NaCl concentrations from 0.5-1.5 M to calculate its LC₅₀. Mutagenized colonies were then screened for tolerance at 0.8 M NaCl (LC₅₀), and the surviving colonies were designated as NaCl-tolerant mutants of S. platensis. The mutant cells exhibited 1.5 times improved growth under NaCl stress compared to the wild type strain under control conditions. This might be due to the ability of the mutant cells to protect their metabolic machinery against the inhibitory effects of salt stress. Salt stress is known to adversely affect the rate of photosynthesis in cyanobacteria by causing degradation of the pigments. Interestingly, the mutant cells were able to protect their photosynthetic machinery and exhibited 4.23 and 1.72 times enhanced accumulation of Chl a and phycobiliproteins, respectively, which resulted in enhanced rates of photosynthesis (2.43 times) and respiration (1.38 times) under salt stress. Phycocyanin production in mutant cells was enhanced 1.63-fold. Nitrogen metabolism plays a vital role in conferring halotolerance on cyanobacterial cells through the influx of nitrate and efflux of Na+ ions from the cell. The NaCl-tolerant mutant cells took up 2.29 times more nitrate than the wild type and efficiently reduced it. 
Nitrate reductase and nitrite reductase activities in the mutant cells also improved by 2.45 and 2.31 times, respectively, under salt stress. From these preliminary results, it can be deduced that enhanced nitrogen uptake and its efficient reduction might be a reason for the adaptive, halotolerant behavior of the S. platensis mutant cells. Also, the NaCl-tolerant mutant of S. platensis, with significantly improved growth and phycocyanin accumulation compared to the wild type, can be commercially promising.

Keywords: chemical mutagenesis, NaCl tolerant mutant, nitrogen metabolism, photosynthetic machinery, phycocyanin

Procedia PDF Downloads 151
156 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality, and nitrogen pollution control has become a global concern. Currently, the water pollution situation in China is still not optimistic. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex composition, high toxicity, and high concentration. Many studies have shown that autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen without a carbon source, through their functional genes, to achieve total nitrogen removal, which is very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. A process composed of short-range nitrification coupled with anammox ensures the removal of total nitrogen and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. A continuous-flow process for treating mature leachate [an up-flow anaerobic sludge blanket (UASB) reactor and an anoxic/oxic (A/O)–anaerobic ammonia oxidation (ANAOR, or anammox) reactor] has been developed to achieve autotrophic deep nitrogen removal. In this process, optimal parameters such as hydraulic retention time and nitrification flow rate have been obtained and applied to achieve rapid start-up, stable operation, and high removal efficiency of the process system. Besides, identifying the characteristics of the microbial community during start-up of the anammox process and analyzing its microbial ecological mechanism provide a basis for enriching the anammox community under high environmental stress. 
One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the water treated is closer to actual landfill leachate. However, new high-throughput sequencing technology still needs to be utilized to analyze the changes in microbial diversity of this system and the related functional genera and genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization.

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification

Procedia PDF Downloads 28
155 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review

Authors: Qiyao Han, Xianhai Meng

Abstract:

Urban infrastructures refer to the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world are facing enormous challenges from natural and man-made disasters, such as floods, earthquakes and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in its social, economic and environmental dimensions) supported by infrastructure systems under disturbance. It is supposed that urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. It is believed that the ability of an ecological system to maintain structure and function in the face of disturbance and to reorganize following disturbance-driven change is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems because they are complex and adaptive, are composed of interconnected components, and exhibit characteristic scaling properties. 
Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses fractal characteristics of ecosystem resilience, reviews literature related to system-level infrastructure resilience, identifies resilience criteria associated with sustainability dimensions, and develops a conceptual analysis framework. Exploration of the relevance of the identified criteria to fractal characteristics reveals a great potential to analyze infrastructure systems based on fractal theory. In the conceptual analysis framework, it is proposed that in order to be resilient, urban infrastructure systems need to be capable of “maintaining” and “reorganizing” multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.

Keywords: fractal, urban infrastructure, sustainability, system-level resilience

Procedia PDF Downloads 251
154 Investigating Reading Comprehension Proficiency and Self-Efficacy among Algerian EFL Students within Collaborative Strategic Reading Approach and Attributional Feedback Intervention

Authors: Nezha Badi

Abstract:

It has been shown in the literature that Algerian university students suffer from low levels of reading comprehension proficiency, which hinder their overall proficiency in English. This low level is mainly related to the methodology of teaching reading employed by the teacher in the classroom (a teacher-centered environment), as well as to students’ poor sense of self-efficacy in undertaking reading comprehension activities. Arguably, what is needed is an approach for enhancing students’ self-beliefs about their abilities to deal with different reading comprehension activities. This can be done by providing them with opportunities to take responsibility for their own learning (learner autonomy). As a result of learning autonomy, learners’ beliefs about their abilities to deal with certain language tasks may increase, and hence, their language learning ability. Therefore, this experimental research study attempts to assess the extent to which an integrated approach combining one particular reading approach known as ‘collaborative strategic reading’ (CSR) and teacher’s attributional feedback (on students’ reading performance and strategy use) can improve the reading comprehension skill and the sense of self-efficacy of Algerian EFL university students. It also seeks to examine students’ main reasons for their successful or unsuccessful achievements in reading comprehension activities, and whether students’ attributions for their reading comprehension outcomes can be modified after exposure to the instruction. To obtain the data, different tools including a reading comprehension test, questionnaires, observation, an interview, and learning logs were used with 105 second-year Algerian EFL university students.
The sample of the study was divided into three groups: one control group (with no treatment), one experimental group (CSR group) which received a CSR instruction, and a second intervention group (CSR Plus group) which received teacher’s attributional feedback in addition to the CSR intervention. Students in the CSR Plus group received the same experiment as the CSR group using the same tools, except that they were asked to keep learning logs, on which teacher’s feedback on reading performance and strategy use was provided. The results of this study indicate that the CSR and attributional feedback intervention was effective in improving students’ reading comprehension proficiency and sense of self-efficacy. However, there was not a significant change in students’ adaptive and maladaptive attributions for their successes and failures from the pre-test to the post-test phase. Analysis of the perception questionnaire, the interview, and the learning logs shows that students have positive perceptions of the CSR and attributional feedback instruction. Based on the findings, this study, therefore, seeks to provide EFL teachers in general and Algerian EFL university teachers in particular with pedagogical implications on how to teach reading comprehension to their students to help them achieve well and feel more self-efficacious in reading comprehension activities, and in English language learning more generally.

Keywords: attributions, attributional feedback, collaborative strategic reading, self-efficacy

Procedia PDF Downloads 99
153 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, even when an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost-functional J, which measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
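The iterative structure described above can be sketched on a simpler parabolic problem. The following minimal Python example (an illustrative analogue, not the authors' solver) applies the same adjoint-gradient idea to the 1D heat equation, whose discrete operator is symmetric so the adjoint transport is itself a stable forward diffusion; the grid size, diffusivity, learning rate and regularizer weight are all assumed values.

```python
import numpy as np

# Minimal 1D heat-equation analogue of the adjoint gradient method (AGM).
# All names and parameter values are illustrative assumptions.
N, dt, nu, steps = 64, 1e-3, 0.1, 50
x = np.linspace(0, 1, N, endpoint=False)
dx2 = (1.0 / N) ** 2

def laplacian(u):
    return np.roll(u, -1) - 2 * u + np.roll(u, 1)  # periodic second difference

def forward(u0):
    """Integrate u_t = nu * u_xx forward in time (stable, diffusive)."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * nu * laplacian(u) / dx2
    return u

def adjoint(r):
    """Transport the data misfit backwards with the adjoint operator.
    The discrete heat operator is symmetric, so the adjoint transport
    equals the forward integration (still stable)."""
    return forward(r)

# Target field v1: the observed state at the final time.
true_u0 = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
v1 = forward(true_u0)

# AGM: iterate u0 along the gradient of
#   J = 0.5*||u1 - v1||^2 + 0.5*reg*||L u0||^2.
u0 = np.zeros(N)
lr, reg = 0.5, 1e-6
for _ in range(300):
    u1 = forward(u0)
    grad = adjoint(u1 - v1)                  # data-misfit gradient
    grad += reg * laplacian(laplacian(u0))   # high-order smoothing regularizer
    u0 -= lr * grad
```

Because each gradient evaluation only ever integrates a diffusive (stable) equation, round-off perturbations are damped rather than amplified, which is the core advantage the abstract attributes to the AGM.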

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 202
152 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. We used several steps of data analysis to classify and identify microbes. All single particles were analysed by the parameters of light scattering and fluorescence in the following steps. (1) They were treated with a smart filter block to get rid of non-microbes. (2) By a classification algorithm, we verified that the filtered particles were microbes, based on the calibration data. (3) The probability threshold (defined by the user) step provides the probability of being microbes, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously, based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). By using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. The further classification suggests a precision of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively.
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real-time. The bacteria and fungi were aerosolized again in the chamber at different concentrations. Rapid-E+ can classify different types of microbes and then quantify them in real-time. Rapid-E+ can also identify pollen down to the species level with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen, but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
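The analysis steps listed in the abstract (classification followed by a user-defined probability threshold) can be illustrated with a generic classifier on synthetic data. This is a hedged sketch, not Plair's proprietary algorithm: the Gaussian feature model, class shifts, and the 0.8 threshold are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-particle features the abstract lists:
# 16-channel fluorescence intensities (real data would add scattering
# patterns and fluorescence lifetime). Class 0 = background,
# 1 = B. subtilis, 2 = P. chrysogenum. Shifts are arbitrary.
def synth(n, shift):
    return rng.normal(loc=shift, scale=1.0, size=(n, 16))

X = np.vstack([synth(300, 0.0), synth(300, 1.5), synth(300, 3.0)])
y = np.repeat([0, 1, 2], 300)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def identify(particles, threshold=0.8):
    """Classify particles, then keep only identifications whose class
    probability exceeds a user-defined threshold (step 3 in the text)."""
    proba = clf.predict_proba(particles)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1  # -1 = 'unidentified'
    return labels

# New particles: 50 background-like, 50 P. chrysogenum-like.
new = np.vstack([synth(50, 0.0), synth(50, 3.0)])
labels = identify(new)
```

Particles that the classifier cannot assign with sufficient confidence are left unidentified rather than forced into a class, mirroring the user-defined probability-threshold step.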

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 69
151 Investigation of Municipal Solid Waste Incineration Filter Cake as Minor Additional Constituent in Cement Production

Authors: Veronica Caprai, Katrin Schollbach, Miruna V. A. Florea, H. J. H. Brouwers

Abstract:

Nowadays, MSWI (Municipal Solid Waste Incineration) bottom ash (BA) produced by Waste-to-Energy (WtE) plants represents the majority of the solid residues derived from MSW incineration. Once processed, the BA is often landfilled, resulting in possible environmental problems, additional costs for the plant and increasing occupation of public land. In order to limit this phenomenon, European countries such as the Netherlands support the utilization of MSWI BA in the construction field by providing standards for the leaching of contaminants into the environment (Dutch Soil Quality Decree). Commonly, BA has a particle size below 32 mm and a heterogeneous chemical composition, depending on its source. By washing coarser BA, an MSWI sludge is obtained. It is characterized by a high content of heavy metals, chlorides, and sulfates as well as a reduced particle size (below 0.25 mm). To lower its environmental impact, MSWI sludge is filtered or centrifuged to remove easily soluble contaminants, such as chlorides, yielding a filter cake (FC). However, the presence of heavy metals is not easily reduced, compromising its possible application. For lowering the leaching of those contaminants, the use of MSWI residues in combination with cement represents a precious option, due to the known retention of those ions in the hydrated cement matrix. Among the applications, the European standard for common cement EN 197-1:1992 allows the incorporation of up to 5% by mass of a minor additional constituent (MAC), such as fly ash or blast furnace slag, but also an unspecified filler, into cement. To the best of the authors' knowledge, although FC is widely available, has an appropriate particle size and has a chemical composition similar to that of cement, it has not been investigated as a possible MAC in cement production. Therefore, this paper will address the suitability of MSWI FC as a MAC for CEM I 52.5 R, within a 5% maximum replacement by mass.
After physical and chemical characterization of the raw materials, the crystal phases of the pastes are determined by XRD for 3 replacement levels (1%, 3%, and 5%) at different ages. Thereafter, the impact of FC on the mechanical and environmental performance of cement is assessed according to EN 196-1 and the Dutch Soil Quality Decree, respectively. The investigation of the reaction products evidences the formation of layered double hydroxides (LDH) in the early stage of the reaction. Mechanically, the presence of FC results in a reduction of the 28-day compressive strength by 8% for a replacement of 5% wt., compared with pure CEM I 52.5 R without any MAC. In contrast, the flexural strength is not affected by the presence of FC. Environmentally, the Dutch legislation for the leaching of contaminants from unshaped (granular) material is satisfied. Based on the collected results, FC represents a suitable candidate as a MAC in cement production.

Keywords: environmental impact evaluation, minor additional constituent, MSWI residues, X-ray diffraction crystallography

Procedia PDF Downloads 149
150 Development of a Method for Detecting Low Concentrations of Organophosphate Pesticides in Vegetables Using Near Infrared Spectroscopy

Authors: Atchara Sankom, Warapa Mahakarnchanakul, Ronnarit Rittiron, Tanaboon Sajjaanantakul, Thammasak Thongket

Abstract:

Vegetables are frequently contaminated with pesticide residues, making them one of the greatest food safety concerns among agricultural products. The objective of this work was to develop a method to detect organophosphate (OP) pesticide residues in vegetables using the Near Infrared (NIR) spectroscopy technique. Low concentrations (ppm) of OP pesticides in vegetables were investigated. The experiment was divided into 2 sections. In the first section, Chinese kale spiked with different concentrations of chlorpyrifos pesticide residues (0.5-100 ppm) was chosen as the sample model to determine the appropriate conditions of sample preparation, both for solution and solid samples. The spiked samples were extracted with acetone. The sample extracts were applied as solution samples, while the solid samples were prepared by the dry-extract system for infrared (DESIR) technique. The DESIR technique was performed by embedding the solution sample on filter paper (GF/A) and then drying. The NIR spectra were measured in the transflectance mode over the wavenumber region of 12,500-4,000 cm⁻¹. The QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS) was performed as the standard method. The results from the first section showed that the DESIR technique with NIR spectroscopy gave an accurate calibration result, with an R² of 0.93 and an RMSEP of 8.23 ppm. However, in the case of solution samples, the prediction based on the NIR-PLSR (partial least squares regression) equation showed poor performance (R² = 0.16 and RMSEP = 23.70 ppm). In the second section, the DESIR technique coupled with NIR spectroscopy was applied to the detection of OP pesticides in vegetables. Vegetables (Chinese kale, cabbage and hot chili) were spiked with OP pesticides (chlorpyrifos, ethion and profenofos) at different concentrations ranging from 0.5 to 100 ppm.
Solid samples were prepared (based on the DESIR technique), and then the samples were scanned by an NIR spectrophotometer at ambient temperature (25 ± 2 °C). The NIR spectra were measured as in the first section. The NIR-PLSR gave the best calibration equation for detecting low concentrations of chlorpyrifos residues in vegetables (Chinese kale, cabbage and hot chili), with prediction-set R² and RMSEP of 0.85-0.93 and 8.23-11.20 ppm, respectively. For ethion residues, the best NIR-PLSR calibration equation showed an R² and RMSEP of 0.88-0.94 and 7.68-11.20 ppm, respectively. Similarly, for profenofos, the NIR-PLSR gave the best calibration equation for detecting residues in vegetables, with an R² and RMSEP of 0.88-0.97 and 5.25-11.00 ppm, respectively. Moreover, the calibration equations developed in this work could rapidly predict the concentrations of OP pesticide residues (0.5-100 ppm) in vegetables, and there was no significant difference between NIR-predicted values and actual values (data from GC-MS) at a confidence interval of 95%. In this work, the proposed method using NIR spectroscopy involving the DESIR technique has proved to be an efficient method for the screening detection of OP pesticide residues at low concentrations, and thus increases the food safety potential of vegetables for domestic and export markets.

Keywords: NIR spectroscopy, organophosphate pesticide, vegetable, food safety

Procedia PDF Downloads 135
149 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates have shown that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, related services have become a high priority in Bangladesh. A sensory deficit hampers not only the normal development of a child but also the learning process and functional independence. The purpose of this study was to find out the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research work. This study enrolled eighty children with autism and their parents by using the systematic sampling method. In this study, data were collected through the Short Sensory Profile (SSP) assessment tool, which consists of 38 items in the questionnaire, and qualified graduate Occupational Therapists were directly involved in interviewing parents as well as observing child responses to sensory-related activities in four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction among those children, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. This study revealed that almost 78.25% of children with autism had significant sensory processing dysfunction based on their sensory responses to relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them.
On the other hand, most of them (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. Besides, the results also show that 64 children had a definite difference in sensory processing; these children suffered from sensory difficulties, which had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight regarding the types of sensory processing dysfunction to consider during diagnosis and when ascertaining treatment. Early identification of sensory problems is therefore very important and will help to provide appropriate sensory input to minimize maladaptive behavior and help children reach the normal range of adaptive behavior.
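As a small illustration of how SSP totals map onto the "typical/probable difference/definite difference" language used above, the function below applies the commonly published cut-offs for the SSP total raw score (38 items, each scored 1-5, so totals range from 38 to 190). The band boundaries are quoted from memory of the published instrument and should be verified against the SSP manual before any clinical use.

```python
def ssp_total_category(total):
    """Classify a Short Sensory Profile total raw score into the
    reporting bands used in the abstract. Cut-offs (155 and 142) are
    the commonly published SSP bands; verify against the manual."""
    if not 38 <= total <= 190:
        raise ValueError("SSP total must be between 38 and 190")
    if total >= 155:
        return "typical performance"
    if total >= 142:
        return "probable difference"
    return "definite difference"
```

A lower total indicates more reported sensory difficulty, which is why the "definite difference" band sits at the bottom of the score range.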

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 37
148 Ambivalence, Denial, and Adaptive Responses to Vulnerable Suspects in Police Custody: The New Limits of the Sovereign State

Authors: Faye Cosgrove, Donna Peacock

Abstract:

This paper examines current state strategies for dealing with vulnerable people in police custody and identifies the underpinning discourses and practices which inform these strategies. It has previously been argued that the state has utilised contradictory and conflicting responses to the control of crime, by employing opposing strategies of denial and adaptation in order to simultaneously both display sovereignty and disclaim responsibility. This paper argues that these contradictory strategies are still being employed in contemporary criminal justice, although the focus and the purpose have now shifted. The focus is upon the ‘vulnerable’ suspect, whose social identity is as incongruous, complex and contradictory as his social environment, and the purpose is to redirect attention away from negative state practices, whilst simultaneously displaying a compassionate and benevolent countenance in order to appeal to the voting public. The findings presented here result from intensive qualitative research with police officers, with health care professionals, and with civilian volunteers who work within police custodial environments. The data has been gathered over a three-year period and includes observational and interview data which has been thematically analysed to expose the underpinning mechanisms from which the properties of the system emerge. What is revealed is evidence of contemporary state practices of denial relating to the harms of austerity and the structural relations of vulnerability, whilst simultaneously adapting through processes of ‘othering’ of the vulnerable, ‘responsibilisation’ of citizens, defining deviance down through diversionary practices, and managing success through redefining the aims of the system. The ‘vulnerable’ suspect is subject to individual pathologising, and yet the nature of risk is aggregated. 
‘Vulnerable’ suspects are supported in police custody by private citizens, by multi-agency partnerships, and by for-profit organisations, while the state seeks to collate and control services, and thereby to retain a veneer of control. Late modern ambivalence to crime control and the associated contradictory practices of abjuration and adjustment have extended to state responses to vulnerable suspects. The support available in the custody environment operates to control and minimise operational and procedural risk, rather than for the welfare of the detained person, and in fact, the support available is discovered to be detrimental to the very people that it claims to benefit. The ‘vulnerable’ suspect is now subject to the bifurcated logics employed at the new limits of the sovereign state.

Keywords: custody, policing, sovereign state, vulnerability

Procedia PDF Downloads 146
147 Ecosystem Approach in Aquaculture: From Experimental Recirculating Multi-Trophic Aquaculture to Operational System in Marsh Ponds

Authors: R. Simide, T. Miard

Abstract:

Integrated multi-trophic aquaculture (IMTA) is used to reduce waste from aquaculture and increase productivity through co-cultured species. In this study, we designed a recirculating multi-trophic aquaculture system which requires low energy consumption, low water renewal and easy care. European seabass (Dicentrarchus labrax) were raised with co-cultured sea urchins (Paracentrotus lividus), detritivorous polychaetes fed on settled particulate matter, mussels (Mytilus galloprovincialis) used to extract suspended matter, macroalgae (Ulva sp.) used to take up dissolved nutrients, and gastropods (Phorcus turbinatus) used to clean the series of 4 tanks from fouling. The experiment was performed in triplicate during one month in autumn under an experimental greenhouse at the Institut Océanographique Paul Ricard (IOPR). Thanks to the absence of a physical filter, no pump was needed to pressurize water, and the water flow was driven by a single air-lift followed by gravity flow. Total suspended solids (TSS), biochemical oxygen demand (BOD5), turbidity, phytoplankton estimation and dissolved nutrients (ammonium NH₄⁺, nitrite NO₂⁻, nitrate NO₃⁻ and phosphorus PO₄³⁻) were measured weekly, while dissolved oxygen and pH were continuously recorded. Dissolved nutrients stayed under the detection threshold during the experiment. BOD5 decreased between the fish and macroalgae tanks. TSS increased strongly after 2 weeks and then decreased at the end of the experiment. Those results show that bioremediation can be used effectively in an aquaculture system to keep optimum growing conditions. Fish were the only species fed an external product (commercial fish pellets) in the system. The other (extractive) species were fed by waste streams from the tank above, or, for the sea urchin, by Ulva produced by the system. In this way, between fish aquaculture alone and the addition of the extractive species, the biomass productivity increased by a factor of 5.7.
In other words, the food conversion ratio dropped from 1.08 with fish only to 0.189 including all species. This experimental recirculating multi-trophic aquaculture system was efficient enough to reduce waste and increase productivity. In a second phase, this technology was reproduced at a commercial scale. The IOPR, in collaboration with the Les 4 Marais company, ran a recirculating IMTA for 6 months in 8000 m² of water allocated between 4 marsh ponds. A similar air-lift and gravity recirculating system was designed, and only one fed species, a shrimp (Palaemon sp.), was grown alongside 3 extractive species. Thanks to this joint work at the laboratory and commercial scales, we will be able to challenge the IMTA system and discuss this sustainable aquaculture technology.
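The reported drop in food conversion ratio implies the productivity factor directly, since FCR = feed input / biomass produced. A quick check (the feed mass is an arbitrary illustrative value; only the ratio matters):

```python
# Food conversion ratio (FCR) = feed given / biomass produced,
# so biomass produced = feed / FCR.
feed = 100.0                            # kg of commercial pellet (illustrative)
fish_biomass = feed / 1.08              # fish-only system, FCR = 1.08
total_biomass = feed / 0.189            # all co-cultured species, FCR = 0.189
factor = total_biomass / fish_biomass   # productivity gain from extractive species
```

The computed factor is about 5.7, matching the biomass-productivity increase stated in the abstract, because the same feed input now supports the fish plus all extractive species.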

Keywords: bioremediation, integrated multi-trophic aquaculture (IMTA), laboratory and commercial scales, recirculating aquaculture, sustainable

Procedia PDF Downloads 136
146 Detection of Some Drugs of Abuse from Fingerprints Using Liquid Chromatography-Mass Spectrometry

Authors: Ragaa T. Darwish, Maha A. Demellawy, Haidy M. Megahed, Doreen N. Younan, Wael S. Kholeif

Abstract:

Testing for drugs of abuse is essential in order to confirm the misuse of drugs. Several analytical approaches have been developed for the detection of drugs of abuse in pharmaceutical and common biological samples, but few methodologies have been created to identify them from fingerprints. Liquid Chromatography-Mass Spectrometry (LC-MS) plays a major role in this field. The current study aimed at assessing the possibility of detecting some drugs of abuse (tramadol, clonazepam, and phenobarbital) from the fingerprints of drug abusers using LC-MS. The aim was extended to assessing the possibility of detecting the above-mentioned drugs in the fingerprints of drug handlers up to three days after handling the drugs. The study was conducted on randomly selected adult individuals who were either drug abusers seeking treatment at centers of drug dependence in Alexandria, Egypt, or normal volunteers who were asked to handle the different studied drugs (drug handlers). An informed consent was obtained from all individuals. Participants were classified into 3 groups: a control group that consisted of 50 normal individuals (neither abusing nor handling drugs), a drug abuser group that consisted of 30 individuals who abused tramadol, clonazepam or phenobarbital (10 individuals for each drug), and a drug handler group that consisted of 50 individuals who touched either the powder of the drugs of abuse, tramadol, clonazepam or phenobarbital (10 individuals for each drug), or the powder of the control substances, acetylsalicylic acid and acetaminophen (10 individuals for each drug), which are of similar appearance (white powder) and might be used in the adulteration of drugs of abuse. Samples were taken from the handler individuals for three consecutive days for the same individual. The diagnosis of drug abusers was based on the current Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and urine screening tests using an immunoassay technique.
Preliminary drug screening tests of urine samples were also done for the drug handler and control groups to indicate the presence or absence of the studied drugs of abuse. Fingerprints of all participants were then taken on a filter paper previously soaked with methanol, to be analyzed by LC-MS using a SCIEX Triple Quad or QTRAP 5500 system. The concentration of drugs in each sample was calculated using the regression equations between concentration in ng/ml and peak area of each reference standard. All fingerprint samples from drug abusers showed positive results with LC-MS for the tested drugs, while all samples from the control individuals showed negative results. A significant association was noted between the concentration of the drugs and the duration of abuse. Tramadol, clonazepam, and phenobarbital were also successfully detected from the fingerprints of drug handlers up to 3 days after handling the drugs. The mean concentration of the chosen drugs of abuse among the handler group decreased as the number of days after handling increased.
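The concentration calculation described above follows a standard external-calibration scheme: fit a line between known standard concentrations and their peak areas, then invert it for unknowns. The sketch below uses entirely hypothetical calibration values, not the study's data.

```python
import numpy as np

# Hypothetical calibration: peak areas measured for reference standards
# of known concentration (ng/mL). Values are illustrative only.
ref_conc = np.array([10, 25, 50, 100, 200], dtype=float)    # ng/mL
ref_area = np.array([1.1e4, 2.6e4, 5.2e4, 1.05e5, 2.08e5])  # LC-MS peak areas

# Linear regression: area = slope * concentration + intercept
slope, intercept = np.polyfit(ref_conc, ref_area, 1)

def area_to_conc(area):
    """Invert the calibration line: concentration = (area - b) / m."""
    return (area - intercept) / slope

# Concentration of an unknown fingerprint extract from its peak area.
sample_conc = area_to_conc(7.8e4)
```

In practice each analyte (tramadol, clonazepam, phenobarbital) gets its own calibration curve, and the curve's linear range must bracket the expected sample concentrations.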

Keywords: drugs of abuse, fingerprints, liquid chromatography–mass spectrometry, tramadol

Procedia PDF Downloads 95
145 Natural Mexican Zeolite Modified with Iron to Remove Arsenic Ions from Water Sources

Authors: Maritza Estela Garay-Rodriguez, Mirella Gutierrez-Arzaluz, Miguel Torres-Rodriguez, Violeta Mugica-Alvarez

Abstract:

Arsenic is an element present in the earth's crust and is dispersed in the environment through natural processes and some anthropogenic activities. It is naturally released into the environment through the weathering and erosion of sulphide minerals, while activities such as mining and the use of pesticides or wood preservatives potentially increase the concentration of arsenic in air, water, and soil. The natural arsenic release of a geological material is a threat to the world's drinking water sources. In the aqueous phase, arsenic is found in inorganic form, mainly as arsenate and arsenite; the contamination of groundwater by salts of this element gives rise to what is known as endemic regional hydroarsenicism. The International Agency for Research on Cancer (IARC) categorizes inorganic As within group I, as a substance with proven carcinogenic action in humans. The presence of As in groundwater has been reported in several countries such as Argentina, Mexico, Bangladesh, Canada and the United States. Regarding the concentration of arsenic in drinking water, the World Health Organization (WHO) and the Environmental Protection Agency (EPA) establish a maximum concentration of 10 μg L⁻¹. In Mexico, in some states such as Hidalgo, Morelos and Michoacán, arsenic concentrations around 1000 μg L⁻¹ have been found in bodies of water, well above what is allowed by Mexican regulations in NOM-127-SSA1-1994, which establishes a limit of 25 μg L⁻¹. Given this problem in Mexico, this research proposes the use of a natural Mexican zeolite (clinoptilolite type), native to the district of Etla in the central valley region of Oaxaca, as an adsorbent for the removal of arsenic. The zeolite was subjected to conditioning with iron oxide by the precipitation-impregnation method with a 0.5 M iron nitrate solution, in order to increase the natural adsorption capacity of this material.
The removal of arsenic was carried out in a column with a fixed bed of conditioned zeolite, since this combines the advantages of a conventional filter with those of a natural adsorbent medium, providing a continuous, low-cost treatment that is relatively easy to operate and suitable for implementation in marginalized areas. The zeolite was characterized by XRD, SEM/EDS, and FTIR before and after the arsenic adsorption tests. The results showed that the conditioning method is adequate for preparing adsorbent materials, since it does not modify the zeolite structure, and that with a particle size of 1.18 mm, an initial As(V) concentration of 1 ppm, a pH of 7, and room temperature, a removal of 98.7% was obtained with an adsorption capacity of 260 μg As g⁻¹ zeolite. These results indicate that the conditioned zeolite is effective for the elimination of arsenate in water containing up to 1000 μg As L⁻¹ and could be suitable for removing arsenate from well water.
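The removal percentage and adsorption capacity reported above follow from standard batch mass-balance expressions. A minimal sketch of the arithmetic (the equilibrium concentration, solution volume, and zeolite mass below are illustrative placeholders, not values taken from the study):

```python
def removal_efficiency(c0_ug_L, ce_ug_L):
    """Percent of arsenic removed, from initial and equilibrium concentrations."""
    return 100.0 * (c0_ug_L - ce_ug_L) / c0_ug_L

def adsorption_capacity(c0_ug_L, ce_ug_L, volume_L, mass_g):
    """Arsenic adsorbed per gram of zeolite (ug As / g)."""
    return (c0_ug_L - ce_ug_L) * volume_L / mass_g

# A 1 ppm (1000 ug/L) feed reduced to a hypothetical 13 ug/L residual
# reproduces the 98.7% removal figure reported above.
print(removal_efficiency(1000.0, 13.0))  # -> 98.7
```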

Keywords: adsorption, arsenic, iron conditioning, natural zeolite

Procedia PDF Downloads 149
144 Functional Dimension of Reuse: Use of Antalya Kaleiçi Traditional Dwellings as Hotel

Authors: Dicle Aydın, Süheyla Büyükşahin Sıramkaya

Abstract:

The conservation concept gained importance especially in the 19th century, acquiring value with the changes and developments experienced globally. The basic values at the essence of the concept are important for the continuity of historical and cultural fabrics that have a character special to them. Reuse of settlements and spaces carrying historical and cultural values, within the frame of socio-cultural and socio-economic conditions, is related to functional value. The functional dimension of reuse signifies interrogation of the usage potential of a building for an aim other than its original one. If a building carrying historical and cultural values cannot be used with its original function because of environmental, economic, structural, or functional reasons, maintaining it through reuse is advantageous from the point of view of environmental ecology. By giving it a new function, a requirement of society is fulfilled and a cultural entity is conserved for its functional value. In this study, the functional dimension of reuse is exemplified in Antalya Kaleiçi, which has a special location and importance with its natural, cultural, and historical heritage. The Antalya Kaleiçi settlement preserves its liveliness as a touristic urban fabric with its almost fifty thousand years of past, traditional urban form, civil architectural examples of the 18th and 19th centuries reflecting the life style of the region, and monumental buildings. The civil architectural examples in the fabric have a special character shaped by the Mediterranean climate, with their open or closed outer sofas, one, two, or three storeys, courtyards, and oriels. The study investigates the reuse of five civil architectural examples as a boutique hotel, forming a whole with their environmental arrangements, and analyzes how the spatial requirements of a boutique hotel are fulfilled in traditional dwellings.
The use of a cultural entity as a boutique hotel is evaluated under the headings of i. functional requirement, ii. adequacy of spatial dimensions, and iii. functional organization. The hotel, with a capacity of 70 beds and 28 rooms in total, contains closed and open restaurants, a kitchen, a pub, a lobby, and administrative offices. On the second and third floors, the hotel, surrounded by narrow streets on three sides, expands toward the urban space by means of oriels. This boutique hotel, formed from five different dwellings with similar plan schemes in the traditional fabric, is distinctive for its structure opening to the outside, its units connected to each other by courtyards, and its outdoor spaces, which gain mobility from the elevation differences between courtyards.

Keywords: reuse, adaptive reuse, functional dimension of reuse, traditional dwellings

Procedia PDF Downloads 295
143 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, yielding data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. The recognized knowledge and patterns are finally integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guiding field operations, leading to better designs, higher oil recovery, and greater economic return for future wells in unconventional oil reserves.
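As a rough illustration of the analysis steps named above, the sketch below runs a NumPy-only principal component analysis and a minimal Lloyd-style K-means on a synthetic stand-in dataset (the feature matrix and all names are hypothetical; the study's actual well database and tooling are not described in code form):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the well database: rows are wells, columns are
# features (e.g. porosity, stage count, proppant mass); values are random
# placeholders, not real data.
X = rng.normal(size=(200, 6))

# Principal component analysis via SVD of the centered data matrix:
# emphasizes the directions of largest variation in the dataset.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = Xc @ Vt[:2].T            # project each well onto the first two PCs

def kmeans(X, k=3, iters=50, seed=0):
    """Minimal Lloyd-style K-means: partition observations into k clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers)**2).sum(axis=-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(scores, k=3)
```

In practice one would cluster on many more engineered features; projecting onto the leading principal components first, as here, is one common way to make the clusters easy to visualize.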

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 265
142 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

The Applied Element Method (AEM) was developed to aid in the analysis of structural collapse. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state until collapse. The elements in AEM are connected by sets of normal and shear springs along the element edges, which represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of progressive collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, the computational cost is high, and choosing appropriate material models is difficult. In this work, the Applied Element Method is developed and coded to significantly improve accuracy and reduce computational cost. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method uses Gaussian quadrature to distribute the springs. Usually the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, while with equally spaced springs at least 5 were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs depending on the elasticity of the material.
After the first Newton-Raphson iteration, the von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated between regions to strictly identify the elastic and plastic zones in the cross section. Since a rectangular cross section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region, reducing the minimum number of springs in elasto-plastic cases to only 6 and thereby improving the computational cost. All the work is done in MATLAB, and the results will be compared with finite element models of the structural elements in ANSYS.
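The Gaussian spring placement described above can be sketched briefly: springs sit at Gauss-Legendre nodes mapped onto the element face, with the quadrature weights acting as tributary widths. This is a minimal illustration in Python, not the authors' MATLAB implementation:

```python
import numpy as np

def spring_positions_equal(n, length):
    """Equally spaced springs: midpoints of n equal segments along the face."""
    return (np.arange(n) + 0.5) * length / n

def spring_positions_gauss(n, length):
    """Springs at Gauss-Legendre points; weights act as tributary widths."""
    xi, wq = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * length * (xi + 1.0)                # map nodes to [0, length]
    w = 0.5 * length * wq                        # map weights accordingly
    return x, w

x, w = spring_positions_gauss(2, 1.0)
# A 2-point Gauss rule integrates polynomials up to degree 3 exactly, which
# is consistent with 2 Gaussian springs sufficing for the linear stress
# profile across the face in the perfectly elastic case.
```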

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 207
141 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to insufficient resolution of the SFS dynamics. Prediction capabilities are notably enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Second, the exploration extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. In the a priori study, correlation coefficients exceeding 90% are observed for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models, although these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. As filter anisotropy intensifies, the results of the DSM and DMM worsen, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
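The core idea behind direct deconvolution, that an invertible filter's transfer function can be divided out in spectral space to recover subfilter content, can be shown in a toy 1D setting (illustrative only; the paper's DDM operates on 3D turbulence fields with the full set of filters listed above):

```python
import numpy as np

# Toy 1D analogue of direct deconvolution with an invertible Gaussian filter.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
u = np.sin(x) + 0.5 * np.sin(3 * x)         # stand-in "unfiltered" field

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
delta = 4 * L / N                           # filter width: four grid spacings
G = np.exp(-(k * delta)**2 / 24.0)          # Gaussian filter transfer function

u_bar = np.fft.ifft(G * np.fft.fft(u)).real   # filtered (resolved) field
# Direct deconvolution: divide out the transfer function, which is well
# defined here because the Gaussian filter never vanishes in spectral space.
u_rec = np.fft.ifft(np.fft.fft(u_bar) / G).real
```

For filters that decay to near zero at the grid cutoff, this division amplifies noise, which is one reason the choice of filter and of FGR matters so much in the study above.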

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 53
140 Study of the Kinetics of Formation of Carboxylic Acids Using Ion Chromatography during Oxidation Induced by Rancimat of the Oleic Acid, Linoleic Acid, Linolenic Acid, and Biodiesel

Authors: Patrícia T. Souza, Marina Ansolin, Eduardo A. C. Batista, Antonio J. A. Meirelles, Matthieu Tubino

Abstract:

Lipid oxidation is a major cause of the deterioration of biodiesel quality, because the waste generated damages engines. Among the main undesirable effects are increases in viscosity and acidity, leading to the formation of insoluble gums and sediments that block fuel filters. Auto-oxidation is defined as the spontaneous reaction of atmospheric oxygen with lipids. Unsaturated fatty acids are usually the components affected by such reactions; they are present as free fatty acids, fatty esters, and glycerides. To determine the oxidative stability of biodiesels through the induction period (IP), the Rancimat method is used, which allows continuous monitoring of the induced oxidation of the samples. During lipid oxidation, volatile organic acids are produced as byproducts; in addition, other byproducts, including alcohols and carbonyl compounds, may be further oxidized to carboxylic acids. Using the ion chromatography (IC) methodology developed in this work to analyze the water contained in the conductimetric vessel, organic anions of carboxylic acids were quantified in samples subjected to Rancimat-induced oxidation. The optimized chromatographic conditions were: eluent water:acetone (80:20 v/v) with 0.5 mM sulfuric acid; flow rate 0.4 mL min⁻¹; injection volume 20 µL; eluent suppressor 20 mM LiCl; analytical curve from 1 to 400 ppm. The samples studied were methyl biodiesel from soybean oil and the unsaturated fatty acid standards oleic, linoleic, and linolenic acid. The induced oxidation kinetics curves were constructed by analyzing the water contained in the conductimetric vessels, each removed from the Rancimat apparatus at prefixed time intervals. About 3 g of sample were used, at 110 °C and an air flow rate of 10 L h⁻¹.
The water of each conductimetric Rancimat measuring vessel, where the volatile compounds were collected, was filtered through a 0.45 µm filter and analyzed by IC. From the kinetic data, the formation rates of the organic anions of the carboxylic acids were calculated. The observed order of anion formation rates was: formate >>> acetate > hexanoate > valerate for oleic acid; formate > hexanoate > acetate > valerate for linoleic acid; and formate >>> valerate > acetate > propionate > butyrate for linolenic acid. It is possible to suppose that propionate and butyrate are obtained mainly from linolenic acid and that hexanoate originates from oleic and linoleic acids. For the methyl biodiesel, the order of anion formation was: formate >>> acetate > valerate > hexanoate > propionate. According to the total formation rates of these anions produced during induced degradation, the fatty acids can be assigned the reactivity order: linolenic acid > linoleic acid >>> oleic acid.
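The formation rates compared above are, in essence, slopes of concentration-time curves. A minimal sketch of that calculation, assuming a simple linear fit over the induction interval (the numbers below are made up for the demonstration; the actual kinetic data are in the study):

```python
import numpy as np

# Hypothetical kinetic data: anion concentration (ppm) in the conductimetric
# vessel water at fixed induction times (h); values are placeholders.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
c_formate = np.array([0.0, 21.0, 39.0, 62.0, 80.0])

# The formation rate is taken as the least-squares slope of the
# concentration-time curve, in ppm per hour.
rate, intercept = np.polyfit(t, c_formate, 1)
```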

Keywords: anions of carboxylic acids, biodiesel, ion chromatography, oxidation

Procedia PDF Downloads 448
139 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in the city neighborhood and quantifies the VOC content of the city's air samples. One of the outcomes of the project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and for the treatment of the related respiratory and other diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis.
VOC measurement of the air samples was performed using gas chromatography with flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. In this study, sorbent tubes (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) were used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare the samples for measurement by GC-MS/FID. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed markedly between wildfire and ordinary days: 83.0 μg/m³ during the wildfire period versus 23.7 μg/m³ in daily samples. Elevated concentrations of iron, nickel, and manganese were found in all samples, and mercury was found in some samples; at high doses, such elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs

Procedia PDF Downloads 123
138 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization

Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries

Abstract:

The textile industry plays an important role in the world economy, especially in developing countries, and the dyes and pigments it uses are significant pollutants. Most of them are azo dyes, which contain a chromophore (-N=N-) in their structure. There are many methods for removing dyes from wastewater, such as chemical coagulation, flocculation, precipitation, and ozonation, but these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has been widely used owing to the non-toxic, insoluble, inexpensive, and highly reactive properties of the titanium dioxide semiconductor (TiO2). Melt electrospinning is an attractive manufacturing process for producing thin fibers from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance, and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and the applicability of the prepared chemically activated composite and pristine fabrics to a novel treatment of dyeing wastewater was evaluated. A photocatalyzer material was prepared from TiO2 nanoparticles (nTi) and PP by a melt-electrospinning technique, and the electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments.
The photocatalytic performance of the nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, 40 mg/L). The nTi-PP composite fabrics were successfully processed into a uniform fibrous network of beadless fibers with diameters of 800 ± 0.4 nm. The process parameters were a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260-300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, with decolorization efficiencies over 90%. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as photocatalyzers for wastewater treatment. The results show that melt-electrospun nTi-loaded GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus merit further investigation.
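Decolorization efficiency is conventionally computed from the drop in the dye's absorbance (or, via a calibration curve, its concentration). A one-line sketch with illustrative absorbance values rather than measured ones:

```python
def decolorization_efficiency(a0, at):
    """Percent decolorization from initial (a0) and final (at) MO absorbance."""
    return 100.0 * (a0 - at) / a0

# A hypothetical absorbance drop from 1.00 to 0.08 corresponds to 92%
# decolorization, comparable in magnitude to the >90% efficiencies above.
print(round(decolorization_efficiency(1.00, 0.08), 1))  # -> 92.0
```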

Keywords: titanium dioxide nanoparticles, polypropylene, melt-electrospinning

Procedia PDF Downloads 247
137 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico's population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person's physical, mental, and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person's emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors such as EEG, together with luminosity sensing and face recognition, we gather information on the subject's current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to keep only the specific groups of information that relate to the subject's emotions and to the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level, and noise. Finally, a neural network based control algorithm is given the data obtained in order to provide feedback to the system and automate the modification of the environment variables and of the audiovisual content shown, so that these changes can positively alter the subject's emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject's emotional state: red illumination increased the impact of violent images, while green illumination, together with relaxing images, decreased the subject's levels of anxiety. Specific differences were found between men and women as to which type of images generated a greater impact in each gender.
The population sample was mainly constituted by college students, whose data showed a decreased sensitivity to violence toward humans. Despite the early stage of the control algorithm, the results obtained from the population sample give a better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people's lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this can also be applied to control a person's emotions and shift their decision making.

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 120