Search results for: human auditory system model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35130

18900 How Group Education Impacts Female Factory Workers’ Behavior and Readiness to Receive Mammography and Pap Smears

Authors: Memnun Seven, Mine Bahar, Aygül Akyüz, Hatice Erdoğan

Abstract:

Background: The workplace has been deemed a suitable location for educating many women at once about cancer screening. Objective: To determine how group education about early diagnostic methods for breast and cervical cancer affects women’s behavior and readiness to receive mammography and Pap smears. Methods: This semi-interventional study was conducted at a textile factory in Istanbul, Turkey. Female workers (n = 125) were included in the study. A participant identification form and knowledge evaluation form developed for this study, along with the trans-theoretical model, were used to collect data. A 45-min interactive group education was given to the participants. Results: Upon contacting participants 3 months after group education, 15.4% (n = 11) stated that they had since received a mammogram and 9.8% (n = 7) a Pap smear. As suggested by the trans-theoretical model, group education increased participants’ readiness to receive cancer screening, along with their knowledge of breast and cervical cancer. Conclusions: Group education positively impacted women’s knowledge of cancer and their readiness to receive mammography and Pap smears. Group education can therefore potentially create awareness of cancer screening tests among women and improve their readiness to receive such tests.

Keywords: cancer screening, educational intervention, participation, women

Procedia PDF Downloads 324
18899 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method

Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López

Abstract:

The Rayuela method is a success story: it is part of a project selected by the European Commission in response to its own challenge call for a better understanding of the human, social, and organisational factors involved in fighting crime. The Rayuela method focuses specifically on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people’s lives, they are the key target of the Rayuela method because, whether as victims or as perpetrators, they are the most vulnerable link of the chain. Considering their increased time spent online, the limited control over their internet usage, and their low awareness of cyber threats and their potential impact, the proliferation of incidents due to human mistakes is understandable. 51% of Europeans feel they are not well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. On the other hand, law enforcement has noted that more and more young people are committing cybercrimes. This is an international problem with considerable cost implications; it is estimated that crimes in cyberspace will cost the global economy $445B annually. Understanding these phenomena points to the need for a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela brings together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers to develop novel methodologies for better understanding the factors affecting online behavior related to new forms of cyber criminality, as well as for channeling the potential of these young talents toward cybersecurity and technology.
Rayuela’s main goal is to better understand the drivers and human factors behind relevant forms of cyber criminality, and to empower and educate young people about the benefits, risks, and threats intrinsically linked to internet use through play, thus preventing and mitigating cybercriminal behavior. To reach that goal, an interdisciplinary consortium of 17 international partners carries out research and actions such as profiling and case studies of cybercriminals and victims, risk assessments, studies on the Internet of Things and its vulnerabilities, development of a serious-gaming environment, training activities, data analysis and interpretation using artificial intelligence, and testing and piloting. To facilitate real-world implementation of the Rayuela method as a community policing strategy, it is crucial to rely on a police force with a solid background in trust-building and community policing to conduct the piloting, specifically with young people. In this sense, the Valencia Local Police is a pioneering force in working with young people on conflict resolution, providing police mediation, peer mediation services, and advice. It is, for example, an official mediation institution, so agreements drawn up by its police mediators carry, once signed by the parties, the value of a judicial decision.

Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people

Procedia PDF Downloads 125
18898 Analytical Design of Fractional-Order PI Controller for Decoupling Control System

Authors: Truong Nguyen Luan Vu, Le Hieu Giang, Le Linh

Abstract:

The FOPI controller is proposed based on the main properties of the decoupling control scheme as well as fractional calculus. Using the simplified decoupling technique, the transfer function of the decoupled apparent process is first separated into a set of n equivalent independent processes, expressed as the ratio of the diagonal elements of the original open-loop transfer function to those of the dynamic relative gain array. A fractional-order PI controller is then developed for each control loop based on Bode’s ideal transfer function, which gives the desired fractional closed-loop response in the frequency domain. Simulation studies were carried out to evaluate the proposed design approach in a fair comparison with other existing methods according to structured singular value (SSV) theory, which is used to measure the robust stability of control systems under multiplicative output uncertainty. The simulation results indicate that the proposed method consistently performs well, with fast and well-balanced closed-loop time responses.
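
The tuning idea behind Bode’s ideal transfer function can be illustrated with a short sketch (a textbook relation, not the paper’s design procedure): for the ideal loop transfer function L(s) = (ωc/s)^α, the phase of L(jω) is a constant −απ/2 at every frequency, so the phase margin depends only on the fractional order α and the loop is robust to gain variations.

```python
import math

def fractional_order_for_phase_margin(phase_margin_deg):
    """For Bode's ideal loop transfer function L(s) = (wc/s)**alpha,
    arg L(jw) = -alpha*pi/2 at every frequency, so the phase margin
    is pi - alpha*pi/2 regardless of the gain crossover wc.
    Solve that relation for alpha."""
    phi_m = math.radians(phase_margin_deg)
    return 2.0 * (1.0 - phi_m / math.pi)

# A 60-degree phase margin calls for a fractional order of 4/3,
# i.e. a closed loop between first- and second-order behaviour.
alpha = fractional_order_for_phase_margin(60.0)
print(round(alpha, 4))  # 1.3333
```

The gain crossover ωc then sets the closed-loop speed independently of the achieved phase margin, which is the iso-damping property exploited in FOPI design.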

Keywords: ideal transfer function of bode, fractional calculus, fractional order proportional integral (FOPI) controller, decoupling control system

Procedia PDF Downloads 327
18897 Attachment Systems and Psychotherapy: An Internal Secure Caregiver to Heal and Protect the Parts of Our Clients: InCorporer Method

Authors: Julien Baillet

Abstract:

In light of 30 years of scientific research, the InCorporer method was created in 2019 as a new approach to healing traumatic, developmental, and dissociative injuries. Following natural nervous system functions, InCorporer aims to heal, develop, and update the different defensive mammalian subsystems: fight, flight, freeze, feign death, cry for help, and energy regulator. The dimensions taken into account are to (i) heal the traumatic injuries that are still bleeding, (ii) develop the systems that never received the security, attention, and affection they needed, and (iii) update the parts that stayed stuck in the past, unaware for too long that they are now out of danger. Through the Present Part and its caregiving skills, the InCorporer method enables a balanced, soothed, and collaborative personality system. To be as integrative as possible, the InCorporer method has been designed according to several fields of research, such as structural dissociation theory, attachment theory, and information processing theory. In this paper, the author presents how the internal caregiver is developed and trained to heal all the different parts/subsystems of our clients through mindful attention and reflex movement integration.

Keywords: PTSD, attachment, dissociation, part work

Procedia PDF Downloads 72
18896 Field Tests and Numerical Simulation of Tunis Soft Soil Improvement Using Prefabricated Vertical Drains

Authors: Marwa Ben Khalifa, Zeineb Ben Salem, Wissem Frikha

Abstract:

This paper presents a case study of the “Radès la Goulette” bridge project using prefabricated vertical drains (PVD) combined with step-by-step construction of preloading embankments with an average height of about 6 m. These embankments are founded on a highly compressible layer of Tunis soft soil. The construction steps included extensive soil instrumentation, such as piezometers and settlement plates, for monitoring the dissipation of excess pore water pressure and the settlement during consolidation of the Tunis soft soil. An axisymmetric numerical model using the 2D finite difference code FLAC was developed and calibrated against laboratory tests to predict the soil behavior and consolidation settlements. The impact of the constitutive model chosen to simulate the soft soil behavior is investigated. The analyses show that the numerical model provided satisfactory predictions of the field performance during construction of the Radès la Goulette embankment, and the results demonstrate the effectiveness of PVD in accelerating consolidation. A comparison of the numerical results with theoretical analysis is also presented.
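
The acceleration that PVDs provide is commonly described by Barron’s classical equal-strain solution for radial drainage (a textbook relation, not the paper’s FLAC model), where the average degree of consolidation is U_h = 1 − exp(−8·T_h/F(n)) with time factor T_h = c_h·t/d_e² and spacing factor F(n). The values below are illustrative assumptions, not the project’s parameters.

```python
import math

def spacing_factor(n):
    # F(n) in Barron's equal-strain solution, with n = de/dw
    # (influence diameter of one drain over the drain diameter).
    return (n**2 / (n**2 - 1.0)) * math.log(n) - (3.0 * n**2 - 1.0) / (4.0 * n**2)

def degree_of_radial_consolidation(ch, de, n, t):
    # U_h = 1 - exp(-8*Th/F(n)), with Th = ch*t/de**2 (consistent units).
    Th = ch * t / de**2
    return 1.0 - math.exp(-8.0 * Th / spacing_factor(n))

# Assumed values: ch = 2 m^2/year, influence diameter de = 1.6 m, n = 20.
U_3mo = degree_of_radial_consolidation(2.0, 1.6, 20, 0.25)  # after 3 months
U_1yr = degree_of_radial_consolidation(2.0, 1.6, 20, 1.0)   # after 1 year
```

With these assumed values roughly half of the primary consolidation is reached in three months, which is the kind of acceleration the monitoring data in the case study are meant to confirm.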

Keywords: tunis soft soil, radès bridge project, prefabricated vertical drains, FLAC, acceleration of consolidation

Procedia PDF Downloads 120
18895 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies, but mainly by qualitative observation; correlations between the volume change of the chamber and the maximum pressure remain limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon’s life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, « 5.56x45mm NATO » and « 7.62x51mm NATO », are considered. First, a coordinate measuring machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a special WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. Two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. The results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances, and the maximum pressures are compared and analyzed.
Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.
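
The kind of statistical correlation described above can be sketched with a plain Pearson coefficient; the numbers below are invented for illustration and are not the paper’s measurements.

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical trend: as erosion enlarges the chamber volume (cm^3),
# the maximum pressure (MPa) drops over the barrel's life.
volumes = [1.70, 1.74, 1.79, 1.85, 1.92]
pressures = [380.0, 372.0, 361.0, 349.0, 333.0]
r = pearson_r(volumes, pressures)  # strongly negative
```

A strongly negative r over the measured cycles is what would let the chamber volume serve as a wear indicator for the remaining pressure performance.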

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 209
18894 Data Mining Approach for Commercial Data Classification and Migration in Hybrid Storage Systems

Authors: Mais Haj Qasem, Maen M. Al Assaf, Ali Rodan

Abstract:

Parallel hybrid storage systems consist of a hierarchy of storage devices that vary in data-read speed: as we ascend the hierarchy, reads become faster. Migrating the application’s important data, i.e., the data it will access in the near future, to the uppermost level therefore reduces the application’s I/O waiting time and hence its execution elapsed time. In this research, we implement a trace-driven, two-level parallel hybrid storage system prototype consisting of HDDs and SSDs. The prototype uses data mining techniques to classify the application’s data and so determine its near-future accesses, in parallel with serving its on-demand requests. The important data are continuously migrated to the uppermost level of the hierarchy. Our simulation results show that our data migration approach, integrated with data mining techniques, reduces the application’s execution elapsed time by at least 22% across a variety of traces.
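
The migration idea can be sketched with a toy two-tier store; a simple access-frequency counter stands in here for the paper’s data-mining classifiers (RNN/SVM), and all names are illustrative.

```python
from collections import Counter

class HybridStore:
    """Toy two-tier store: a small fast 'ssd' set backed by a large 'hdd'.
    A frequency counter plays the role of the predictive classifier that
    decides which blocks will be accessed in the near future."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity
        self.ssd = set()          # blocks currently on the fast tier
        self.freq = Counter()     # stand-in for the learned access model

    def migrate(self):
        # Keep the predicted-hot blocks on the fast tier.
        hot = {b for b, _ in self.freq.most_common(self.ssd_capacity)}
        self.ssd = hot

    def access(self, block):
        self.freq[block] += 1
        self.migrate()
        return 'ssd' if block in self.ssd else 'hdd'
```

In the real prototype the classification runs in parallel with on-demand requests, so migration cost overlaps with application I/O instead of adding to it.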

Keywords: hybrid storage system, data mining, recurrent neural network, support vector machine

Procedia PDF Downloads 304
18893 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding exponentially. However, protecting MVDC systems against DC faults is a challenge with consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in their infancy; consequently, there is a lack of MVDC CB standards, including thresholds for acceptable power losses and operation speed. To establish a baseline for comparison, a benchmark system for testing future MVDC CBs is vital. The literature typically gives only the timing sequence of each switch and emphasizes topology, without an in-depth study of the DCCB control algorithm, as circuit breaker control systems are not yet systematic. A digital testing benchmark is designed for proof-of-concept simulation studies using software models; it can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup acquires data from accurate sensors installed on the tested MVDC CB through the general-purpose inputs/outputs (GPIO) of a microcontroller and a PC. Prototype studies on laboratory-based models are achieved using Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc reignition, aiding the coordination of DCCBs in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.

Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark

Procedia PDF Downloads 75
18892 Influence of Degassing on the Curing Behaviour and Void Occurrence Properties of Epoxy / Anhydride Resin System

Authors: Latha Krishnan, Andrew Cobley

Abstract:

Epoxy resin is widely used as a matrix for composites in aerospace, automotive, and electronic applications due to its outstanding mechanical properties. These properties are chiefly predetermined by the chemical structure of the prepolymer and the type of hardener, but they can also be varied by processing conditions such as prepolymer/hardener mixing, degassing, and curing conditions. In this research, the effect of degassing on the curing behaviour and void occurrence is experimentally evaluated for an epoxy/anhydride resin system. The epoxy prepolymer was mixed with an anhydride hardener and accelerator in appropriate quantities. To investigate the effect of degassing on the curing behaviour and void content of the resin, uncured resin samples were prepared using three different methods: (1) no degassing, (2) degassing of the prepolymer, and (3) degassing of the mixed solution of prepolymer and hardener with an accelerator. The uncured resins were tested by differential scanning calorimetry (DSC) to observe changes in curing behaviour across the three samples, analysing factors such as gel temperature, peak cure temperature, and heat of reaction/heat flow during curing. Additionally, the fully cured samples were tested by DSC to identify changes in the glass transition temperature (Tg) between the three samples. To evaluate the effect of degassing on the void content and morphology of the cured epoxy resin, the fractured surfaces were examined under the scanning electron microscope (SEM). In addition, the void content, void geometry, and void fraction were investigated using an optical microscope and ImageJ (image analysis software). It was found that degassing at different stages of resin mixing had significant effects on properties such as the glass transition temperature and the void content and void size of the epoxy/anhydride resin system. For example, degassing the mixed resin under vacuum gave a higher glass transition temperature (Tg) with lower void content.

Keywords: anhydride epoxy, curing behaviour, degassing, void occurrence

Procedia PDF Downloads 212
18891 Characterizing the Diffused Double Layer Properties of Clay Minerals

Authors: N. Saranya

Abstract:

The difference in the characteristic behavior of clay minerals in different electrolyte solutions is dictated by the corresponding variation in their diffused double layer (DDL) thickness. The diffused double layer of a clay mineral has two distinct regions: the inner region, termed the ‘Stern layer’, where ions are strongly attached to the clay surface, and the outer region, where ions are not strongly bonded to the clay surface, termed the ‘diffuse layer’. Within the diffuse layer, there is a plane that forms a boundary between the moving ions and the ions attached to the clay surface, termed the slipping or shear plane, and the potential of this plane is defined as the zeta potential (ζ). Therefore, the variation in the diffused double layer properties of a clay mineral in different electrolyte solutions can be modeled if the corresponding variations in surface charge, surface potential, and zeta potential are computed. In view of this, the present study characterizes the diffused double layer properties of three different clay minerals interacting with different pore fluids by measuring the corresponding variations in surface charge, surface potential, and zeta potential. Further, the observed variation is compared with the Gouy-Chapman model, the widely accepted theoretical model for characterizing the diffused double layer properties of clay minerals.
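
In Gouy-Chapman theory the DDL thickness is commonly characterized by the Debye length 1/κ; a minimal sketch (standard physical constants, symmetric z:z electrolyte in water at 25 °C) shows how the layer contracts as electrolyte concentration rises.

```python
import math

def debye_length(conc_mol_per_L, z=1, eps_r=78.5, T=298.15):
    """Debye length 1/kappa (in meters) from Gouy-Chapman theory
    for a symmetric z:z electrolyte at bulk concentration c."""
    e = 1.602176634e-19       # elementary charge, C
    k = 1.380649e-23          # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    n0 = conc_mol_per_L * 1000.0 * NA  # ion number density, 1/m^3
    kappa = math.sqrt(2.0 * n0 * e**2 * z**2 / (eps_r * eps0 * k * T))
    return 1.0 / kappa

# For a 1:1 electrolyte, 0.1 M gives a DDL thickness of roughly 1 nm,
# while diluting to 0.001 M expands it tenfold (1/kappa ~ c**-0.5).
```

This inverse-square-root dependence on concentration is exactly the DDL compression that makes clay behavior so sensitive to the pore-fluid chemistry.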

Keywords: DDL, surface charge, surface potential, zeta potential

Procedia PDF Downloads 164
18890 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network

Authors: P. Karthick, K. Mahesh

Abstract:

Video has become an increasingly significant component of everyday digital communication. With the growth of richer content and higher display resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video. In this paper, we propose an end-to-end deep video compression model that jointly optimizes all compression components. The method splits the video into frames, compares the images using convolutional neural networks (CNN) to remove duplicates, replaces duplicate images by repeating a single image, and recognizes and detects minute changes using a generative adversarial network (GAN), recorded with long short-term memory (LSTM). Instead of the complete image, only the small changes generated by the GAN are substituted, which enables frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over each frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. The frames are then packed with their parameters by a codec and converted back to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate: on average, the result deviated approximately 10% in quality and more than 50% in size from the original video.
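
The per-channel SVD step can be sketched as a generic rank-k truncation (an illustrative implementation, not the authors’ code): each color channel is factored and only the k largest singular values, the latent factors, are kept.

```python
import numpy as np

def truncate_channel(channel, k):
    """Rank-k approximation of one color channel via SVD,
    keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def compress_frame(frame, k):
    # frame: H x W x 3 array; compress R, G, B channels independently.
    return np.stack([truncate_channel(frame[..., c], k) for c in range(3)],
                    axis=-1)
```

Storing U[:, :k], s[:k], and Vt[:k, :] instead of the full H x W channel reduces storage from H·W to k·(H + W + 1) values per channel, which is where the dimension reduction comes from.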

Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system

Procedia PDF Downloads 184
18889 Impact of Extended Enterprise Resource Planning in the Context of Cloud Computing on Industries and Organizations

Authors: Gholamreza Momenzadeh, Forough Nematolahi

Abstract:

The Extended Enterprise Resource Planning (ERPII) system usually requires massive storage space, powerful servers, and large upfront and ongoing investments to purchase and manage the software and the related hardware, which are not affordable for many organizations. In recent decades, organizations have preferred to adapt their business structures to new technologies in order to remain competitive in the world economy. Cloud computing, one of the tools of information technology (IT), is a modern system that represents the next-generation application architecture. Cloud computing also reduces costs in many ways, such as lower upfront costs for all computing infrastructure and lower costs of maintenance and support. On the other hand, traditional ERPII cannot cope with huge amounts of data and with the relations between organizations. In this study, based on a literature review, ERPII is investigated in the context of cloud computing, where organizations can operate more efficiently, and it is shown that cloud-based ERPII responds to organizations’ needs regarding large amounts of data and inter-organizational relations.

Keywords: extended enterprise resource planning, cloud computing, business process, enterprise information integration

Procedia PDF Downloads 217
18888 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study

Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran

Abstract:

In the railway industry, train sets are designed based on contractual requirements (mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, at the beginning of revenue service, trains often do not achieve the designed mission-profile distance (mileage) within the timeframe due to infrastructure constraints, scarcity of commuters, or other operational challenges, thereby departing from the original design inputs. Since the trains do not run enough to accumulate the designed mileage within the specified time, the car builder risks not achieving the contractual MDBF target. This paper proposes a constant-failure-rate model for situations where mileage accumulation falls short of the design mission profile. The model provides an appropriate MDBF target to be demonstrated based on the actually accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and MDBF target demonstration during low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets are achieved under low mileage accumulation.
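
The constant-failure-rate appropriation can be sketched as follows (illustrative numbers, not the case-study data): under a constant failure rate, the expected failure count scales linearly with mileage, so the point-estimate MDBF and the largest failure count still compatible with a contractual target at any accumulated mileage follow directly.

```python
import math

def mdbf_point_estimate(accumulated_km, failures):
    # Under a constant failure rate, MDBF is estimated as total
    # accumulated distance over the number of chargeable failures.
    return accumulated_km / failures

def max_allowed_failures(accumulated_km, target_mdbf_km):
    # Largest failure count still compatible with the contractual
    # target at the current (low) mileage.
    return math.floor(accumulated_km / target_mdbf_km)

# Hypothetical fleet: 500,000 km accumulated, 5 chargeable failures,
# against a contractual MDBF target of 100,000 km.
estimate = mdbf_point_estimate(500_000.0, 5)       # demonstrated MDBF
budget = max_allowed_failures(250_000.0, 100_000.0)  # failures allowed so far
```

A fuller treatment would put a confidence bound on the estimate (e.g. a chi-square lower bound on MDBF), but the linear scaling above is the core of the appropriation.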

Keywords: mean distance between failures, mileage-based reliability, reliability target appropriations, rolling stock reliability

Procedia PDF Downloads 263
18887 Malignancy Assessment of Brain Tumors Using Convolutional Neural Network

Authors: Chung-Ming Lo, Kevin Li-Chun Hsieh

Abstract:

The World Health Organization classification of central nervous system tumors defines grade 2, 3, and 4 gliomas according to their aggressiveness. For brain tumors, image examination carries a lower risk than biopsy, and it is a challenge to extract the relevant tissue in a biopsy operation; observing the whole tumor structure and composition can provide a more objective assessment. This study therefore proposes a computer-aided diagnosis (CAD) system based on a convolutional neural network to quantitatively evaluate a tumor's malignancy from brain magnetic resonance imaging. A total of 30 grade 2, 43 grade 3, and 57 grade 4 gliomas were collected for the experiment. Parameters transferred from AlexNet were fine-tuned to classify the target brain tumors, achieving an accuracy of 98% and an area under the receiver operating characteristic curve (Az) of 0.99. Without pre-trained features, only 61% accuracy was obtained. The proposed convolutional neural network can accurately and efficiently classify grade 2, 3, and 4 gliomas, and the promising accuracy can provide diagnostic suggestions to radiologists in the clinic.
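
The reported Az is the area under the ROC curve; as a side note, it can be computed directly from classifier scores via the Mann-Whitney formulation (a generic sketch, unrelated to the authors’ implementation): Az is the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counting one half.

```python
def auc_az(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive case
    scores higher, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An Az of 0.99 therefore means the fine-tuned network ranks almost every higher-grade case above almost every lower-grade one.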

Keywords: convolutional neural network, computer-aided diagnosis, glioblastoma, magnetic resonance imaging

Procedia PDF Downloads 142
18886 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for memories and for remembering the time and place of certain events; recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory: it forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus often deteriorates with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages, and with the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many kinds of technology for examining the brain and its neural pathways. For instance, Magnetic Resonance Imaging (MRI) is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain function, a different version of this machine, Functional Magnetic Resonance Imaging (fMRI), is used. fMRI is a neuroimaging procedure conducted while the target brain regions are active; it measures brain activity by detecting changes in blood flow associated with neural activity. Neurons need more oxygen when they are active, and fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions that scientists can monitor. Electroencephalography (EEG) is also a significant way to monitor the human brain. EEG is more versatile and cost-efficient than fMRI: it measures the electrical activity generated by the cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories with these devices, and inspections of neurons and neural pathways have become correspondingly more intense and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 77
18885 An Examination of Factors Leading to Knowledge-Sharing Behavior of Sri Lankan Bankers

Authors: Eranga N. Somaratna, Pradeep Dharmadasa

Abstract:

In the current competitive environment, the factors leading to organizational success are not limited to investments of capital, labor, and raw material, but include the ability of all members of an organization to innovate with knowledge. However, knowledge on its own cannot provide organizations with its promised benefits unless it is shared, and organizations increasingly experience unsuccessful knowledge-sharing efforts. Against this backdrop, and given the dearth of research in this area in the South Asian context, the study set out to develop an understanding of the factors that influence knowledge-sharing behavior within an organizational framework, using widely accepted social psychology theories. The purpose of the article is to discover the determinants of knowledge-sharing intention and actual knowledge-sharing behavior of bank employees in Sri Lanka using an aggregate model. Knowledge-sharing intentions are widely discussed in the literature through the application of Ajzen’s Theory of Planned Behavior (TPB) and Social Capital Theory (SCT) separately; both theories explain workers' knowledge-sharing intention well, but with limitations. The study therefore combines the TPB with SCT in developing its conceptual model. Data were collected through a self-administered paper-based questionnaire from 199 bank managers from 6 public and private banks of Sri Lanka and analyzed with Structural Equation Modelling (SEM). The study supported six of the nine hypotheses: Attitudes toward Knowledge-Sharing Behavior, Perceived Behavioral Control, Trust, Anticipated Reciprocal Relationships, and Actual Knowledge-Sharing Behavior were supported, while Organizational Climate, Sense of Self-Worth, and Anticipated Extrinsic Rewards were not, in determining knowledge-sharing intentions. Furthermore, the study investigated the effect of the bankers' demographic factors (age, gender, position, education, and experience) on actual knowledge-sharing behavior. The findings should be confirmed using a larger sample, as well as through cross-sectional studies. The results highlight the need for theoreticians to combine TPB and SCT in understanding knowledge workers’ intentions and actual behavior, and for practitioners to focus on the perceptions and needs of the individual knowledge worker and to cultivate a culture of knowledge sharing in the organization for mutual benefit.

Keywords: banks, employees behavior, knowledge management, knowledge sharing

Procedia PDF Downloads 130
18884 The Determination of Pb and Zn Phytoremediation Potential and Effect of Interaction between Cadmium and Zinc on Metabolism of Buckwheat (Fagopyrum Esculentum)

Authors: Nurdan Olguncelik Kaplan, Aysen Akay

Abstract:

Nowadays soil pollution has become a global problem. External added polluters to the soil are destroying and changing the structure of the soil and the problems are becoming more complex and in this sense the correction of these problems is going to be harder and more costly. Cadmium has got a fast mobility in the soil and plant system because of that cadmium can interfere very easily to the human and animal food chain and in the same time this can be very dangerous. The cadmium which is absorbed and stored by the plants is causing to many metabolic changes of the plants like; protein synthesis, nitrogen and carbohydrate metabolism, enzyme (nitrate reductase) activation, photo and chlorophyll synthesis. The biological function of cadmium is not known over the plants and it is not a necessary element. The plant is generally taking in small amounts the cadmium and this element is competing with the zinc. Cadmium is causing root damages. Buckwheat (Fagopyrum esculentum) is an important nutraceutical because of its high content of flavonoids, minerals and vitamins, and their nutritionally balanced amino-acid composition. Buckwheat has relatively high biomass productivity, is adapted to many areas of the world, and can flourish in sterile fields; therefore buckwheat plants are widely used for the phytoremediation process.The aim of this study were to evaluate the phytoremediation capacity of the high-yielding plant Buckwheat (Fagopyrum esculentum) in soils contaminated with Cd and Zn. The soils were applied to differrent doses cd(0-12.5-25-50-100 mg Cd kg−1 soil in the form of 3CdSO4.8H2O ) and Zn (0-10-30 mg Zn kg−1 soil in the form of ZnSO4.7H2O) and incubated about 60 days. Later buckwheat seeds were sown and grown for three mounth under greenhouse conditions. The test plants were irrigated by using pure water after the planting process. Buckwheat seeds (Gunes and Aktas species) were taken from Bahri Dagdas International Agricultural Research. 
After harvest, the Cd and Zn concentrations of plant biomass and grain, the yield, and the translocation factors (TFs) for Cd and Zn were determined. Cadmium accumulation in biomass and grain increased significantly in a dose-dependent manner. Long-term field trials are required to further investigate the potential of buckwheat to reclaim such soils, although this could be undertaken in conjunction with actual remediation schemes. The differences in element accumulation among the genotypes were affected more by the properties of the genotypes than by the soil properties; the Gunes genotype accumulated more cadmium than the Aktas genotype.
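The translocation factors mentioned above, together with the bioconcentration factor commonly reported alongside them, are simple concentration ratios. The sketch below uses the standard definitions from the phytoremediation literature; the paper does not state its exact formulas, and the sample concentrations are illustrative, not data from the study.

```python
def translocation_factor(shoot_conc, root_conc):
    """TF = element concentration in above-ground tissue / concentration in roots.
    TF > 1 suggests efficient root-to-shoot transfer (desirable for phytoextraction)."""
    return shoot_conc / root_conc

def bioconcentration_factor(plant_conc, soil_conc):
    """BCF = element concentration in plant tissue / concentration in soil."""
    return plant_conc / soil_conc

# Illustrative values (mg Cd per kg dry matter / mg Cd per kg soil), not study data
tf = translocation_factor(shoot_conc=4.2, root_conc=3.0)
bcf = bioconcentration_factor(plant_conc=4.2, soil_conc=25.0)
```

A genotype with TF > 1 and a high BCF would be the stronger phytoextraction candidate under these definitions.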

Keywords: buckwheat, cadmium, phytoremediation, zinc

Procedia PDF Downloads 415
18883 Web-Based Cognitive Writing Instruction (WeCWI): A Hybrid e-Framework for Instructional Design

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI) is a hybrid e-framework that consolidates instructional design and language development toward the development of web-based instruction (WBI). WeCWI divides instructional design into macro and micro perspectives. In the macro perspective, a 21st-century educator is encouraged to disseminate knowledge and share ideas with in-class and global learners. By leveraging the virtue of technology, WeCWI aims to transform the educator into an aggregator, curator, publisher, social networker, and finally a web-based instructor. Since the most notable contribution of integrating technology is that it serves both as a teaching tool and as a stimulus for learning, WeCWI focuses on the use of contemporary web tools based on the multiple roles played by the 21st-century educator. The micro perspective draws attention to pedagogical approaches focusing on three main aspects: reading, discussion, and writing. With the effective use of these pedagogical approaches, technology adds new dimensions and expands the bounds of learning capacity. Lastly, WeCWI also imparts the fundamental theoretical concepts for web-based instructors' awareness, such as interactionism, the e-learning interactional-based model, computer-mediated communication (CMC), cognitive theories, and learning style models.

Keywords: web-based cognitive writing instruction, WeCWI, instructional design, e-framework, web-based instructor

Procedia PDF Downloads 435
18882 Comparison of Steel and Composite Analysis of a Multi-Storey Building

Authors: Çiğdem Avcı Karataş

Abstract:

Mitigation of structural damage caused by earthquakes and reduction of fatalities are among the main concerns of engineers in the seismically active zones of the world. To achieve this aim, many technologies have been developed in recent decades and applied in the construction and retrofit of structures. Turkey is well known as a country with a high level of seismicity, and steel-composite structures are competitive there today in comparison with other structural types, such as steel-only or concrete structures. Composite construction is the dominant form of construction for the multi-storey building sector. The reason composite construction performs so well can be expressed simply: concrete is good in compression and steel is good in tension. By joining the two materials together structurally, these strengths can be exploited to produce a highly efficient design. The reduced self-weight of composite elements has a knock-on effect, reducing the forces in the elements supporting them, including the foundations. The floor-depth reductions achievable with composite construction can also provide significant benefits in terms of the costs of services and of the building envelope. The scope of this paper covers the analysis, materials take-off, cost analysis, and economic comparison of a multi-storey building with composite and steel frames. The aim of this work is to show that designing load-carrying systems as composite is more economical than designing them as steel. The design of the nine-storey building under consideration was carried out according to the 2007 Turkish Earthquake Code using static and dynamic analysis methods. Plastic analysis methods were used for both the steel and composite systems; the steel system was checked for compliance with EC3 and the composite system for compliance with EC4.
The comparison reveals that the composite load-carrying system is more economical than the steel load-carrying system, considering both the materials used in the load-carrying system and the workmanship required.
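The structural efficiency argued for above rests on elastic composite-section behavior, which is conventionally computed with the transformed-area method: the concrete slab is converted to an equivalent steel area via the modular ratio n = Es/Ec, and the elastic neutral axis follows from a first-moment-of-area calculation. The sketch below uses hypothetical section dimensions, not those of the building analyzed in the paper.

```python
def composite_neutral_axis(b_eff, t_slab, a_steel, y_steel, e_s=210e3, e_c=33e3):
    """Elastic neutral axis (measured from the top of the slab, mm) of a
    steel beam acting compositely with a concrete slab, by transformed area."""
    n = e_s / e_c                      # modular ratio Es/Ec
    a_slab = (b_eff / n) * t_slab      # slab area transformed to steel units
    y_slab = t_slab / 2.0              # slab centroid below slab top
    return (a_slab * y_slab + a_steel * y_steel) / (a_slab + a_steel)

# Hypothetical section: 120 mm slab, 2000 mm effective width, steel area
# 8450 mm^2 with centroid 150 mm below the slab soffit
y_na = composite_neutral_axis(b_eff=2000.0, t_slab=120.0,
                              a_steel=8450.0, y_steel=120.0 + 150.0)
```

With these numbers the neutral axis lands near the slab soffit, so most of the concrete works in compression and most of the steel in tension, which is precisely the efficiency the abstract describes.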

Keywords: composite analysis, earthquake, steel, multi-storey building

Procedia PDF Downloads 565
18881 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while preserving the solution matrix structure, greatly reducing the computational cost compared with the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can detect outliers exactly when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of the outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Using the recovered "clean" data from the proposed method gives much better parameter estimates than using the raw data.
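The low-rank-plus-sparse split described above can be illustrated numerically. The sketch below is not the paper's fast structured SDP algorithm: it uses the generic inexact augmented-Lagrangian iteration for robust PCA (singular-value thresholding for the low-rank part, soft thresholding for the sparse part), which computes the same decomposition under standard assumptions. The matrix sizes, outlier positions, and the textbook defaults for λ and μ are illustrative.

```python
import numpy as np

def _svt(m, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def _soft(m, tau):
    # entrywise soft thresholding: proximal operator of the l1 norm
    return np.sign(m) * np.maximum(np.abs(m) - tau, 0.0)

def robust_pca(d, n_iter=500):
    """Split d into low-rank l plus sparse s via inexact augmented Lagrangian."""
    rows, cols = d.shape
    lam = 1.0 / np.sqrt(max(rows, cols))        # textbook sparsity weight
    mu = rows * cols / (4.0 * np.abs(d).sum())  # textbook penalty parameter
    l = np.zeros_like(d); s = np.zeros_like(d); y = np.zeros_like(d)
    for _ in range(n_iter):
        l = _svt(d - s + y / mu, 1.0 / mu)
        s = _soft(d - l + y / mu, lam / mu)
        y += mu * (d - l - s)                   # dual (multiplier) update
    return l, s

# Rank-1 "system response" matrix corrupted by two gross outliers
rng = np.random.default_rng(0)
low = np.outer(rng.standard_normal(20), rng.standard_normal(20))
spikes = np.zeros((20, 20)); spikes[3, 7] = 10.0; spikes[12, 2] = -8.0
l_hat, s_hat = robust_pca(low + spikes)
```

The recovered sparse component isolates the outlier entries, which is the detection step; the low-rank component plays the role of the "clean" data used for parameter estimation.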

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 304
18880 Displacement Due to Natural Disasters Vis-à-Vis Policy Framework: Case Study of Mising Community of Majuli, Assam

Authors: Mausumi Chetia

Abstract:

One of the main causes of impoverishment in the rural areas of Assam has been recurrent floods and riverbank erosion. One of their life-changing consequences is displacement, which results not only in a loss of livelihoods but also has wide-reaching socio-economic and cultural effects. Entire communities, not just families, are being displaced, compelling them to find temporary shelter and begin life from scratch. The role of the state has been highly negligible, with displacement not being perceived as an 'issue' to be addressed. A more holistic approach is therefore needed, one that takes socio-economic, cultural, political, and ecological considerations into account.

Keywords: displacement, policy-framework, human-induced disasters, marginalised communities, India, Assam

Procedia PDF Downloads 271
18879 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker

Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang

Abstract:

The fiber-optic gyroscope in the strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method, combined with discrete wavelet transform (DWT) signal denoising, is implemented to estimate the random processes in the FOG signal. Furthermore, we devise a measurement-based iterative adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with the SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data. Moreover, the enhanced stochastic modeling scheme is employed to tune the process noise covariance matrix and the augmented-state Gauss-Markov process parameters. Finally, the effectiveness of the proposed filter is investigated using data collected under laboratory conditions. The results show improved accuracy compared with the conventional Kalman filter (CKF).
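The measurement-noise adaptation at the core of a Sage-Husa filter can be sketched in scalar form. This is a simplified illustration, not the paper's augmented-state nonlinear filter: a random-walk state model, the standard fading-memory weight d_k = (1 - b) / (1 - b^(k+1)), and all noise values below are assumptions for the sketch.

```python
import numpy as np

def sage_husa_1d(z, q=1e-3, r0=1.0, b=0.98):
    """Scalar Kalman filter tracking a slowly varying signal, with the
    Sage-Husa recursion adapting the measurement-noise variance r online."""
    x, p, r = z[0], 1.0, r0
    xs, rs = [], []
    for k, zk in enumerate(z[1:], start=1):
        p = p + q                                  # time update (random walk)
        e = zk - x                                 # innovation
        d = (1.0 - b) / (1.0 - b ** (k + 1))       # fading-memory weight
        # Sage-Husa estimate of r from the innovation sequence
        r = max((1.0 - d) * r + d * (e * e - p), 1e-12)
        kgain = p / (p + r)                        # measurement update
        x = x + kgain * e
        p = (1.0 - kgain) * p
        xs.append(x); rs.append(r)
    return np.array(xs), np.array(rs)

# Simulated constant signal (value 5.0) with measurement noise variance 4.0,
# while the filter is started with the wrong guess r0 = 1.0
rng = np.random.default_rng(1)
z = 5.0 + 2.0 * rng.standard_normal(3000)
xs, rs = sage_husa_1d(z)
```

The adapted r drifts from the wrong initial guess toward the true measurement-noise variance, which is the behavior the paper exploits to keep the integrated FOG/star-tracker filter consistent as sensor conditions change.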

Keywords: inertial navigation, adaptive filtering, star tracker, FOG

Procedia PDF Downloads 78
18878 Comfort in Green: Thermal Performance and Comfort Analysis of Sky Garden, SM City, North EDSA, Philippines

Authors: Raul Chavez Jr.

Abstract:

The body of knowledge on green roofs appears to be in its infancy in the Philippines. To contribute to its development, this study addresses the question: does an existing green roof in Metro Manila perform well in providing thermal comfort and satisfaction to users? Specifically, the study covers the thermal sensation and satisfaction of users, surface temperature comparisons, a comparison of weather data from the site (Sky Garden) and the local weather station (PAG-ASA), and the roof's thermal resistance capacity. The researcher conducted a point-in-time survey in parallel with weather data gathering from PAG-ASA and the Sky Garden. Ambient and surface temperatures were measured with a digital anemometer (with humidity and temperature functions) and a non-contact infrared thermometer, respectively. Furthermore, to determine the Sky Garden's overall thermal resistance, the materials found on site were identified and tabulated by location. The survey revealed that the Sky Garden can be considered comfortable based on the PMV-PPD model of ASHRAE Standard 55, with similar results from the thermal comfort and thermal satisfaction surveys; this is contrary to the actual condition of the Sky Garden as plotted on a psychrometric chart, which falls outside the contextualized comfort zone. In addition, the ground floor benefited the most, with a lower average ambient temperature and humidity than the Sky Garden. Lastly, the surface temperature data indicate that the green roof portion recorded the highest average temperature yet performed well in terms of heat resistance compared with the other locations. These results provide valuable baseline information on the actual performance of a green roof in Metro Manila that could be vital for locally enhancing the system and for future studies.

Keywords: green roof, thermal analysis, thermal comfort, thermal performance

Procedia PDF Downloads 163
18877 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Diabetes in India is growing at an alarming rate, and the complications it causes need to be controlled. Coronary heart disease (CHD) is the complication whose prediction is addressed in this study. India has the second-largest number of diabetes patients in the world, and to the best of our knowledge there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 patients was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. The collected variables include sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of CHD. Predictive risk scores for CHD events were designed by Cox proportional hazards regression. Model calibration and discrimination were assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model were checked by applying regularization techniques, with the best method selected among ridge, lasso, and elastic net regression. Youden's index was used to choose the optimal cut-off point for the scores. The five-year probability of CHD was predicted by both the survival function and a two-state Markov chain model, and the better technique identified. The risk scores for CHD developed here can be calculated by doctors and patients for self-management of diabetes.
Furthermore, the five-year probabilities can likewise be used to forecast and monitor the condition of patients.
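The Youden's index step mentioned above selects the threshold maximizing J = sensitivity + specificity − 1 over the ROC curve. A minimal self-contained sketch follows; the scores and labels are synthetic, not study data.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate cut-off (predict positive if
    score >= threshold)."""
    thresholds = np.unique(scores)
    best_j, best_t = -1.0, thresholds[0]
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Synthetic, perfectly separable risk scores for illustration
scores = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 1])
best_t, best_j = youden_cutoff(scores, labels)
```

On perfectly separable data, J reaches its maximum of 1 at the boundary between the two classes; on real risk scores the maximum is lower and the chosen threshold trades sensitivity against specificity.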

Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 216
18876 Maximum Entropy Based Image Segmentation of Human Skin Lesion

Authors: Sheema Shuja Khattak, Gule Saman, Imran Khan, Abdus Salam

Abstract:

Image segmentation plays an important role in medical imaging applications; accurate methods are therefore needed for the successful segmentation of medical images for the diagnosis and detection of various diseases. In this paper, we use maximum entropy to achieve image segmentation, with the maximum entropy calculated using the Shannon, Renyi, and Tsallis entropies. The novelty of this work lies in its application to the detection of skin lesions caused by the bite of the sand fly, which transmits the parasite responsible for the disease cutaneous leishmaniasis.
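Maximum-entropy thresholding in the Shannon case can be sketched as follows (Kapur-style: choose the threshold maximizing the summed entropies of the foreground and background histograms). The Renyi and Tsallis variants used in the paper replace the entropy formula; the synthetic two-level image below is illustrative.

```python
import numpy as np

def kapur_threshold(image, bins=256):
    """Shannon maximum-entropy threshold: maximize H(background) + H(foreground)
    over all candidate gray-level thresholds."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue                       # one class empty: skip
        p0, p1 = p[:t] / w0, p[t:] / w1    # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Synthetic bimodal "lesion" image: dark background (50), bright region (200)
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = kapur_threshold(img)
```

The returned threshold separates the two gray-level populations; pixels at or above it would be labeled as the lesion region.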

Keywords: Shannon entropy, maximum entropy, Renyi entropy, Tsallis entropy

Procedia PDF Downloads 457
18875 A Case for Ethics Practice under the Revised ISO 14001:2015

Authors: Reuben Govender, M. L. Woermann

Abstract:

The ISO 14001 management system standard was first published in 1996. It is a voluntary standard adopted by both private- and public-sector organizations globally. Adoption of the ISO 14001 standard at the corporate level helps businesses manage their impacts on the environment, e.g., pollution control. The International Organization for Standardization (ISO) revised the standard in 2004 and again in 2015. The current revision appears to adopt a communitarian-type philosophy, implied by the inclusion of requirements to consider the needs and expectations of external 'interested parties'. At the operational level, businesses implementing ISO 14001 will therefore have to consider needs and expectations beyond local laws; should these external needs and expectations be included in the scope of the environmental management system, they become requirements to be complied with in much the same way as compliance with laws. The authors assert that the recent changes to ISO 14001 introduce an ethical dimension to the standard, and argue that business ethics as a discipline now finds relevance in ISO 14001 via contemporary stakeholder theory and discourse ethics. Finally, the authors postulate the implications of (not) addressing these requirements before July 2018, when the transition to the revised standard must be complete globally.

Keywords: business ethics, environmental ethics, ethics practice, ISO 14001:2015

Procedia PDF Downloads 256
18874 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to generate electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. In this study, two cyclone design methods were explored: the classic empirical method (smaller cyclone) and the flow characteristics method (larger cyclone). These designs were optimized for efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ of syngas. The two designs were then compared at room temperature using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature and can be expected to achieve slightly higher efficiencies at elevated temperatures. Both design methods nevertheless led to good designs: at room temperature, the difference in efficiency between the two cyclones was almost negligible. At higher temperatures, however, these tendencies are expected to be amplified, making the difference between the two design methods more pronounced.
Although the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system because of its robust nature.
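The abstract does not spell out the two design methods, but classic empirical cyclone sizing is usually anchored on the Lapple cut-diameter correlation and its grade-efficiency curve, sketched below as a reference point. All gas and particle properties here are typical textbook values, not the operating data of the Necsa system.

```python
import math

def lapple_cut_diameter(mu, inlet_width, n_turns, v_in, rho_p, rho_g):
    """Lapple cut size d50 (m): particle diameter collected with 50% efficiency.
    mu: gas viscosity (Pa*s), inlet_width (m), n_turns: effective gas turns,
    v_in: inlet velocity (m/s), rho_p/rho_g: particle/gas density (kg/m^3)."""
    return math.sqrt(9.0 * mu * inlet_width /
                     (2.0 * math.pi * n_turns * v_in * (rho_p - rho_g)))

def lapple_efficiency(d_particle, d50):
    """Lapple grade efficiency for a particle of diameter d_particle (m)."""
    return 1.0 / (1.0 + (d50 / d_particle) ** 2)

# Illustrative conditions: air-like gas, mineral ash particles
d50 = lapple_cut_diameter(mu=1.8e-5, inlet_width=0.1, n_turns=6,
                          v_in=15.0, rho_p=2000.0, rho_g=1.2)
eff_10um = lapple_efficiency(10e-6, d50)
```

The grade-efficiency curve rises with particle size, which is why the 10 μm lower bound of the ash size range governs whether the 90% removal target is met.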

Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing

Procedia PDF Downloads 207
18873 BTEX (Benzene, Toluene, Ethylbenzene and Xylene) Degradation by Cold Plasma

Authors: Anelise Leal Vieira Cubas, Marina de Medeiros Machado, Marília de Medeiros Machado

Abstract:

The volatile organic compounds BTEX (benzene, toluene, ethylbenzene, and xylene), petroleum derivatives, are highly toxic, with potential consequences for human health, biota, and the environment. This paper therefore proposes a method for treating these compounds using corona discharge plasma technology. The efficiency of the method was tested by analyzing BTEX samples by gas chromatography after they passed through the plasma reactor. The results show that the optimal residence time of the sample in the reactor was 8 minutes.
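The reported 8-minute optimum ties reactor sizing to gas flow through the usual residence-time relation t = V/Q. The sketch below applies that relation; the reactor volume is a hypothetical value, as the abstract does not give one.

```python
def residence_time_min(volume_l, flow_l_per_min):
    # mean gas residence time in the reactor, assuming plug flow
    return volume_l / flow_l_per_min

def required_flow(volume_l, target_minutes):
    # flow rate that yields the target residence time
    return volume_l / target_minutes

# Hypothetical 2 L reactor sized for the reported 8 min optimum
flow = required_flow(volume_l=2.0, target_minutes=8.0)
```

For a 2 L reactor, hitting the 8-minute optimum would mean throttling the feed to 0.25 L/min; scaling throughput at fixed residence time requires scaling the reactor volume proportionally.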

Keywords: BTEX, degradation, cold plasma, ecological sciences

Procedia PDF Downloads 312
18872 Application of a Geomechanical Model to Justify the Exploitation of Bazhenov-Abalak Formation, Western Siberia

Authors: Yan Yusupov, Aleksandra Soldatova, Yaroslav Zaglyadin

Abstract:

The object of this work is the Bazhenov-Abalak unconventional formation (BAUF) of Western Siberia. On the basis of a geomechanical model (GMM), a methodology was developed for identifying sweet-spot intervals and zones for drilling horizontal wells with hydraulic fracturing. Based on mechanical rock typification, eight mechanical rock types (MRTs) were identified. Sweet-spot intervals are represented by the siliceous-carbonate (2), siliceous (5), and carbonate (8) MRTs, which have the greatest brittleness index (BRIT). A correlation has been established between the thickness of brittle intervals and the initial well production rates, which makes it possible to identify sweet-spot zones for drilling horizontal wells with hydraulic fracturing. Brittle and ductile intervals are separated by a BRIT cut-off of 0.4, since wells located at points with BRIT < 0.4 have insignificant rates (less than 2 m³/day), whereas wells with an average BRIT in the BAUF of more than 0.4 reach industrial production rates. A further application of the GMM concerns the instability of the overburdened clay formation above the top of the BAUF: according to the wellbore stability analysis, the recommended mud weight for this formation must be no less than 1.53-1.55 g/cc. The optimal direction for horizontal wells corresponds to the azimuth of Shmin, equal to 70-80°.
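The BRIT screening described above reduces to two simple log calculations: the cumulative thickness of brittle intervals and the average BRIT per well against the 0.4 cut-off. The sketch below implements that screening; the log values and sampling step are illustrative, not data from the field study.

```python
import numpy as np

def brittle_stats(brit_log, step_m, cutoff=0.4):
    """Cumulative brittle thickness (m) and mean BRIT for a uniformly
    sampled brittleness-index log."""
    brit_log = np.asarray(brit_log, dtype=float)
    thickness = float(np.count_nonzero(brit_log >= cutoff)) * step_m
    return thickness, float(brit_log.mean())

def is_sweet_spot(brit_log, cutoff=0.4):
    # the paper's screening rule: wells with average BRIT > 0.4 reach
    # industrial production rates
    return float(np.mean(brit_log)) > cutoff

# Illustrative BRIT log sampled every 0.5 m
thick, mean_brit = brittle_stats([0.2, 0.55, 0.62, 0.35, 0.48], step_m=0.5)
```

Ranking candidate well locations by cumulative brittle thickness then follows directly, since the paper correlates that thickness with initial production rates.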

Keywords: unconventional reservoirs, geomechanics, sweet spot zones, borehole stability

Procedia PDF Downloads 63
18871 Investigation of Leakage, Cracking and Warpage Issues Observed on Composite Valve Cover in Development Phase through FEA Simulation

Authors: Ashwini Shripatwar, Mayur Biyani, Nikhil Rao, Rajendra Bodake, Sachin Sane

Abstract:

This paper documents the correlation of finite element models of valve cover sealing, cracking, and warpage with observations from engine development testing. The valve cover is a component mounted on the engine head with a gasket, providing a seal against the oil that flows around the camshaft, valves, rockers, and other overhead components. Material nonlinearity and contact nonlinearity are taken into consideration because the valve cover is made of a composite material with temperature-dependent elastic-plastic properties and because the gasket load-deformation curve is also nonlinear. Leakage is observed between the valve cover and the engine head where the contact pressure is insufficient. Cracking is observed on the valve cover where force is applied in a region of insufficient stiffness at elevated temperature. Valve cover shrinkage is observed at the hot exhaust-side bolt holes during disassembly after the engine has been running. In this paper, an analytical approach is developed to correlate a finite element model with the observed failures and to address the design issues associated with these failure modes by making design changes in the model.
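The sealing check implied above — comparing local contact pressure against a nonlinear gasket load-deformation curve — can be sketched with a simple interpolation. The gasket curve points and the minimum sealing pressure below are hypothetical placeholders, not data from the paper or any gasket datasheet.

```python
import numpy as np

# Hypothetical gasket loading curve: closure (mm) vs contact pressure (MPa),
# stiffening nonlinearly as the gasket compresses
closure_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
pressure_mpa = np.array([0.0, 0.8, 2.5, 5.5, 9.0])

def gasket_pressure(closure):
    # piecewise-linear interpolation of the load-deformation curve
    return float(np.interp(closure, closure_mm, pressure_mpa))

def seals(closure, min_pressure=2.0):
    # leakage is predicted wherever local contact pressure falls below a
    # minimum sealing pressure (hypothetical threshold)
    return gasket_pressure(closure) >= min_pressure

# Two locations along the gasket line with different local closures
checks = [seals(c) for c in (0.05, 0.25)]
```

In an FE workflow the closures would come from the assembled model at each gasket node, so the same lookup flags the under-compressed regions where leakage was observed in testing.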

Keywords: cracking issue, gasket sealing analysis, nonlinearity of contact and material, valve cover

Procedia PDF Downloads 136