Search results for: Neural Processing Element (NPE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7894


934 Ecological Ice Hockey Butterfly Motion Assessment Using Inertial Measurement Unit Capture System

Authors: Y. Zhang, J. Perez, S. Marnier

Abstract:

To the authors' best knowledge, no study on goaltending butterfly motion has been completed in real conditions, during an ice hockey game or training practice. This motion, performed to make a save, is unnatural, intense, and repeated. The aim of this research activity is to identify representative biomechanical criteria for this goaltender-specific movement pattern. Determining specific physical parameters may make it possible to identify the risk of hip and groin injuries sustained by goaltenders. Four professional or academic goalies were instrumented during ice hockey training practices with five inertial measurement units. These devices were inserted in dedicated pockets located on each thigh and shank, with the fifth on the lumbar spine. A camera was also installed close to the ice to observe and record the goaltenders' activities, especially the butterfly motions, in order to synchronize the captured data with the behavior of the goaltender. Each recording began with a calibration of the inertial units and a calibration of the fully equipped goaltender on the ice. Three butterfly motions were recorded outside of training practice to define reference butterfly motions for each individual. Then, a data processing algorithm based on the Madgwick filter computed hip and knee joint ranges of motion as well as specific angular velocities. The developed software automatically identified and analyzed all the butterfly motions executed by the four goaltenders. To date, it is still too early to show that the analyzed criteria are representative of the trauma generated by the butterfly motion, as the research is only at its beginning. However, this descriptive research activity is promising in its ecological assessment, and once the criteria are found, the tools and protocols defined will allow the prevention of as many injuries as possible. It will thus be possible to build a specific training program for each goalie.
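The joint-angle computation is not detailed in the abstract; as a purely illustrative sketch (function names and values are hypothetical, and this is not the authors' Madgwick implementation), a joint angle can be derived from two segment orientation quaternions of the kind a Madgwick-type filter outputs:

```python
import math

def q_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def joint_angle_deg(q_thigh, q_shank):
    """Angle (degrees) of the relative rotation between two segment
    orientations, e.g. a knee flexion angle from thigh and shank IMUs."""
    w = q_mul(q_conj(q_thigh), q_shank)[0]
    w = max(-1.0, min(1.0, w))  # guard against floating-point rounding
    return math.degrees(2.0 * math.acos(abs(w)))
```

With identical orientations the angle is zero; a 90° rotation of the shank relative to the thigh yields 90°.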

Keywords: biomechanics, butterfly motion, human motion analysis, ice hockey, inertial measurement unit

Procedia PDF Downloads 116
933 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil

Authors: Yasser R. Tawfic, Mohamed A. Eid

Abstract:

Foundation differential settlement and the resulting tilting of supported structures is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by extracting soil from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view, presenting micro-tunneling as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce only limited ground deformations. Thus, the researchers propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
•Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
•For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
•If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
•Finally, strengthening soil grouting shall be applied for stabilization after adjustment.
A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and of the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
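The finite element model itself is not reproduced in the abstract, but a common closed-form first check for tunnelling-induced surface settlement is Peck's Gaussian trough. The sketch below is illustrative only, with hypothetical parameter values; it is not the authors' numerical model:

```python
import math

def settlement_trough(x, volume_loss, diameter, i):
    """Peck's Gaussian surface settlement trough above a tunnel.

    x: horizontal offset from the tunnel axis (m)
    volume_loss: ground loss as a fraction of the excavated area (e.g. 0.01)
    diameter: tunnel diameter (m)
    i: trough width parameter (m)
    Returns surface settlement (m, positive downward).
    """
    # Lost ground volume per metre of tunnel advance
    v_s = volume_loss * math.pi * diameter**2 / 4.0
    # Maximum settlement, at the tunnel centreline
    s_max = v_s / (i * math.sqrt(2.0 * math.pi))
    return s_max * math.exp(-x**2 / (2.0 * i**2))
```

The trough is symmetric about the tunnel axis and decays away from it, which is why pulling out the lining under the raised side concentrates the balancing settlement there.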

Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures

Procedia PDF Downloads 194
932 The Advantages of Using DNA-Barcoding for Determining Fraud in Seafood

Authors: Elif Tugce Aksun Tumerkan

Abstract:

Although seafood is an important part of the human diet and is among the most highly traded food commodities internationally, it generally remains overlooked in the global food security context. Food product authentication aims both to avoid commercial fraud and to address risks that might harm human health. In recent years, with increasing consumer demand for transparency regarding food content, instrumental analyses for detecting food fraud have emerged based on analytical methodologies such as proteomics and metabolomics. While fish and seafood were previously consumed fresh, with advances in technology the consumption of processed or packaged seafood has increased. After processing or packaging, morphological identification is impossible because some of the external features have been removed. The main quality-related issues for fish and seafood are the authentication of contents, such as mislabelled products, which may be contaminated, and partial or complete replacement by lower-quality or cheaper species. For all these reasons, truthful, consistent, and easily applicable analytical methods are needed to assure correct labelling and to verify seafood products. DNA-barcoding methods, which have become popular, robust tools in taxonomic research on endangered or cryptic species in recent years, are also used for food traceability. In this review, it is shown that, compared with proteomic and metabolomic analyses, DNA-based methods allow the identification of all types of food, whether raw, spiced, or processed. This advantage arises because DNA is a comparatively more stable molecule than proteins and other biomolecules. Furthermore, the facts that DNA sequences vary between species and that DNA is found in all organisms make DNA-based analysis preferable.
This review was performed to clarify the main advantages of using DNA-barcoding for determining seafood fraud relative to other techniques.
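The core of barcode-based identification is sequence comparison against a reference library. The sketch below is a hypothetical minimal illustration (the species labels, sequences, and the ~98% identity threshold are illustrative assumptions, not from the review):

```python
def percent_identity(query, reference):
    """Percent identity between two aligned, equal-length barcode sequences."""
    if len(query) != len(reference):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(q == r for q, r in zip(query.upper(), reference.upper()))
    return 100.0 * matches / len(query)

def assign_species(query, references, threshold=98.0):
    """Return the best-matching reference label if identity meets the
    threshold (a ~2% divergence cut-off is often used for COI barcodes),
    otherwise None."""
    best_label, best_score = None, 0.0
    for label, seq in references.items():
        score = percent_identity(query, seq)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None
```

In practice, identification runs against curated databases rather than a hand-built dictionary, but the match-and-threshold logic is the same.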

Keywords: DNA-barcoding, genetic analysis, food fraud, mislabelling, packaged seafood

Procedia PDF Downloads 152
931 Development of a Passive Solar Tomato Dryer with Movable Heat Storage System

Authors: Jacob T. Liberty, Wilfred I. Okonkwo

Abstract:

The present study designed and constructed a post-harvest passive solar tomato dryer of dimensions 176 × 152 × 54 cm. The quality of the dried crop was evaluated and compared with that of the fresh crop. The solar dryer consists of a solar collector (air heater), 110 × 61 × 10 × 10 cm; the drying chamber, 102 × 54 cm; a removable heat storage unit, 40 × 35 × 13 cm; and drying trays, 43 × 42 cm. The physicochemical properties of the crop were evaluated before and after drying, including moisture, protein, fat, fibre, ash, carbohydrate, and vitamin C contents. The fresh, open sun dried, and solar dried samples were analysed for their proximate composition using the recommended methods of the AOAC. Statistical analysis of the data was conducted by analysis of variance (ANOVA) under a Completely Randomized Design (CRD), and means were separated by Duncan's New Multiple Range Test (DNMRT). Proximate analysis showed that solar dried tomato had significantly (P < 0.05) higher protein, fibre, ash, carbohydrate, and vitamin C contents, except for the fat content, which was significantly (P < 0.05) higher for the open sun dried samples than for the solar dried and fresh products. The nutrient most affected by sun drying is vitamin C. Results indicate that moisture loss in solar dried tomato was faster, and the final moisture content lower, than in the open sun dried samples, giving the solar dried products a lower tendency toward mould and bacterial growth. Also, the open sun dried samples had to be carried into a sheltered place each time it rained. The solar dried produce is of high quality. Further processing of the dried crops will involve packaging for commercial purposes. This will also help make these agricultural products available at a relatively low price in the off season and avert micronutrient deficiencies in diets, especially among low-income groups in Nigeria.
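The ANOVA step can be illustrated with a minimal computation of the one-way F statistic; this is a generic sketch with made-up numbers, not the study's data, and mean separation by DNMRT is not shown:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA under a completely randomized design.

    groups: list of lists of observations, one list per treatment.
    F = (between-group mean square) / (within-group mean square).
    """
    k = len(groups)                      # number of treatments
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand)**2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g))**2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

A large F relative to the F(k−1, n−k) critical value at the 5% level corresponds to the "significantly (P < 0.05)" statements in the abstract.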

Keywords: tomato, passive solar dryer, physicochemical properties, removable heat storage

Procedia PDF Downloads 294
930 The Influence of Phosphate Fertilizers on Radiological Situation of Cultivated Lands: ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs Concentrations in Soil

Authors: Grzegorz Szaciłowski, Marta Konop, Małgorzata Dymecka, Jakub Ośko

Abstract:

In 1996, the European Council Directive 96/29/EURATOM identified phosphate fertilizers as having a potentially negative influence on the environment from the radiation protection point of view. Fertilizers, along with irrigation and crop rotation, were the milestones that allowed agricultural productivity to increase. First based on natural materials such as compost, manure, and fish processing waste, and produced synthetically since the 19th century, fertilizers caused a boom in crop yield and helped to propel global food production, especially after World War II. In this work, the concentrations of ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K, and ¹³⁷Cs in selected fertilizers and soil samples were determined. The results were used to calculate the annual addition of natural radionuclides and the increment of external radiation exposure caused by the use of the studied fertilizers. Soils intended for different types of crops were sampled in early spring, before any vegetation had emerged. The analysed fertilizers were those with which the soil had previously been fertilized. For gamma-emitting radionuclides, a high-purity germanium detector GX3520 from Canberra was used. The polonium concentration was determined by radiochemical separation followed by measurement by means of alpha spectrometry; the spectrometer used in this study was equipped with a 450 mm² PIPS detector from Canberra. The obtained results showed significant differences in radionuclide composition between phosphate and nitrogenous fertilizers (e.g., the radium equivalent activity for a phosphate fertilizer was 207.7 Bq/kg, compared with <5.6 Bq/kg for a nitrogenous fertilizer). The calculated increase in external radiation exposure due to the use of phosphate fertilizer ranged between 3.4 and 5.4 nGy/h, which represents up to 10% of the Polish average outdoor exposure due to terrestrial gamma radiation (45 nGy/h).
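The radium equivalent activity quoted above follows a standard definition, and outdoor dose rates are conventionally estimated from activity concentrations with published conversion coefficients. A minimal sketch (the coefficients are the commonly published UNSCEAR-style values; the inputs are hypothetical, not the study's measurements):

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg) from ²²⁶Ra, ²³²Th, and ⁴⁰K
    activity concentrations: Raeq = C_Ra + 1.43*C_Th + 0.077*C_K."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def absorbed_dose_rate(c_ra, c_th, c_k):
    """Outdoor absorbed gamma dose rate in air, 1 m above ground (nGy/h),
    using the widely used UNSCEAR conversion coefficients."""
    return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k
```

The same arithmetic lets a reported Raeq of 207.7 Bq/kg be traced back to the individual radionuclide concentrations.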

Keywords: ²¹⁰Po, alpha spectrometry, exposure, gamma spectrometry, phosphate fertilizer, soil

Procedia PDF Downloads 285
929 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era

Authors: Anette Alén

Abstract:

Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific permitted uses of protected subject-matter without the right-holder's consent, exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of 'balance' has gained ground in the conceptualization of these conflicting elements, both in IP law and in related policy. This can be seen, for example, in the European Union (EU) copyright regime, where 'balance' has become a key element of argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of 'balance' in EU copyright law: the research task is to examine the contents of the concept of 'balance' and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of 'balance' in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways 'balance' is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is analyzed. Doctrinal study of law is also employed in the analysis of legal sources.
This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the 'balancing act' rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to a genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of 'balance', is required to readjust copyright law and regulations for the digital era. Notwithstanding the focus on the EU copyright regime, the results bear wider significance for the digital economy, especially given the platform liability regime in the DSM Directive and the AI Act's objective of a 'level playing field', whereby compliance with EU copyright rules seems to be expected of system providers.

Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence

Procedia PDF Downloads 19
928 The Impact of Acoustic Performance on Neurodiverse Students in K-12 Learning Spaces

Authors: Michael Lekan-Kehinde, Abimbola Asojo, Bonnie Sanborn

Abstract:

Good acoustic performance has been identified by the National Research Council as one of the critical Indoor Environmental Quality (IEQ) factors for student learning and development. Childhood presents the opportunity for children to develop lifelong skills that will support them throughout their adult lives. The acoustic performance of a space has been identified as a factor that can impact language acquisition, concentration, information retention, and general comfort within the environment. Increasingly, students learn through communication with both teachers and fellow students, making speaking and listening crucial. Neurodiversity - while initially coined to describe individuals with autism spectrum disorder (ASD) - broadly describes anyone whose brain processes information differently. As understanding from the cognitive sciences and neurosciences increases, the share of people identified as neurodiverse approaches 30% of the population. This research reviews guidelines and standards for spaces with good acoustical quality and relates them to the experiences of students with autism spectrum disorder (ASD), their parents, teachers, and educators through a mixed-methods approach, including selected case studies, interviews, and mixed surveys. The information obtained from these sources is used to determine whether selected materials, especially their sound absorption and reverberation reduction properties, are equally useful in small, medium-sized, and large learning spaces. The results describe the potential impact of acoustics on neurodiverse students, considering factors that determine the complexity of sound in relation to the auditory processing capabilities of students with ASD. In conclusion, this research extends knowledge of how material selection influences the development of better acoustical environments for students with autism.
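The link between material absorption and reverberation that the abstract refers to can be illustrated with Sabine's classic formula; this is a generic acoustics sketch with hypothetical room values, not part of the study's methodology:

```python
def reverberation_time(volume_m3, surfaces):
    """Sabine reverberation time RT60 = 0.161 * V / A (seconds), where A is
    the total absorption in metric sabins.

    surfaces: list of (area_m2, absorption_coefficient) pairs describing
    the room's finish materials.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption
```

Swapping a hard finish for a more absorptive one raises A and shortens RT60, which is the mechanism by which material selection can lower the auditory load on neurodiverse students.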

Keywords: acoustics, autism spectrum disorder (ASD), children, education, learning, learning spaces, materials, neurodiversity, sound

Procedia PDF Downloads 93
927 Study of the Impact of Synthesis Method and Chemical Composition on Photocatalytic Properties of Cobalt Ferrite Catalysts

Authors: Katerina Zaharieva, Vicente Rives, Martin Tsvetkov, Raquel Trujillano, Boris Kunev, Ivan Mitov, Maria Milanova, Zara Cherkezova-Zheleva

Abstract:

The nanostructured cobalt ferrite-type materials Sample A (Co0.25Fe2.75O4), Sample B (Co0.5Fe2.5O4), and Sample C (CoFe2O4) were prepared by co-precipitation in our previous investigations. The co-precipitated Sample B and Sample C were mechanochemically activated in order to produce Sample D (Co0.5Fe2.5O4) and Sample E (CoFe2O4). PXRD, Moessbauer and FTIR spectroscopies, specific surface area determination by the BET method, thermal analysis, elemental chemical analysis, and temperature-programmed reduction were used to investigate the prepared nano-sized samples. The changes in Malachite Green dye concentration during the photocatalytic decolorization reaction over nanostructured cobalt ferrite-type catalysts of different chemical composition are reported. The photocatalytic results show that the increase in the degree of incorporation of cobalt ions into the magnetite host structure for the co-precipitated cobalt ferrite-type samples results in an increase of the photocatalytic activity: Sample A (4 ×10⁻³ min⁻¹) < Sample B (5 ×10⁻³ min⁻¹) < Sample C (7 ×10⁻³ min⁻¹). The mechanochemically activated photocatalysts showed a higher activity than the co-precipitated ferrite materials: Sample D (16 ×10⁻³ min⁻¹) > Sample E (14 ×10⁻³ min⁻¹) > Sample C (7 ×10⁻³ min⁻¹) > Sample B (5 ×10⁻³ min⁻¹) > Sample A (4 ×10⁻³ min⁻¹). On decreasing the degree of substitution of iron ions by cobalt ions, a higher sorption of the dye after the dark period was observed for the co-precipitated cobalt ferrite materials: Sample C (72%) < Sample B (78%) < Sample A (80%). The mechanochemically treated ferrite catalysts and co-precipitated Sample B possess similar sorption capacities: Sample D (78%) ~ Sample E (78%) ~ Sample B (78%). The prepared nano-sized cobalt ferrite-type materials demonstrate good photocatalytic and sorption properties. 
The mechanochemically activated Sample D (16 ×10⁻³ min⁻¹) and Sample E (14 ×10⁻³ min⁻¹) possess higher photocatalytic activity than the most commonly used UV-light catalyst, Degussa P25 (12 ×10⁻³ min⁻¹). The dependence of the photocatalytic activity and sorption properties on the preparation method and on the degree of substitution of iron ions by cobalt ions in the synthesized cobalt ferrite samples is established. Mechanochemical activation leads to the formation of nano-structured cobalt ferrite-type catalysts (Sample D and Sample E) with higher rate constants than those of the ferrite materials prepared by the co-precipitation procedure (Sample A, Sample B, and Sample C). The increase in the degree of substitution of iron ions by cobalt ions leads to improved photocatalytic properties and lower sorption capacities of the co-precipitated ferrite samples. The good sorption properties, between 72 and 80%, of the prepared ferrite-type materials show that they could be used as potential cheap absorbents for the purification of polluted waters.
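Rate constants of the kind quoted above (in min⁻¹) are typically obtained by fitting photocatalytic decolorization to pseudo-first-order kinetics, ln(C₀/Cₜ) = k·t. The sketch below is a generic illustration with synthetic data, not the study's measurements:

```python
import math

def first_order_rate_constant(times, concentrations):
    """Least-squares slope of ln(C0/Ct) versus t for pseudo-first-order
    kinetics; returns the apparent rate constant k."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(y) / n
    num = sum((t - mean_t) * (yi - mean_y) for t, yi in zip(times, y))
    den = sum((t - mean_t)**2 for t in times)
    return num / den
```

Applied to measured dye concentrations over irradiation time, the fitted slope is directly comparable across catalysts, which is how orderings such as Sample D > Sample E > Sample C are established.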

Keywords: nanodimensional cobalt ferrites, photocatalyst, synthesis, mechanochemical activation

Procedia PDF Downloads 252
926 Aire-Dependent Transcripts Have Shortened 3'UTRs and Show Greater Stability by Evading MicroRNA-Mediated Repression

Authors: Clotilde Guyon, Nada Jmari, Yen-Chin Li, Jean Denoyel, Noriyuki Fujikado, Christophe Blanchet, David Root, Matthieu Giraud

Abstract:

Aire induces the ectopic expression of a large repertoire of tissue-specific antigen (TSA) genes in thymic medullary epithelial cells (MECs), driving immunological self-tolerance in maturing T cells. Although important mechanisms of Aire-induced transcription have recently been disclosed through the identification and study of Aire's partners, the precise transcriptional functions underpinned by a number of them and conferred on Aire are still unknown. Alternative cleavage and polyadenylation (APA) is an essential mRNA processing step regulated by the termination complex, which consists of 85 proteins, 10 of which have been related to Aire. We evaluated APA in MECs in vivo by microarray analysis with mRNA-spanning probes and by RNA deep sequencing. We uncovered a preference of Aire-dependent transcripts for short-3'UTR isoforms and for proximal poly(A) site selection, marked by increased binding of the cleavage factor Cstf-64. RNA interference of the 10 Aire-related proteins revealed that Clp1, a member of the core termination complex, exerts a profound effect on short 3'UTR isoform preference. Clp1 is also significantly upregulated in MECs compared with 25 mouse tissues, in which we found that TSA expression is associated with longer 3'UTR isoforms. Aire-dependent transcripts escape a global 3'UTR lengthening associated with MEC differentiation, thereby evading the repressive effect of microRNAs that are globally upregulated in mature MECs. Consistent with these findings, RNA deep sequencing of actinomycin D-treated MECs revealed the increased stability of short 3'UTR Aire-induced transcripts, resulting in the accumulation of TSA transcripts and contributing to their enrichment in MECs.

Keywords: Aire, central tolerance, miRNAs, transcription termination

Procedia PDF Downloads 368
925 An Approach to Automate the Modeling of Life Cycle Inventory Data: Case Study on Electrical and Electronic Equipment Products

Authors: Axelle Bertrand, Tom Bauer, Carole Charbuillet, Martin Bonte, Marie Voyer, Nicolas Perry

Abstract:

The complexity of Life Cycle Assessment (LCA) can be identified as the ultimate obstacle to its massification. Because of this obstacle, the diffusion of eco-design and LCA methods in the manufacturing sectors could prove impossible. This article addresses the research question: how can the LCA method be adapted so that it can be generalized massively while improving its performance? This paper aims to develop an approach for automating LCA in order to carry out assessments on a massive scale. To answer this, we proceeded in three steps. First, we analysed the literature to identify existing automation methods; given the constraints of large-scale manual processing, it was necessary to define a new approach, drawing inspiration from certain methods and combining them with new ideas and improvements. Second, our development of automated model construction is presented (reconciliation and implementation of data). Finally, an LCA case study of a conduit is presented to demonstrate the feature-based approach offered by the developed tool. A computerized environment supports effective and efficient decision-making related to materials and processes, facilitating the process of data mapping and hence product modeling. The method is also able to complete the LCA process on its own within minutes; the calculations and the LCA report are generated automatically. The developed tool has shown that automation by code is a viable solution for meeting LCA's massification objectives. It has major advantages over the traditional LCA method and overcomes the complexity of LCA. Indeed, the case study demonstrated the time savings associated with this methodology and, therefore, the opportunity to increase the number of LCA reports generated and to meet regulatory requirements. Moreover, this approach shows the potential of the proposed method for a wide range of applications.

Keywords: automation, EEE, life cycle assessment, life cycle inventory, massively

Procedia PDF Downloads 72
924 Quantification of Global Cerebrovascular Reactivity in the Principal Feeding Arteries of the Human Brain

Authors: Ravinder Kaur

Abstract:

Introduction: Global cerebrovascular reactivity (CVR) mapping is a promising clinical assessment for stress-testing the brain using physiological challenges, such as CO₂, to elicit changes in perfusion. It enables real-time assessment of cerebrovascular integrity and health. Conventional imaging approaches use only steady-state parameters, such as cerebral blood flow (CBF), to evaluate the integrity of the resting parenchyma, and can erroneously show a healthy brain at rest despite underlying pathogenesis in the presence of cerebrovascular disease. Conversely, coupling CO₂ inhalation with phase-contrast MRI interrogates the capacity of the vasculature to respond to changes under stress. It shows promise in providing prognostic value as a novel health marker of neurovascular function in disease and in detecting early brain vasculature dysfunction. Objective: This exploratory study was established to: (a) quantify the CBF response to CO₂ in hypocapnia and hypercapnia, (b) evaluate disparities in CVR between the internal carotid artery (ICA) and the vertebral artery (VA), and (c) assess sex-specific variation in CVR. Methodology: Phase-contrast MRI was employed to measure cerebrovascular reactivity to CO₂ (±10 mmHg). The respiratory interventions were delivered using the prospective end-tidal targeting RespirAct™ Gen3 system. Post-processing and statistical analysis were then conducted. Results: In 9 young, healthy subjects, CBF increased from hypocapnia to hypercapnia in all vessels (4.21±0.76 to 7.20±1.83 mL/sec in the ICA, 1.36±0.55 to 2.33±1.31 mL/sec in the VA, p < 0.05). CVR was quantitatively higher in the ICA than in the VA (slope of linear regression: 0.23 vs. 0.07 mL/sec/mmHg, p < 0.05). No statistically significant difference in CVR was observed between males and females (0.25 vs. 0.20 mL/sec/mmHg in the ICA, 0.09 vs. 0.11 mL/sec/mmHg in the VA, p > 0.05). Conclusions: The principal finding of this investigation validated the modulation of CBF by CO₂. 
Moreover, it indicated that regional heterogeneity in the hemodynamic response exists in the brain. This study provides scope to standardize the quantification of CVR prior to its clinical translation.
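The CVR values reported above are slopes of CBF against end-tidal CO₂. A minimal least-squares sketch of that computation, with hypothetical inputs rather than the study's subject data:

```python
def cvr_slope(petco2, cbf):
    """Cerebrovascular reactivity as the least-squares slope of CBF (mL/s)
    against end-tidal CO2 (mmHg)."""
    n = len(petco2)
    mean_x = sum(petco2) / n
    mean_y = sum(cbf) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(petco2, cbf))
    den = sum((x - mean_x)**2 for x in petco2)
    return num / den
```

With only two capnic states the slope reduces to ΔCBF/ΔPetCO₂; with more breathing levels, the regression uses all points.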

Keywords: cerebrovascular disease, neuroimaging, phase contrast MRI, cerebrovascular reactivity, carbon dioxide

Procedia PDF Downloads 132
923 Traumatic Chiasmal Syndrome Following Traumatic Brain Injury

Authors: Jiping Cai, Ningzhi Wangyang, Jun Shao

Abstract:

Traumatic brain injury (TBI) is one of the major causes of morbidity and mortality and leads to structural and functional damage in several parts of the brain, such as the cranial nerves, the optic nerve tract or other circuitry involved in vision, and the occipital lobe, depending on its location and severity. As a result, functions associated with vision processing and perception are significantly affected, causing blurred vision, double vision, decreased peripheral vision, and blindness. Here, two cases presenting with complaints of monocular vision loss (actually temporal hemianopia) due to traumatic chiasmal syndrome after frontal head injury are reported, and their findings are compared with individual case reports published in the literature. Reported cases of traumatic chiasmal syndrome appear to share some common features, such as injury to the frontal bone and fracture of the anterior skull base. The degree of bitemporal hemianopia and of visual acuity loss varies and is not necessarily related to the severity of the craniocerebral trauma. Chiasmal injury may occur even in the absence of bony chip impingement. Isolated bitemporal hemianopia is rare, and clinical improvement usually does not occur. Mechanisms of damage to the optic chiasm after trauma include direct tearing, contusion haemorrhage, and contusion necrosis, as well as secondary mechanisms such as cell death, inflammation, edema, impaired neurogenesis, and axonal damage associated with TBI. Besides visual field testing, MRI evaluation of the optic pathways seems to provide the strongest objective evidence of impairment of the integrity of the visual system following TBI. Therefore, traumatic chiasmal syndrome should be considered as a differential diagnosis by both neurosurgeons and ophthalmologists in patients presenting with visual impairment, especially bitemporal hemianopia, after a head injury causing frontal and anterior skull base fractures.

Keywords: bitemporal hemianopia, brain injury, optic chiasma, traumatic chiasmal syndrome

Procedia PDF Downloads 65
922 Assessing the Benefits of Super Depo Sutorejo as a Model of Integration of Waste Pickers in Sustainable City Waste Management

Authors: Yohanes Kambaru Windi, Loetfia Dwi Rahariyani, Dyah Wijayanti, Eko Rustamaji

Abstract:

Surabaya, the second largest city in Indonesia, has struggled for years with waste production and its management. Nearly 11,000 tons of waste are generated daily by domestic, commercial, and industrial areas. Approximately 1,300 tons of waste were estimated to enter the Benowo Landfill daily in 2013, and landfill operation was projected to become critical in 2015. The Super Depo Sutorejo (SDS) is a pilot project on waste management launched by the government of Surabaya in March 2013. The project aims to reduce the amount of waste dumped in the landfill by employing waste pickers to sort recyclable and organic waste for composting before it is transported to the landfill. This study assesses the capacity of SDS to process and reduce waste, together with its complementary benefits. It also reviews the benefits of the project to the waste pickers in terms of job satisfaction. Waste processing data sheets were used to assess the difference between input and output waste. A survey was distributed to 30 waste pickers, and interviews were conducted for further insight into particular issues. The analysis showed that SDS reduces waste by up to 50% before dumping in the final disposal area. A cost-benefit analysis using cost differential calculation revealed that the direct economic benefit is considerably low, but composting may provide tangible benefits for maintaining the city's parks. Waste pickers are mostly satisfied with their jobs (i.e., salary, health coverage, job security) and with the services and facilities available in SDS, and they enjoy a rewarding social life within the project. It is concluded that SDS is an effective and efficient model for sustainable waste management and is suitable for development in other developing countries. It is a strategic approach to empowering and opening up work opportunities for the poor urban community and prolonging the operation of landfills.

Keywords: cost-benefits, integration, satisfaction, waste management

Procedia PDF Downloads 462
921 Viscoelastic Characterization of Gelatin/Cellulose Nanocrystals Aqueous Bionanocomposites

Authors: Liliane Samara Ferreira Leite, Francys Kley Vieira Moreira, Luiz Henrique Capparelli Mattoso

Abstract:

The increasing environmental concern regarding plastic pollution worldwide has stimulated the development of low-cost biodegradable materials. Proteins are renewable feedstocks that can be used to produce biodegradable plastics. Gelatin, for example, is a cheap film-forming protein extracted from animal skin and connective tissues of Brazilian livestock residues; it thus has good potential for low-cost biodegradable plastic production. However, gelatin plastics are limited in terms of mechanical and barrier properties. Cellulose nanocrystals (CNC) are efficient nanofillers that have been used to extend the physical properties of polymers. This work aimed at evaluating the reinforcing efficiency of CNC in gelatin films. Specifically, we employed continuous casting as the processing method for obtaining the gelatin/CNC bionanocomposites. This required a preliminary rheological study to assess the effect of gelatin-CNC and CNC-CNC interactions on the colloidal state of the aqueous bionanocomposite formulations. CNC were isolated from eucalyptus pulp by sulfuric acid hydrolysis (65 wt%) at 55 °C for 30 min. Gelatin was solubilized in ultra-pure water at 85 °C for 20 min and then mixed with glycerol at 20 wt% and CNC at 0.5 wt%, 1.0 wt%, and 2.5 wt%. Rotational measurements were performed to determine the viscosity (η) of the bionanocomposite solutions, which increased with increasing CNC content. At 2.5 wt% CNC, η increased by 118% relative to the neat gelatin solution, which was ascribed to the formation of a percolating CNC network. Storage modulus (G′) and loss modulus (G″) determined by oscillatory tests further revealed that gel-like behavior was dominant in the bionanocomposite solutions (G′ > G″) over a broad temperature range (20-85 °C), particularly at 2.5 wt% CNC. These results confirm effective interactions in the aqueous gelatin-CNC bionanocomposites that could substantially enhance the physical properties of gelatin plastics.
Tensile tests are underway to confirm this hypothesis. The authors would like to thank FAPESP (process no. 2016/03080-3) for support.

Keywords: bionanocomposites, cellulose nanocrystals, gelatin, viscoelastic characterization

Procedia PDF Downloads 141
920 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are formed by reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. This paper proposes image fusion algorithms based on a multi-scale transform (MST) and a region-based selection rule with consistency verification. The work includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our fusion approaches, we observed a recurring trade-off in popular image fusion methods: algorithms with high computational cost and complex processing steps produce accurate fused results but are hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper offer good results with minimal time complexity.
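The paper's MATLAB implementation is not reproduced here. As a hedged illustration of the general idea it describes (multi-scale decomposition plus a coefficient fusion rule), the following Python sketch fuses two registered grayscale images using an undecimated pyramid and a max-absolute-value rule for detail coefficients. The function names and the undecimated decomposition are assumptions for illustration; the paper's actual MST and region-based consistency verification are omitted.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial filter (a Gaussian approximation),
    # with reflective padding so the output keeps the input shape.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    padded = np.pad(img, 2, mode="reflect")
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")

def _decompose(img, levels):
    # Undecimated multi-scale decomposition: detail layers plus a base.
    details, current = [], img.astype(float)
    for _ in range(levels):
        low = _blur(current)
        details.append(current - low)
        current = low
    return details, current

def fuse(visible, thermal, levels=3):
    """Fuse two registered, same-size grayscale images: max-absolute
    rule for the detail layers, averaging for the coarsest base layer."""
    d_a, base_a = _decompose(visible, levels)
    d_b, base_b = _decompose(thermal, levels)
    fused = (base_a + base_b) / 2.0
    for x, y in zip(reversed(d_a), reversed(d_b)):
        fused += np.where(np.abs(x) >= np.abs(y), x, y)
    return fused
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check that the decomposition is lossless.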

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 98
919 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to enable effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain, and they often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems, improving intrusion detection, assessment, and characterization. Most comparable systems can only distinguish two types of intrusion, viz., human or vehicle. Our system categorizes further, identifying types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movement. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a spacing of 15 meters. The signals received from these geophones are processed to extract unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, random forest, logistic regression, recursive feature elimination, chi-squared, and Pearson ratio, were used to identify the best features for training the machine learning models. The models were trained using algorithms such as the supervised support vector machine (SVM) classifier, kNN, decision tree, logistic regression, naïve Bayes, and artificial neural networks. These models were then used to predict the category of events, employing weighted ensemble voting to combine their results. The models were trained on 1,940 training events and evaluated on 831 test events.
Using weighted ensemble voting was observed to increase prediction accuracy. In this study, we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, radiate no energy, and are installed underground, it is very difficult for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment, and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of seismic sensors for building better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. The virtual fence achieved an intruder detection efficiency of over 97%.
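As an illustration of the weighted ensemble voting step (not the authors' code; the model predictions, weights, and class labels below are made up), each model's vote can be weighted, e.g. by its validation accuracy, and the class with the largest total weight wins:

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """Combine per-model class predictions by weighted voting.
    predictions: (n_models, n_samples) integer class labels
    weights: (n_models,) e.g. each model's validation accuracy
    Returns, per sample, the class with the largest total weight."""
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    n_samples = predictions.shape[1]
    scores = np.zeros((n_samples, n_classes))
    for model_pred, w in zip(predictions, weights):
        # Each model adds its weight to the class it voted for.
        scores[np.arange(n_samples), model_pred] += w
    return scores.argmax(axis=1)

# Three hypothetical models classifying 4 events among
# {0: walking, 1: running, 2: vehicle} (illustrative numbers):
preds = [[0, 1, 2, 2],
         [0, 1, 1, 2],
         [1, 1, 2, 0]]
weights = [0.9, 0.8, 0.6]   # e.g. validation accuracies (made up)
print(weighted_vote(preds, weights, 3))  # → [0 1 2 2]
```

On the third event the two more accurate models outvote the third despite disagreeing with it, which is the point of weighting.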

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 64
918 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex

Authors: Kwedi L. M. Nsah, Hisao Uchiki

Abstract:

Quantum computation is a new and exciting field that makes use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is an organic crystal containing a lanthanide ion. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals showed brilliant red luminescence under 405 nm laser excitation. Photoluminescence spectra were recorded at both room and cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570-630 nm, due to deactivation of the 5D0 emissive state to the 7Fj levels. For the application of the dinuclear Eu3+ complex to a q-bit device, attention was focused on the 5D0 → 7F0 transition, around 580 nm. The presence of the 5D0 → 7F0 transition at room temperature revealed that at least one europium site had no inversion center. Since this line is unsplit by the crystal field effect, any multiplicity observed reflects a multiplicity of Eu3+ sites. For a q-bit element, a narrower line width of the 5D0 → 7F0 PL band of the Eu3+ ion is preferable, and cooling to cryogenic temperatures (300-14 K) reduces inhomogeneous broadening and makes it possible to distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurements, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K.
Upon cooling, a red shift of 15 cm-1 in the 5D0 → 7F0 peak position was observed; that is, the line shifted towards lower wavenumber. An emission spectrum in the 5D0 → 7F0 transition region was obtained to verify the line width; at 14 K, the peak magnitude was three times that at room temperature. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strongest in the vicinity of 60-100 K. Thermal quenching was observed above 100 K, where the intensity began to decrease slowly with increasing temperature; this quenching of the Eu3+ emission with increasing temperature is caused by energy migration. Thus, 100 K is an appropriate temperature for observing the 5D0 → 7F0 emission peak. In summary, a europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures; at 100 K, the complex has good thermal stability. Sintering the sample above 600 °C could also be considered, but the Eu3+ ion can then be reduced to Eu2+, which is why cryogenic measurement is preferred over other methods.
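To relate the reported shift to wavelength units, a small arithmetic sketch (values assumed from the abstract: a line near 580 nm and a 15 cm-1 red shift) converts between vacuum wavelength and wavenumber; the shift corresponds to roughly half a nanometre in wavelength.

```python
def nm_to_cm1(wavelength_nm):
    # Vacuum wavelength (nm) to wavenumber (cm^-1): 1 cm = 1e7 nm.
    return 1e7 / wavelength_nm

def cm1_to_nm(wavenumber_cm1):
    # Inverse conversion.
    return 1e7 / wavenumber_cm1

room = nm_to_cm1(580.0)           # 5D0 -> 7F0 line near 580 nm
cold = room - 15.0                # 15 cm^-1 red shift on cooling
print(round(room, 1))             # → 17241.4  (cm^-1)
print(round(cm1_to_nm(cold), 2))  # shifted line, ~580.5 nm
```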

Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine

Procedia PDF Downloads 165
917 HPSEC Application as a New Indicator of Nitrification Occurrence in Water Distribution Systems

Authors: Sina Moradi, Sanly Liu, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Soha Habibi, Rose Amal

Abstract:

In recent years, chloramine has been widely used for both primary and secondary disinfection. However, a major concern with the use of chloramine as a secondary disinfectant is chloramine decay and the occurrence of nitrification. Managing chloramine decay and preventing nitrification are critical for water utilities operating chloraminated drinking water distribution systems. The detection and monitoring of nitrification episodes is usually carried out by measuring certain water quality parameters, commonly referred to as indicators of nitrification. The approach taken in this study was to collect water samples from different sites throughout a drinking water distribution system, Tailem Bend-Keith (TBK) in South Australia, and analyse the samples by high performance size exclusion chromatography (HPSEC). We investigated potential associations between HPSEC-derived water quality characteristics and chloramine decay and/or nitrification occurrence. MATLAB 8.4 was used for processing the HPSEC and chloramine decay data. An increase in the absorbance signal of the HPSEC profiles at λ=230 nm between apparent molecular weights (AMW) of 200 and 1000 Da was observed at sampling sites that experienced rapid chloramine decay and nitrification, while the absorbance signal at λ=254 nm decreased. An increase in absorbance at λ=230 nm and AMW < 500 Da was detected for Raukkan CT (R.C.T), a location that experienced nitrification and had significantly lower chloramine residual (<0.1 mg/L). This increase in absorbance was not detected at other sites that did not experience nitrification. Moreover, the UV absorbance at 254 nm of the HPSEC spectra was lower at R.C.T than at other sites. In this study, a chloramine residual index (C.R.I) is introduced as a new indicator of chloramine decay and nitrification occurrence, defined as the ratio of the areas under the HPSEC spectra at the two wavelengths, 230 and 254 nm.
The C.R.I index is able to identify distribution system sites that experienced nitrification and rapid chloramine loss. This index could help water treatment and distribution system managers determine whether nitrification is occurring at a specific location in a water distribution system.
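The index can be sketched numerically. The exact normalisation the authors use is not stated, so the following Python sketch assumes a plain trapezoidal area ratio over the AMW range; the absorbance profiles are made-up numbers, purely for illustration.

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal area under y(x).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def cri(amw, a230, a254):
    """Chloramine residual index: ratio of the area under the HPSEC
    absorbance profile at 230 nm to that at 254 nm, integrated over
    apparent molecular weight (AMW). One plausible reading of the
    definition in the abstract, not the authors' exact formula."""
    return _trapz(np.asarray(a230, float), amw) / \
           _trapz(np.asarray(a254, float), amw)

# Illustrative profiles over AMW 200-1000 Da (made-up numbers):
amw = np.linspace(200, 1000, 9)
a230 = np.array([1, 2, 4, 6, 5, 4, 3, 2, 1], float)
a254 = np.array([1, 1, 2, 3, 3, 2, 2, 1, 1], float)
print(round(cri(amw, a230, a254), 2))  # → 1.8
```

A higher ratio would flag a site where the 230 nm signal has grown relative to the 254 nm signal, the pattern the study associates with nitrification.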

Keywords: nitrification, HPSEC, chloramine decay, chloramine residual index

Procedia PDF Downloads 282
916 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it has become increasingly important in the educational system. Several e-learning programs are available to students; here, we create an e-learning framework for children. We identified a few issues that teachers face with online classes. When there are numerous students in an online classroom, how does a teacher recognize each student's focus on academics and below-the-surface behaviors? Some kids do not pay attention in class, and others nap; the teacher is unable to keep track of every student. A key challenge in e-learning is online examinations, because students can cheat easily during them, so exam proctoring is needed. We propose an automated online exam cheating detection method using a web camera. A further purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The game will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions, so evaluators have a difficult time grading theory answers. The current system requires more manpower and takes a long time to evaluate responses; it is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide an automatic evaluation of subjective responses based on keywords provided to the computer alongside the student input, resulting in a fair distribution of marks while saving time and manpower.
We used deep learning, machine learning, image processing, and natural language technologies to develop these research components.
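The keyword-based evaluation idea can be sketched as follows. This is a deliberately crude illustration, not the authors' ML system: a real implementation would add stemming, synonyms, and semantic matching, but even this minimal version guarantees that identical answers receive identical marks.

```python
import re

def score_answer(answer, keywords, marks_per_keyword=1.0):
    """Score a free-text answer against teacher-supplied keywords.
    Matching is case-insensitive and whole-word, so two identical
    answers always get identical marks. Returns (marks, matched)."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    matched = [k for k in keywords if k.lower() in words]
    return len(matched) * marks_per_keyword, matched

marks, matched = score_answer(
    "A triangle's angles always sum to 180 degrees.",
    ["triangle", "angles", "sum", "degrees"])
print(marks)    # → 4.0
print(matched)  # → ['triangle', 'angles', 'sum', 'degrees']
```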

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 138
915 Characteristics of Himalayan Glaciers with Lakes, Kosi Sub-Basin, Ganga Basin: Based on Remote Sensing and GIS Techniques

Authors: Ram Moorat Singh, Arun Kumar Sharma, Ravi Chaurey

Abstract:

The characteristics of Himalayan glaciers with and without glacier lakes were assessed for 1,937 glaciers of the Kosi sub-basin, Ganga basin, using remote sensing and GIS techniques. Analysis of IRS-P6 AWiFS data of the 2004-07 period, the SRTM DEM, and MODIS Land Surface Temperature (LST) data (15-year mean) using image processing and GIS tools provided significant information on various glacier parameters. Glacier area, length, width, exposed-ice area, debris-covered area, slope, orientation, elevation, and temperature data were analysed. The 119 supraglacial lakes and 62 moraine-dammed/peri-glacial lakes (area > 0.02 km2) in the study area were examined to discern the glacier conditions favourable for lake formation. The analysis shows that glacial lakes preferentially form in association with glaciers of large dimensions (area, length, and width), glaciers with a higher percentage of exposed ice and a lower percentage of debris cover, and, in general, glaciers with a mean elevation greater than 5,300 m amsl. Analysis by lake type shows that moraine-dammed lakes form in association with glaciers located at relatively higher altitudes than glaciers with supraglacial lakes. Analysis of the frequency of lake occurrence versus glacier orientation shows that more glacier lakes form in association with glaciers oriented towards the south, southeast, southwest, east, and west. Supraglacial lakes form in association with glaciers having a higher mean temperature than those with moraine-dammed lakes, as verified using 15 years of LST data (2000-2014).

Keywords: remote sensing, supra glacial lake, Himalaya, Kosi sub-basin, glaciers, moraine-dammed lake

Procedia PDF Downloads 357
914 Use of Alternative and Complementary Therapies in Patients with Chronic Pain in a Medical Institution in Medellin, Colombia, 2014

Authors: Lina María Martínez Sánchez, Juliana Molina Valencia, Esteban Vallejo Agudelo, Daniel Gallego González, María Isabel Pérez Palacio, Juan Ricardo Gaviria García, María De Los Ángeles Rodríguez Gázquez, Gloria Inés Martínez Domínguez

Abstract:

Alternative and complementary therapies constitute a vast and complex combination of interventions, philosophies, approaches, and therapies that take a holistic healthcare point of view, becoming an alternative for the treatment of patients with chronic pain. Objective: to determine the characteristics of the use of alternative and complementary therapies in patients with chronic pain who consulted at a medical institution. Methodology: a cross-sectional, descriptive study with a population of patients who attended the outpatient consultation and met the eligibility criteria; sampling was not conducted. A form was used to collect demographic and clinical variables, and the validated Holistic Complementary and Alternative Medicine Questionnaire (HCAMQ) was applied. The analysis and processing of information was carried out using SPSS v19. Results: 220 people with chronic pain were included. The average age was 54.7±16.2 years, 78.2% were women, and 75.5% belonged to socioeconomic strata 1 to 3. Musculoskeletal pain (77.7%), migraine (15%), and neuralgia (9.1%) were the most frequent types of chronic pain. 33.6% of participants had used some kind of alternative or complementary therapy, most frequently homeopathy (14.5%), phytotherapy (12.7%), and acupuncture (11.4%). The total average HCAMQ score for the study group was 30.2±7.0 points, indicating a moderate attitude toward the use of complementary and alternative medicine. The highest scores according to the type of pain were: neuralgia (32.4±5.8), musculoskeletal pain (30.5±6.7), fibromyalgia (29.6±7.3), and migraine (28.5±8.8). The reliability of the HCAMQ was acceptable (Cronbach's α: 0.6). Conclusion: the types of chronic pain and the clinical or therapeutic management of the patients correspond to the data available in the current literature.
Despite only a moderate attitude toward these alternative and complementary therapies, one in every three patients uses them.
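The reported reliability statistic follows the standard formula α = k/(k-1) · (1 - Σσᵢ²/σₜ²). A small Python sketch with made-up questionnaire responses (not the study's data) illustrates the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative (made-up) responses from 5 people on 4 questionnaire items:
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 3, 4]]
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

These highly correlated toy responses give a much higher alpha than the study's 0.6; the point is only to show the formula, not to reproduce the result.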

Keywords: chronic pain, complementary therapies, homeopathy, acupuncture analgesia

Procedia PDF Downloads 499
913 A Low Cost Gain-Coupled Distributed Feedback Laser Based on Periodic Surface p-Contacts

Authors: Yongyi Chen, Li Qin, Peng Jia, Yongqiang Ning, Yun Liu, Lijun Wang

Abstract:

Distributed feedback (DFB) lasers are indispensable in the optical phased arrays (OPA) used for light detection and ranging (LIDAR), in laser communication systems, and in integrated optics, thanks to their stable single longitudinal mode and narrow linewidth. Traditional index-coupled (IC) DFB lasers with uniform gratings have an inherent problem of lasing in two degenerate modes. Phase shifts are usually required to eliminate the mode degeneracy, making the grating structure complex and expensive. High-quality antireflection (AR) coatings on both facets are also essential owing to the random facet phases introduced by the chip cleavage process, which means half of the lasing energy is wasted. Gain-coupled DFB (GC-DFB) lasers based on periodic gain (or loss) are reported to lase in a single longitudinal mode and to permit asymmetric coatings that increase lasing power and efficiency, thanks to their immunity to facet phase. However, expensive and time-consuming technologies such as epitaxial regrowth and nanoscale grating processing are still required, just as for IC-DFB lasers, keeping them from practical applications and commercial markets. In this research, we propose a low-cost, single-mode, regrowth-free GC-DFB laser based on periodic surface p-contacts. The gain-coupling effect is achieved simply by the periodic current distribution in the quantum well caused by the periodic surface p-contacts, introducing an index-coupling effect small enough to be neglected. The device is prepared by i-line lithography, without nanoscale grating fabrication or secondary epitaxy. Thanks to its easy fabrication, this approach provides a method for making practical, low-cost GC-DFB lasers for widespread applications.

Keywords: DFB laser, gain-coupled, low cost, periodic p-contacts

Procedia PDF Downloads 117
912 Designing Form, Meanings, and Relationships for Future Industrial Products: Case Study Observation of PAD

Authors: Elisabetta Cianfanelli, Margherita Tufarelli, Paolo Pupparo

Abstract:

The dialectical mediation between desires and objects, or between mass production and consumption, continues to evolve over time. This relationship is influenced both by variable geometries of contexts distant from the mere design of product form and by aspects rooted in the very definition of industrial design. In particular, the overcoming of macro-areas of innovation in the technological, social, cultural, formal, and morphological spheres, supported by recent theories in critical and speculative design, seems to be moving further and further away from the design of the formal dimension of advanced products. The articulated fabric of theories and practices that feeds the definition of 'hyperobjects', no longer objects, describes a tension common to all areas of design and production of industrial products. The latter are increasingly detached from the design of their form and meaning in mass production, thus losing the quality of products capable of social transformation. The design process for the industrial product has been in a transformative moment for years. We face a dichotomy between, on the one hand, a reactionary aversion to the new techniques of industrial production and, on the other, a sterile adoption of mass production techniques that we can now consider traditional. This ambiguity becomes even more evident when we talk about industrial products and realize that we are moving further and further away from the concept of 'form' as the synthesis of a design thought aimed at the aesthetic-emotional component as well as the functional one. The design of forms and their contents, as statutes of social acts, allows us to investigate the tension in mass production that crosses seasons, trends, technicalities, and sterile determinisms.
Design culture has always determined the formal qualities of objects as a sum of aesthetic characteristics and functional and structural relationships that define a product as a coherent unit. This contribution proposes a reflection and a series of practical research experiences on the form of advanced products, understood as a kaleidoscope of relationships: the search for an identity, the desire for democratization, and, between these two, the exploration of the aesthetic factor. The study of form also corresponds to the study of production processes, technological innovations, the definition of standards, distribution, advertising, and the vicissitudes of taste and lifestyle. Specifically, we investigate how the genesis of new forms for new meanings introduces change in the corresponding innovative production techniques. It therefore becomes fundamental to investigate, through the reflections and case studies presented in this contribution, the new techniques of producing and elaborating product forms as a new immanent and determining element of the design process.

Keywords: industrial design, product advanced design, mass productions, new meanings

Procedia PDF Downloads 111
911 Groundwater Quality Assessment in the Vicinity of Tannery Industries in Warangal, India

Authors: Mohammed Fathima Shahanaaz, Shaik Fayazuddin, M. Uday Kiran

Abstract:

Groundwater quality is deteriorating day by day in different parts of the world for various reasons: toxic chemicals are discharged without proper treatment into inland water bodies and onto land, which in turn adds pollutants to the groundwater. In such situations, rural communities without municipal drinking water have to rely on groundwater for various uses even though it is polluted. The tannery industry is a major industry providing economy and employment to India. Since most developed countries have stopped using toxic chemicals, tanning industries, which use chromium as their major element, are being shifted towards developing countries. Most tanning industries in India are found in clusters concentrated mainly in the states of Tamil Nadu, West Bengal, Uttar Pradesh, and limited places in Punjab, and limited work exists on the tanneries of Warangal. There are 18 groups of tanneries in the Desaipet, Enamamula region of Warangal, of which 4 are involved in dry processing and bear little responsibility for groundwater pollution. These tannery units discharge their effluents, after treatment, into Sai Cheruvu. Although the effluents are treated before discharge, Sai Cheruvu has turned pink, with elevated levels of BOD, COD, chromium, chlorides, total hardness, TDS, and sulphates. An attempt was made to analyse groundwater samples around this polluted Sai Cheruvu region, since the literature shows that a single tannery can pollute groundwater to a radius of 7-8 km from the point of disposal. Samples were collected from 6 different locations around Sai Cheruvu and analysed for various constituents in groundwater: pH, EC, TDS, TH, Ca2+, Mg2+, HCO3-, Na+, K+, Cl-, SO42-, NO3-, F- and Cr6+. The analysis of these constituents gave values greater than permissible limits.
Even chromium was present in the groundwater samples at levels exceeding permissible limits. People in Paidepally and Sardharpeta villages have already stopped using the groundwater and buy bottled water for drinking; although they no longer drink the groundwater, complaints were also made about using this water for washing. A simple and efficient treatment process should therefore be adopted for the groundwater. In this study, rice husk silica (RHS) was used to treat pollutants in groundwater with varying RHS dosages and contact times. Rice husk was treated, dried, and placed in a muffle furnace for 6 hours at 650 °C. Reductions in total hardness, chloride, and chromium levels were observed after the application of RHS. Pollutants reached permissible limits at dosages of 27.5 mg/L and 50 mg/L for a contact time of 130 min at constant pH and temperature.
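Treatment performance in such studies is conventionally reported as percent removal, (C₀ - C)/C₀ × 100. A one-line sketch with made-up concentrations (the abstract does not report the raw before/after values) illustrates the calculation:

```python
def removal_efficiency(c0, c):
    """Percent removal of a pollutant after treatment:
    (initial - final) / initial * 100."""
    return (c0 - c) / c0 * 100.0

# Illustrative (made-up) chromium concentrations in mg/L before and
# after RHS treatment:
print(round(removal_efficiency(0.18, 0.04), 1))  # → 77.8
```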

Keywords: chromium, groundwater, rice husk silica, tanning industries

Procedia PDF Downloads 186
910 The Impact of Mining Activities on the Surface Water Quality: A Case Study of the Kaap River in Barberton, Mpumalanga

Authors: M. F. Mamabolo

Abstract:

Mining activities are identified as the most significant source of heavy metal contamination in river basins, owing to inadequate disposal of mining waste resulting in acid mine drainage. Waste materials generated from gold mining and processing have severe and widespread impacts on water resources. Therefore, a total of 30 water samples were collected from Fig Tree Creek, the Kaap River, the Sheba mine stream, and the Suid Kaap River to investigate the impact of gold mines on the Kaap River system. Physicochemical parameters (pH, EC, and TDS) were measured using a BANTE 900P portable water quality meter. The concentrations of Fe, Cu, Co, and SO₄²⁻ in the water samples were analysed using inductively coupled plasma-mass spectrometry (ICP-MS) with a detection limit of 0.01 mg/L. The results were compared to the regulatory guidelines of the World Health Organization (WHO) and the South African National Standards (SANS). It was found that Fe, Cu, and Co were below the guideline values, while SO₄²⁻ detected in the Sheba mine stream exceeded the 250 mg/L limit in both seasons, attributed to mine wastewater. SO₄²⁻ was higher in the wet season due to high evaporation rates and greater interaction between rocks and water. The pH of all the streams was within the limit (≥5 to ≤9.7); however, the EC of the Sheba mine stream, the Suid Kaap River, and the point where the tributary joins Fig Tree Creek exceeded 1700 µS/m due to dissolved material. The TDS of the Sheba mine stream exceeded 1000 mg/L, attributed to the high SO₄²⁻ concentration, while the tributary joining Fig Tree Creek exceeded the value due to pollution from household waste, agricultural runoff, etc. In conclusion, the water from all sampled streams was safe for consumption given the low concentrations of most physicochemical parameters; however, the elevated SO₄²⁻ concentrations should be monitored and managed to avoid water quality deterioration in the Kaap River system.
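The guideline comparison can be sketched as a simple threshold check. The limits below mirror those quoted in the abstract (250 mg/L sulphate, 1000 mg/L TDS, 1700 µS/m EC, pH 5-9.7) and should be verified against the current WHO and SANS documents before any real use; the sample values are made up.

```python
# Guideline limits as quoted in the abstract (verify against the
# current WHO / SANS standards before relying on them).
LIMITS = {"sulphate_mg_L": 250.0,
          "tds_mg_L": 1000.0,
          "ec_uS_m": 1700.0,
          "ph": (5.0, 9.7)}   # (lower, upper) range

def exceedances(sample):
    """Return the parameters in `sample` that fall outside LIMITS."""
    flags = []
    for param, limit in LIMITS.items():
        value = sample.get(param)
        if value is None:
            continue                      # parameter not measured
        if isinstance(limit, tuple):      # range check (e.g. pH)
            lo, hi = limit
            if not (lo <= value <= hi):
                flags.append(param)
        elif value > limit:               # upper-bound check
            flags.append(param)
    return flags

# Illustrative reading for a mine-affected stream (values made up):
print(exceedances({"sulphate_mg_L": 310.0, "tds_mg_L": 1150.0,
                   "ec_uS_m": 1820.0, "ph": 7.1}))
# → ['sulphate_mg_L', 'tds_mg_L', 'ec_uS_m']
```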

Keywords: Kaap river system, mines, heavy metals, sulphate

Procedia PDF Downloads 61
909 An Empirical Study on the Integration of Listening and Speaking Activities with Writing Instruction for Middle School English Language Learners

Authors: Xueyan Hu, Liwen Chen, Weilin He, Sujie Peng

Abstract:

Writing is an important but challenging skill for English language learners. Because schools allocate little class time to writing, students have relatively few opportunities to practice writing in the classroom. While the practice of integrating listening and speaking activities with writing instruction has been used with adult English language learners, its application to young English learners has seldom been examined, given the difficulty such activities pose for them. This study integrated listening and speaking activities with writing instruction for middle school English language learners in order to improve their writing achievement and their writing ability in terms of word use, coherence, and complexity. Guided by Gagne’s information processing learning theory and memetics, the study conducted an 8-week writing intervention with an experimental class (n=44) and a control class (n=48). Students in the experimental class participated in a series of listening and retelling activities about a writing sample the teacher used for writing instruction during each writing class. Students in the control class were taught traditionally, with the teacher’s direct instruction using the same writing sample. An ANCOVA analysis of students’ scores on writing, word use, Chinese-English translation, and text structure showed that the experimental writing instruction significantly improved students’ writing performance. Compared with the control class, students in the experimental class performed significantly better in word use and complexity in their essays. This study offers useful insights for teaching English writing to middle school English language learners. Teachers can use information technology to integrate the teaching of listening, speaking, and writing, attending to both students’ language input and output, and should select suitable, high-quality composition templates to ensure high-quality language input.
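The ANCOVA comparison between the two classes can be sketched as follows. This is a minimal illustration on synthetic scores (not the study's data): post-test writing scores are regressed on pre-test scores plus a group indicator via ordinary least squares, so the group coefficient estimates the treatment effect adjusted for the pre-test covariate.

```python
import numpy as np

def ancova_adjusted_effect(pre, post, group):
    """One-covariate ANCOVA fit via ordinary least squares.

    pre, post : arrays of pre- and post-test scores
    group     : array of 0 (control) / 1 (experimental) labels
    Returns the estimated group effect (pre-test-adjusted mean difference).
    """
    # Design matrix: intercept, pre-test covariate, group indicator.
    X = np.column_stack([np.ones_like(pre), pre, group])
    coefs, *_ = np.linalg.lstsq(X, post, rcond=None)
    return coefs[2]  # coefficient on the group indicator

# Synthetic illustration: the experimental group gains ~5 points
# beyond what its pre-test scores alone would predict.
rng = np.random.default_rng(0)
n = 46
pre = rng.normal(70, 8, 2 * n)
group = np.repeat([0.0, 1.0], n)
post = 10 + 0.8 * pre + 5.0 * group + rng.normal(0, 3, 2 * n)
effect = ancova_adjusted_effect(pre, post, group)
print(round(effect, 1))
```

With these synthetic parameters the recovered group effect lands close to the 5-point gain that was built in, which is the sense in which ANCOVA separates instruction effects from prior ability.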

Keywords: writing instruction, retelling, English language learners, listening and speaking

Procedia PDF Downloads 65
908 Gender Policies and Political Culture: An Examination of the Canadian Context

Authors: Chantal Maille

Abstract:

This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs for men and women from different origins. It looks at Canada’s political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds because it takes account of gender and other identity factors. The ‘plus’ in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other related identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person’s identity; rather, it interacts with factors such as ethnic origin, age, disabilities, where the person lives, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies in concert with global intensification of neoliberalism, and vice versa, can be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic. 
Intersectionality first emerged as a dynamic analysis of differences between women, oriented toward change and social justice, whereas GBA is a technique developed by state feminists in the context of analyzing governmental policies, aiming to promote equality between men and women. It can nevertheless be assumed that there might be interest in a policy and program analysis grid that is decentred from gender and flexible enough to take account of a range of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. Research findings identify links between Canadian gender policies and the country’s political culture. In Canada, diversity has been taken into account as a basis for gendered analysis of public policies since 1995. The GBA+ adopted by the Government of Canada conveys an openness to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, recognizes multiculturalism as a fundamental characteristic of Canadian identity and heritage and an invaluable resource for the future of the country. In conclusion, Canada’s distinct political culture can be associated with the specific nature of its gender policies.

Keywords: Canada, gender-based analysis, gender policies, political culture

Procedia PDF Downloads 209
907 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system needs accurate real-time information on bus travel times, since it minimizes passengers’ waiting times at stations along a route, improves service reliability, and helps optimize travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and attract more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. Different models for predicting bus travel times have been developed recently, but most focus on smaller road networks because they perform relatively poorly on vast networks in high-density urban areas. This paper develops a deep learning architecture that uses a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data was gathered from multiple bus routes in Saint Louis, Missouri, over one week. A Gated Recurrent Unit (GRU) neural network was used to predict mean vehicle travel times at different hours of the day for multiple stations along multiple routes. The historical time steps and prediction horizon were set to 5 and 1, respectively: five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and historical average travel times were extracted from the dataset as model inputs; station distances and sequence numbers served as adjacency matrices for the spatial inputs, and the time of day (hour) was the temporal input. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included to make the model more robust. Model performance was evaluated with the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the model predicted travel times more accurately during peak traffic hours, with a MAPE of around 14%, and less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model demonstrated its applicability to real-time travel time prediction for public transportation and produced high-quality predictions.
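The core mechanics of the abstract's setup, a GRU cell rolled over five historical hourly travel times to produce a one-step-ahead forecast, evaluated with MAPE, can be sketched in NumPy. This is an illustrative forward pass with random placeholder weights, not the authors' trained multi-station model; the scaling factor and hidden size are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU hidden-state update (PyTorch gate convention)."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])  # candidate state
    return (1 - z) * n + z * h

def mape(actual, predicted):
    """Mean absolute percentage error, the paper's evaluation metric."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Five historical hourly average travel times (minutes) -> one-step forecast.
rng = np.random.default_rng(42)
hidden, n_in = 8, 1
W = {k: rng.normal(0, 0.3, (hidden, n_in)) for k in "zrn"}
U = {k: rng.normal(0, 0.3, (hidden, hidden)) for k in "zrn"}
b = {k: np.zeros(hidden) for k in "zrn"}
w_out = rng.normal(0, 0.3, hidden)

history = [12.5, 13.1, 14.0, 15.2, 14.7]  # 5 historical time steps
h = np.zeros(hidden)
for t in history:
    h = gru_step(np.array([t / 20.0]), h, W, U, b)  # crude input scaling
prediction = 20.0 * (w_out @ h)  # linear readout, prediction horizon = 1
```

In a trained model the gate weights would be learned by backpropagation over many route/station sequences, with the spatial adjacency and time-of-day features concatenated into the input vector.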

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 57
906 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method that contributes to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used to measure large structures; it provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. Point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and the maintenance of civil infrastructure. However, TLS produces a huge amount of point cloud data, and the registration, extraction, and visualization of this data require processing a massive amount of scan data. The octree can be applied to shape management of large structures because it reduces the size of the scan data while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data was condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, forming a basis for shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
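The recursive subdivision the abstract describes can be sketched as follows. This is a minimal illustration on a synthetic point cloud, not the authors' implementation: a cubic region is split into eight sub-voxels down to a fixed depth, and each occupied leaf voxel stands in for all the scan points it contains, which is how the point count is condensed while spatial attributes (location, extent) are preserved.

```python
import numpy as np

def build_octree(points, origin, size, max_depth, leaves=None):
    """Recursively partition a cubic region into eight sub-voxels.

    Non-empty voxels are split until max_depth; each leaf records
    (origin, size, point_count). Empty octants are pruned.
    """
    if leaves is None:
        leaves = []
    if len(points) == 0:
        return leaves
    if max_depth == 0:
        leaves.append((tuple(origin), size, len(points)))
        return leaves
    half = size / 2.0
    for dx in (0, 1):          # 2 x 2 x 2 = eight child voxels
        for dy in (0, 1):
            for dz in (0, 1):
                child = origin + half * np.array([dx, dy, dz])
                mask = np.all((points >= child) & (points < child + half), axis=1)
                build_octree(points[mask], child, half, max_depth - 1, leaves)
    return leaves

# Synthetic "scan": 1000 points clustered in one corner of a 1 m cube.
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 0.25, (1000, 3))
leaves = build_octree(cloud, np.array([0.0, 0.0, 0.0]), 1.0, max_depth=3)
compression = 1.0 - len(leaves) / len(cloud)
print(len(leaves), f"{compression:.0%}")
```

Because the points occupy only a few leaf voxels, the cloud collapses to a handful of occupied cells; in the study's real bridge scan the same mechanism yielded the reported 92% condensation, with the leaf size playing the role of the 2 mm resolution.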

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 284
905 3D Text Toys: Creative Approach to Experiential and Immersive Learning for World Literacy

Authors: Azyz Sharafy

Abstract:

3D Text Toys is an innovative and creative approach that utilizes 3D text objects to enhance creativity, literacy, and basic learning in an enjoyable and gamified manner. By using 3D Text Toys, children can develop their creativity, visually learn words and texts, and apply their artistic talents. This process incorporates haptic engagement with 2D and 3D texts, word building, and mechanical construction of everyday objects, thereby facilitating better word and text retention. The concept involves constructing visual objects made entirely out of 3D text/words, where each component of the object represents a word or text element. For instance, a bird can be recreated using words or text shaped like its wings, beak, legs, head, and body, resulting in a 3D representation of the bird purely composed of text. This can serve as an art piece or a learning tool in the form of a 3D text toy. These 3D text objects or toys can be crafted using natural materials such as leaves, twigs, strings, or ropes, or they can be made from various physical materials using traditional crafting tools. Digital versions of these objects can be created using 2D or 3D software on devices like phones, laptops, iPads, or computers. To transform digital designs into physical objects, computerized machines such as CNC routers, laser cutters, and 3D printers can be utilized. Once the parts are printed or cut out, students can assemble the 3D texts by gluing them together, resulting in natural or everyday 3D text objects. These objects can be painted to create artistic pieces or text toys, and the addition of wheels can transform them into moving toys. One of the significant advantages of this visual and creative object-based learning process is that students not only learn words but also derive enjoyment from the process of creating, painting, and playing with these objects. The ownership and creation process further enhances comprehension and word retention. 
Moreover, for individuals with learning disabilities such as dyslexia, ADD (Attention Deficit Disorder), or other learning difficulties, the visual and haptic approach of 3D Text Toys can serve as an additional creative and personalized learning aid. The application of 3D Text Toys extends to the English language as well as any other written language, and its adaptation and creative application may vary depending on the country, setting, and native written language. Furthermore, the implementation of this visual and haptic learning tool can be tailored to teach foreign languages based on age level and comprehension requirements. In summary, this creative, haptic, and visual approach has the potential to serve as a global literacy tool.

Keywords: 3D text toys, creative, artistic, visual learning for world literacy

Procedia PDF Downloads 48