Search results for: work experience
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17454

714 Teachers' and Learners' Experiences of Learners' Writing in English First Additional Language

Authors: Jane-Francis A. Abongdia, Thandiswa Mpiti

Abstract:

There is international concern to develop children’s literacy skills. In many parts of the world, fluency in a second language is essential for gaining meaningful access to education, the labour market and broader social functioning. Despite efforts in this direction, the problem persists: the level of English language proficiency is far from satisfactory, and for many learners these goals remain unattainable. The issue is more complex in South Africa, where learners are immersed in a second language (L2) curriculum. South Africa is a prime example of a country facing the dilemma of how to effectively equip a majority of its population with English as a second language or first additional language (FAL). Given the multilingual nature of South Africa, with eleven official languages, and the position and power of English, the study investigates teachers’ and learners’ experiences of isiXhosa- and Afrikaans-background learners’ writing in English First Additional Language (EFAL). Possible causes of writing difficulties and teachers’ practices for writing are also explored. The theoretical and conceptual framework for the study is provided by constructivist and sociocultural theories. In exploring these issues, a qualitative approach was adopted, drawing on semi-structured interviews, classroom observations, and document analysis; the data were analysed through critical discourse analysis (CDA). The study identified a weak correlation between teachers’ beliefs and their actual teaching practices. Although the teachers believe that writing is as important as listening, speaking, reading, grammar and vocabulary, and that it needs regular practice, the data reveal that they fail to put their beliefs into practice. Moreover, the data revealed that learners’ writing was hindered by home-language interference: when they did not know a word, they would write either the isiXhosa or the Afrikaans equivalent.
Code-switching seems to have instilled a sense of “dependence on translations”: some learners would not even try to answer English questions but would wait for the teacher to translate them into isiXhosa or Afrikaans before attempting an answer. The findings of the study show a marked improvement in the writing performance of learners who used the process approach to writing. These findings demonstrate the need to assist teachers in shifting away from a focus on learners’ performance alone (testing and grading) towards a stronger emphasis on the process of writing. The study concludes that the process approach to writing could enable teachers to attend to the various stages of the writing process, giving learners more freedom to experiment with their language proficiency. This would require teachers to develop a deeper understanding of the process/genre approaches to teaching writing advocated by CAPS. All in all, the study shows that both learners and teachers face numerous challenges relating to writing, which means that more work remains to be done in this area. The present study argues that teachers of EFAL learners should approach writing as a critical and core aspect of learners’ education, and that learners should be exposed to intensive writing activities throughout their school years.

Keywords: constructivism, English second language, language of learning and teaching, writing

Procedia PDF Downloads 217
713 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies for a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included within the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted specifically to fire problems. The applicability of FDS has been validated in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire, due to its crushable foam plasticity model, which can accurately capture the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
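
As a minimal illustration of the one-way coupling step described above, the sketch below maps CFD-side temperatures onto FEA mesh nodes by nearest-neighbour lookup. The function name and toy data are invented for illustration and are not part of FDS-2-ABAQUS:

```python
# Hypothetical sketch of one-way CFD-to-FEA thermal coupling: each FEA node
# is assigned the temperature of the nearest CFD sample point. Names and
# values are illustrative assumptions, not taken from FDS-2-ABAQUS.

def map_thermal_data(cfd_points, cfd_temps, fea_nodes):
    """Assign each FEA node the temperature of the nearest CFD sample point."""
    mapped = []
    for node in fea_nodes:
        # squared Euclidean distance from this node to every CFD point
        dists = [sum((a - b) ** 2 for a, b in zip(node, p)) for p in cfd_points]
        nearest = dists.index(min(dists))
        mapped.append(cfd_temps[nearest])
    return mapped

# Toy data: two CFD sample points with gas temperatures in degrees C
cfd_points = [(0.0, 0.0), (1.0, 0.0)]
cfd_temps = [850.0, 300.0]
fea_nodes = [(0.1, 0.0), (0.9, 0.1)]

print(map_thermal_data(cfd_points, cfd_temps, fea_nodes))  # [850.0, 300.0]
```

A two-way scheme would repeat this mapping every time step and additionally push the deformed FEA geometry back to the CFD side.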

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 88
712 The Joy of Painless Maternity: The Reproductive Policy of the Bolsheviks in the 1930s

Authors: Almira Sharafeeva

Abstract:

In the Soviet Union of the 1930s, motherhood was seen as a natural need of women. The masculine Bolshevik state did not see the emancipated woman as free from her maternal burden. In order to support the idea of "joyful motherhood," a medical discourse on the anesthesia of childbirth emerged. In March 1935, at the IX Congress of Obstetricians and Gynecologists, the People's Commissar of Public Health of the RSFSR, G. N. Kaminsky, raised the issue of anesthesia of childbirth. From that year, medical, literary and artistic editions began, with enviable frequency, to publish articles and studies devoted to the issue, proclaiming the goal of anesthetizing all childbirths in the USSR. These publications were often filled with anti-German and anti-capitalist propaganda, through which the advantages of socialism over capitalism and Nazism were demonstrated. At congresses, in journals, and at institute meetings, doctors' discussions of obstetric anesthesia were accompanied by discussions of shortening the duration of childbirth, the prevention of disease, the admission of nurses to the procedure, and the proper behavior of women during childbirth. With the help of articles from medical periodicals of the 1930s and brochures, as well as documents from the funds of the Institute of Obstetrics and Gynecology of the Academy of Medical Sciences of the USSR (TsGANTD SPb) and the Department of Obstetrics and Gynecology of the NKZ USSR (GARF), this paper shows how the advantages of the Soviet system and the socialist way of life were constructed through the problem of childbirth pain relief, how childbirth pain relief in the USSR was related to the foreign policy situation, and how projects of labor pain relief were related to the anti-abortion policy of the state.
This study also attempts to answer the question of why anesthesia of childbirth in the USSR did not become widespread and how, through this medical procedure, the Soviet authorities tried to take control of a female function (childbirth) that was not available to men. Considering this subject from the perspective of gender studies and the social history of medicine, it is productive to use the term "biopolitics." Michel Foucault and Antonio Negri wrote that biopolitics takes under its wing the control and management of hygiene, nutrition, fertility, sexuality and contraception. The central issue of biopolitics is population reproduction. It includes strategies for intervening in collective existence in the name of life and health, and ways of subjectivation by which individuals are forced to work on themselves. The Soviet state, through intervention in the reproductive lives of its citizens, sought to realize its goals of population growth, which was necessary to demonstrate the benefits of living in the Soviet Union and to train a pool of builders of socialism. The woman's body was seen as the object over which the socialist experiment of reproductive policy was conducted.

Keywords: labor anesthesia, biopolitics of Stalinism, childbirth pain relief, reproductive policy

Procedia PDF Downloads 69
711 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador for 1,038,328 students, their families, teachers, and school directors, throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES, given by R = –0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), indicating that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile.
The model also identified, through the highest PageRank and the greatest degree of connectivity, the factors that explain deprivation for the first decile: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide accurate and clear knowledge about the variables affecting the poorest students and the inequalities this produces, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
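
As a brief illustration of the two-parameter logistic calibration mentioned above, the sketch below evaluates the standard 2PL item response function; the discrimination and difficulty values are invented for illustration, not taken from the assessment:

```python
import math

# Two-parameter logistic (2PL) item response function: probability that a
# student with ability theta answers an item correctly, given item
# discrimination a and difficulty b. Parameter values are illustrative.

def p_correct(theta, a, b):
    """P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A student whose ability equals the item difficulty answers correctly
# half the time, regardless of discrimination
print(p_correct(theta=0.0, a=1.2, b=0.0))  # 0.5
```

Raw scores obtained from such item-level probabilities are what the abstract describes re-scaling and synthesizing into the learning index.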

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 122
710 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites

Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu

Abstract:

Graphene and graphene oxide have been intensively studied due to their excellent properties, which are either intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications stems not only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties, but also from their role as building blocks and important components in many composite devices, where their addition brings new functionalities to the final composite or strengthens those already present in the parent product. In this work, an attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, the mechanical properties can be comparable to those of the starting material or even worse: if the amount added is too high (above 3% by mass relative to the parent material), the graphene oxide agglomerates become tearing points in the final material. Two standard tests, for hardness and for tensile strength, were performed on the obtained materials, both before and after the aging process. For the aging process, accelerated aging in extreme heat was used in order to simulate the effect of natural aging over a long period of time.
FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very weak in comparison, due to the very small amounts introduced into the final composite, along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.

Keywords: graphene, graphene oxide, mechanical properties, doping effect

Procedia PDF Downloads 311
709 Cultural Adaptation of an Appropriate Intervention Tool for Mental Health among the Mohawk in Quebec

Authors: Liliana Gomez Cardona, Mary McComber, Kristyn Brown, Arlene Laliberté, Outi Linnaranta

Abstract:

The history of colonialism and more contemporary political issues have exposed the Kanien'kehá:ka:non (Kanien'kehá:ka of Kahnawake) to challenging and even traumatic experiences. Colonization, religious missions and residential schools, as well as economic and political marginalization, are among the factors that have challenged the wellbeing and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which the patient is asked to report how often he or she has certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous people because they do not account for the culture and traditional healing methods that persist in many communities. The objectives of this study were to assess whether the symptom questionnaires commonly used in psychiatry are appropriate and culturally safe for the Mohawk in Quebec, and to identify the most appropriate tool to assess and promote wellbeing, following the process necessary to improve its cultural sensitivity and safety for the Mohawk population. The study was a qualitative, collaborative, participatory action research project respecting First Nations protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and with members of Indigenous communities. Thematic analysis of the collected data proceeded through an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context, and we present the factors that limit the relevance of these tools among Mohawks.
Although the Growth and Empowerment Measure (GEM) was originally developed among Indigenous Australians, the Mohawk in Quebec found that this tool captures critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, which was supported by community members, to create a culturally safe instrument that helps in growth and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Mohawk population and their clients. We believe that integrating this tool in interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention, along with social and community services.

Keywords: cultural adaptation, cultural safety, empowerment, Mohawks, mental health, Quebec

Procedia PDF Downloads 153
708 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q identifies the alphanumeric the gesture represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
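
A simplified sketch of the corrector pipeline described above is given below; the synthetic data, the dimensionality, and the single-hyperplane variant (rather than one hyperplane per cluster) are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

# Sketch of a corrector: measurements S are centred, reduced with the Kaiser
# rule (keep components whose eigenvalue exceeds the mean eigenvalue),
# whitened, and a separating hyperplane is fitted between correct (M) and
# incorrect (Y) predictions; points on the error side are flagged.
# All data below are synthetic stand-ins for real network measurements.

rng = np.random.default_rng(0)
M = rng.normal(loc=0.0, scale=1.0, size=(200, 5))  # "correct" measurements
Y = rng.normal(loc=3.0, scale=1.0, size=(40, 5))   # "incorrect" measurements
S = np.vstack([M, Y])

mean = S.mean(axis=0)
cov = np.cov((S - mean).T)                 # covariance of centred data
evals, evecs = np.linalg.eigh(cov)
keep = evals > evals.mean()                # Kaiser rule
W = evecs[:, keep] / np.sqrt(evals[keep])  # whitening transform

def project(X):
    """Centre and whiten a batch of measurements."""
    return (X - mean) @ W

# Hyperplane normal along the mean difference of the two whitened clusters,
# with the threshold midway between the projected cluster means
w = project(Y).mean(axis=0) - project(M).mean(axis=0)
threshold = 0.5 * (project(Y).mean(axis=0) + project(M).mean(axis=0)) @ w

def is_error(x):
    """Flag a single measurement as a predicted error."""
    return bool(project(x[None, :])[0] @ w > threshold)

print(is_error(Y[0]), is_error(M[0]))
```

In the full method each positively correlated cluster of errors would contribute its own hyperplane, so the error region is the union of several such half-spaces.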

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 99
707 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow

Authors: Sudipto Sarkar, Anamika Paul

Abstract:

Pseudoplastic fluids (n < 1, where n is the power-law index) are of great importance in the food, pharmaceutical and chemical process industries and deserve considerable attention. Unfortunately, owing to their complex flow behavior, little research is available even for the laminar flow regime. In the present work, a practical problem is addressed by numerical simulation: controlling the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate is positioned on the centerline of the cylinder, at varying distances from the cylinder, in order to determine the critical gap-ratio. If the plate is placed inside this critical gap, the vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid). Flow behavior has been studied for three different gap-ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for fluids with three different flow behavior indices (n = 1, 0.8 and 0.5). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For G/a = 2, the domain size is 37.5a × 16a, with 316 × 208 grid points in the streamwise and flow-normal directions respectively, chosen after a thorough grid-independence study. Fine and equal grid spacing is used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate. Away from the geometry, the meshes are unequal in size and stretched out. For the other gap-ratios, proportionate domain sizes and total grid points are used with a similar mesh distribution.
Velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip) boundary conditions at the upper and lower domain boundaries are used for the simulation. No-slip wall boundary conditions (u = v = 0) are imposed on both the cylinder and the splitter plate surfaces. The discretized forms of the fully conservative 2-D unsteady Navier-Stokes equations are then solved with Ansys Fluent 14.5, using the finite-volume SIMPLE algorithm, a default solver included in Fluent. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent's usefulness in academic research. A thorough analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that as the value of n decreases, the stretching of the shear layers also decreases, and these layers tend to roll up before the plate. For flow with high pseudoplasticity (n = 0.5), the nature of the vortex shedding changes and the value of the critical gap-ratio reduces. These are remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
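
For reference, the shear-thinning behavior that defines a pseudoplastic fluid can be sketched with the power-law (Ostwald-de Waele) model; the consistency index and shear rates below are illustrative values, not taken from the simulations:

```python
# Power-law fluid model: apparent viscosity mu_app = K * gamma_dot**(n - 1).
# For a pseudoplastic fluid (n < 1) the viscosity falls as shear rate grows.
# K (consistency index) and the shear rates here are illustrative numbers.

def apparent_viscosity(shear_rate, K=1.0, n=0.5):
    """Apparent viscosity of a power-law fluid at a given shear rate."""
    return K * shear_rate ** (n - 1)

# Shear thinning with n = 0.5: a 100-fold increase in shear rate
# cuts the apparent viscosity by a factor of 10
print(apparent_viscosity(1.0))    # 1.0
print(apparent_viscosity(100.0))  # 0.1
```

Setting n = 1 recovers the constant-viscosity Newtonian case used as the baseline in the study.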

Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio

Procedia PDF Downloads 110
706 Topographic and Thermal Analysis of Plasma Polymer Coated Hybrid Fibers for Composite Applications

Authors: Hande Yavuz, Grégory Girard, Jinbo Bai

Abstract:

Manufacturing of hybrid composites requires particular attention to overcome various critical weaknesses that originate from poor interfacial compatibility. A large number of parameters have to be considered to optimize the interfacial bond strength, either to avoid flaw sensitivity or the delamination that occurs in composites. For this reason, surface characterization of the reinforcement phase is needed in order to provide the data necessary to assess fiber-matrix interfacial compatibility prior to the fabrication of composite structures. Compared to conventional plasma polymerization processes such as radiofrequency and microwave, dielectric barrier discharge assisted plasma polymerization is a promising process that can be utilized to modify the surface properties of carbon fibers in a continuous manner. Finding the most suitable conditions (e.g., plasma power, plasma duration, precursor proportion) for the plasma polymerization of pyrrole in the post-discharge region, either in the presence or in the absence of p-toluene sulfonic acid monohydrate, as well as characterizing the plasma-polypyrrole-coated fibers, are the main aims of this work. Throughout the current investigation, atomic force microscopy (AFM) and thermogravimetric analysis (TGA) are used to characterize the plasma-treated hybrid fibers (CNT-grafted Toray T700-12K carbon fibers, referred to as T700/CNT). TGA results show how the polymer deposited on the fibers decomposes as a function of temperature up to 900 °C. All pyrrole-plasma-treated samples lost weight at a relatively fast rate up to 400 °C, which suggests the loss of polymeric structures. The weight loss between 300 and 600 °C is attributed to the evolution of CO2 due to the decomposition of functional groups (e.g., carboxyl compounds).
Bearing in mind the surface chemical structure, the higher the amount of carbonyl, alcohol, and ether compounds, the lower the stability of the deposited polymer. Thus, the highest weight loss is observed in the 1400 W, 45 s pyrrole+pTSA.H2O plasma-treated sample, probably because it carries a less stable polymer than the other plasma-treated samples. Comparison of the AFM images for untreated and plasma-treated samples shows that the surface topography may change on a microscopic scale. The AFM image of the 1800 W, 45 s treated T700/CNT fiber shows the most significant increase in roughness compared to the untreated T700/CNT fiber; namely, the surface roughness increased to ~3.6 times that of the untreated T700/CNT fiber. The increase in surface roughness relative to the untreated T700/CNT fiber may provide more contact points between fiber and matrix due to the increased surface area, which is believed to be beneficial for their application as reinforcement in composites.

Keywords: hybrid fibers, surface characterization, surface roughness, thermal stability

Procedia PDF Downloads 231
705 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation

Authors: Panagiotis Svarnas, Polykarpos Papadopoulos

Abstract:

Atmospheric-pressure cold plasmas continue to gain increasing interest for various applications due to their unique properties, such as cost-efficient production, high chemical reactivity, low gas temperature and adaptability. Numerous designs have been proposed for the production of these plasmas, in terms of electrode configuration, driving voltage waveform and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of the designs are based on dielectric-barrier discharges (DBDs), either in the filamentary or the glow regime. A special category of DBD-based atmospheric-pressure cold plasmas refers to the so-called plasma jets, where a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves reminiscent in many ways of streamer propagation, they exhibit discrete characteristics that are better captured by the terms 'guided streamers' or 'plasma bullets'. These 'bullets' travel with supersonic velocities both inside the dielectric barrier and in the channel formed by the noble gas during its penetration into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area, where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge and of the residual ionic charge left behind after bullet propagation in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner.
The plasma bullets are here produced by a custom device having a quartz tube as the dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium gas is fed into the tube, and schlieren photography is employed for mapping the flow field downstream of the tube orifice. The mixture mass conservation equation, the momentum conservation equation, the energy conservation equation in terms of temperature, and the helium transport equation are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects mainly due to momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after the bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
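
As a rough order-of-magnitude illustration of the momentum-transfer mechanism just described, the sketch below evaluates an electro-hydrodynamic body force density f = n_i e E exerted by residual ions on the neutral gas; the ion density and field magnitude are invented round numbers, not measured values from this work:

```python
# Order-of-magnitude EHD body force estimate: residual atomic ions of number
# density n_ion (m^-3) in a local electric field (V/m) transfer momentum to
# neutrals, giving a force per unit volume f = n_ion * e * E. The input
# values below are illustrative assumptions only.

E_CHARGE = 1.602176634e-19  # elementary charge, C

def ehd_force_density(n_ion, field):
    """Body force per unit volume (N/m^3) on the neutral gas."""
    return n_ion * E_CHARGE * field

# e.g. n_ion = 1e17 m^-3 in a 1e5 V/m field gives roughly 1.6e3 N/m^3
print(ehd_force_density(1e17, 1e5))
```

Even this crude estimate shows why the residual charge can measurably modify the helium flow field mapped by the schlieren photography.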

Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force

Procedia PDF Downloads 137
704 Polymer Dispersed Liquid Crystals Based on Poly Vinyl Alcohol Boric Acid Matrix

Authors: Daniela Ailincai, Bogdan C. Simionescu, Luminita Marin

Abstract:

Polymer dispersed liquid crystals (PDLC) represent an interesting class of materials which combine the film-forming ability and mechanical strength of polymers with the opto-electronic properties of liquid crystals. The proper choice of the two components - the liquid crystal and the polymeric matrix - leads to materials suitable for a large area of applications, from electronics to biomedical devices. The objective of our work was to obtain PDLC films with potential applications in the biomedical field, using poly vinyl alcohol boric acid (PVAB) as the polymeric matrix for the first time. Retaining all the valuable properties of poly vinyl alcohol (biocompatibility, biodegradability, water solubility, good chemical stability, and film-forming ability), PVAB brings the advantage of containing the electron-deficient boron atom and, due to this, should promote liquid crystal anchoring and a narrow polydispersity of the liquid crystal droplets. Two different PDLC systems have been obtained using two liquid crystals: a commercial nematic one, 4-cyano-4’-penthylbiphenyl (5CB), and a new smectic liquid crystal synthesized by us, buthyl-p-[p’-n-octyloxy benzoyloxy] benzoate (BBO). The PDLC composites have been obtained by the encapsulation method, working with four different polymeric matrix to liquid crystal ratios, from 60:40 to 90:10. In all cases, the composites were able to form free-standing, flexible films. Polarized light microscopy, scanning electron microscopy, differential scanning calorimetry, Raman spectroscopy, and contact angle measurements were performed in order to characterize the new composites. The new smectic liquid crystal was characterized using 1H-NMR and single-crystal X-ray diffraction, and its thermotropic behavior was established using differential scanning calorimetry and polarized light microscopy. 
Polarized light microscopy evidenced the formation of round birefringent droplets, with homeotropic anchoring in the first case and planar anchoring in the second, and with a narrow dimensional polydispersity, especially for the PDLC containing the largest amount of liquid crystal, as also evidenced by SEM. The obtained values of the water-to-air contact angle showed that the composites have a proper hydrophilic-hydrophobic balance, making them potential candidates for bioapplications. Moreover, our studies demonstrated that the water-to-air contact angle varies as a function of the PVAB matrix crystallinity degree, which can be controlled as a function of time. This allowed us to conclude that the use of PVAB as a matrix for obtaining PDLCs offers the possibility to modulate their properties for specific applications.

Keywords: 4-cyano-4’-penthylbiphenyl, buthyl-p-[p’-n-octyloxy benzoyloxy] benzoate, contact angle, polymer dispersed liquid crystals, poly vinyl alcohol boric acid

Procedia PDF Downloads 449
703 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in a sample of 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. 
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 Score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 Score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
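As an illustration of the encoding step, here is a minimal pure-Python sketch of two of the methods named above, one-hot encoding and k-mer counting; the exact preprocessing used in the study may differ:

```python
from itertools import product

def one_hot(seq):
    """One-hot encode a DNA sequence: each base becomes a 4-bit vector."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [bit for base in seq for bit in table[base]]

def kmer_counts(seq, k=2):
    """Count every possible k-mer over overlapping windows, yielding a
    fixed-length (4**k) numeric vector regardless of sequence length."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[km] for km in kmers]

print(one_hot("ACG"))
print(kmer_counts("ACGTAC", 2))
```

Both encodings produce the numeric vectors that classifiers such as SVMs, random forests, or KNN require; the Fourier and wavelet encodings mentioned above would instead transform a numeric mapping of the sequence into the frequency domain.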

Keywords: DNA encoding, machine learning, Fourier transform

Procedia PDF Downloads 22
702 Customer Focus in Digital Economy: Case of Russian Companies

Authors: Maria Evnevich

Abstract:

In modern conditions, in most markets, price competition is becoming less effective. On the one hand, there is a gradual decrease in margins in the main traditional sectors of the economy, so further price reduction becomes too ‘expensive’ for the company. On the other hand, the effect of price reduction is leveled out, and the reason for this phenomenon is likely to be informational. As a result, even if a company reduces prices, making its products more accessible to the buyer, there is a high probability that this will not lead to an increase in sales unless additional large-scale advertising and information campaigns are conducted. Conversely, a large-scale information and advertising campaign has a much greater effect by itself than a price reduction. At the same time, the cost of mass informing grows every year, especially in the main information channels. The article presents a generalization, systematization, and development of theoretical approaches and best practices in customer-focused business management and relationship marketing in the modern digital economy. The research methodology is based on the synthesis and content analysis of sociological and marketing research and on a study of the systems for handling consumer appeals and of the loyalty programs in the 50 largest client-oriented companies in Russia. In addition, an analysis of internal documentation on customers’ purchases in one of the largest retail companies in Russia made it possible to identify whether buyers prefer to make complex purchases in the one retail store with the best price image for them. The cost of attracting a new client is now quite high and continues to grow, so it becomes more important to retain customers and increase their involvement through marketing tools. A huge role is played by modern digital technologies, used both in advertising (e-mailing, SEO, contextual advertising, banner advertising, SMM, etc.) and in service. 
To implement the client-oriented omnichannel service described above, it is necessary to identify the client and work with the personal data provided when filling in the loyalty program application form. The analysis of the loyalty programs of 50 companies identified the following types of cards: discount cards, bonus cards, mixed cards, coalition loyalty cards, bank loyalty programs, aviation loyalty programs, hybrid loyalty cards, and situational loyalty cards. The use of loyalty cards allows a company not only to stimulate the customer to make ‘untargeted’ purchases but also to provide individualized offers and deliver more targeted information. The development of digital technologies and modern means of communication has significantly changed not only marketing and promotion but the economic landscape as a whole. The factors of competitiveness are now the digital capabilities of companies in the field of customer orientation: personalization of service, customization of advertising offers, optimization of marketing activity, and improvement of logistics.

Keywords: customer focus, digital economy, loyalty program, relationship marketing

Procedia PDF Downloads 163
701 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study

Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio

Abstract:

Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. For evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in its design and implementation. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for the health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme comprised six weekly group sessions (two hours each) encompassing strengths such as values and self-care, optimism, empathy, gratitude, forgiveness, and meaning of life and work. The feasibility criteria evaluated were: demand, acceptability, satisfaction with the programme and with the moderator, comprehension/generalization of contents, evaluation of the moderator (social skills and integrity/fidelity), adherence, and programme implementation. Overall, 11 retirees (F = 11), age range 54-75, from the metropolitan region of Porto Alegre-RS-Brazil took part in the study. The instruments used were: a qualitative admission questionnaire; the moderator's field diary; the Programme Evaluation Form, to assess participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and comprehension/generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form, to assess the moderator's social skills (a five-item 4-point Likert scale) and integrity/fidelity (a 10-item 4-point Likert scale), and participants' adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics as well as intraclass correlation coefficients were used for quantitative data and inter-rater reliability analysis. 
The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall attendance rate of 88.3%), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported comprehension/generalization of the contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's social skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), integrity/fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, descriptive and intraclass correlation results were considered adequate. Structural changes were introduced in the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The results obtained were satisfactory, allowing changes to be made for further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.
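For readers unfamiliar with the reliability statistic reported above, here is a minimal pure-Python sketch of a single-rater, absolute-agreement intraclass correlation, ICC(2,1); it is an illustrative implementation, not necessarily the exact variant computed in the study:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows (targets/sessions) x columns (raters)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    mse = (sst - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two observers in perfect agreement over three sessions
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))
```

Values near 1 indicate that the two observers' session ratings agree closely, which is the sense in which the ICCs above were judged adequate.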

Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees

Procedia PDF Downloads 193
700 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is a most natural means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people who can communicate orally cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type them, and, in the same way, it is easier to listen to audio played from a device than to read its output. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an 'Audio-Visual Co-Data Processing Pipeline'. This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the module types mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these frames. The numbers of the frames that contain target objects (the objects specified in the speech command) are saved as text. 
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, handled by including sample examples of each format in the prompt used by the GPT-3 model. Based on user preference, a new speech command format can be added by including some examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
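As a sketch of the command-interpretation step, the snippet below parses one hypothetical speech-command format into target labels and a time interval. The format, function name, and regular expression are illustrative only; the actual project delegates this understanding to GPT-3 prompting rather than a fixed pattern:

```python
import re

def parse_command(command):
    """Parse an assumed command format, e.g.
    'detect person and dog from 10 to 45 seconds', into the target
    labels and the [start, end] interval to cut from the video."""
    m = re.search(r"detect (.+?) from (\d+) to (\d+) seconds", command)
    if not m:
        return None  # command does not match the assumed format
    labels = [w.strip() for w in re.split(r",| and ", m.group(1)) if w.strip()]
    return {"labels": labels, "start": int(m.group(2)), "end": int(m.group(3))}

print(parse_command("detect person and dog from 10 to 45 seconds"))
```

Delegating this step to a language model, as the paper does, trades the rigidity of a fixed pattern for the flexibility of adding new formats via prompt examples.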

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 78
699 Steel Concrete Composite Bridge: Modelling Approach and Analysis

Authors: Kaviyarasan D., Satish Kumar S. R.

Abstract:

India, being vast in area and population and with great scope for international business, can expect strong growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of such bridges being repaired or newly built. The analysis and design of such bridges follow conventional procedures and end up with heavy and uneconomical sections. When subjected to strong seismic shaking, such heavy-class steel bridges are more likely to fail through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of a structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now used extensively for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be applied directly to bridges, because bridges have completely different performance requirements, behaviour, and typology. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next, and so on, under further loading. 
This kind of progressive collapse of a truss bridge depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. Ultimate collapse is ultimately governed by buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mostly with life safety and collapse prevention, whereas for bridges the concern is mostly the extent of damage and how quickly it can be repaired, with or without disturbing traffic, after a strong earthquake. The paper compiles the wide spectrum of approaches, from modeling to analysis, for steel-concrete composite truss bridges in general.
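Since ultimate collapse is attributed to buckling of the compression members, the capacity of a single member can be sketched with the classical Euler formula; the numbers below are purely illustrative and not taken from any bridge in the paper:

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical (Euler) buckling load of a slender compression member:
    P_cr = pi^2 * E * I / (K * L)^2, with K the effective-length factor
    (K = 1.0 for a pin-ended member)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative member: steel E = 200 GPa, I = 2e-5 m^4, length 6 m
print(euler_buckling_load(200e9, 2e-5, 6.0) / 1e3, "kN")
```

In a progressive-collapse simulation, a check of this kind (with inelastic corrections) is what flags each successive compression member as it reaches its capacity.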

Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge

Procedia PDF Downloads 183
698 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of the trajectories of chocolate conching process variables, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. 
Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option for determining the reference set for the milk chocolate dataset. The method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
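The core alignment idea shared by all three methods can be illustrated with a minimal dynamic-time-warping distance between two series of unequal length; this is a textbook sketch, not the KMT variant evaluated in the study:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series,
    allowing them to differ in length, as conching batches do."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: same shape, warped in time
```

Once every batch trajectory is warped onto a common time base in this fashion, equal-duration techniques such as MPCA charts and KNN classification become applicable.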

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 165
697 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines well-formed shapes in RDF graphs, named "shapes graphs". These shapes graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as enhancing the links between different datasets is an ongoing process. The conventional validation methods are a semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. 
Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls into multiple categories: (a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and (b) data integrity validation, which detects inconsistent data, for instance, whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
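A minimal sketch of what such a data-type shape might look like in SHACL Core follows; the ex: namespace, the sh:maxCount constraint, and the targeting of crm:E21_Person mirror the birthdate example above and are illustrative only, not shapes from the project:

```turtle
# Hypothetical shapes graph: check that the birthdate value attached via
# crm:P82a_begin_of_the_begin is a single xsd:dateTime literal.
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes/> .

ex:BirthDateShape
    a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;            # validate every person node
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;             # category (a): data type validation
        sh:maxCount 1 ;
    ] .
```

Category (b) checks, such as a birthdate preceding linked event dates, would typically require the SHACL-SPARQL extension described above, since they compare values across properties.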

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 251
696 Dyadic Video Evidence on How Emotions in Parent Verbal Bids Affect Child Compliance in a British Sample

Authors: Iris Sirirada Pattara-Angkoon, Rory Devine, Anja Lindberg, Wendy Browne, Sarah Foley, Gabrielle McHarg, Claire Hughes

Abstract:

Introduction: The “Terrible Twos” is a phrase used to describe toddlers 18-30 months old. It characterizes a transition from high dependency on caregivers in infancy to more autonomy and mastery of the body and environment. Toddlers at this age may also show more willfulness and stubbornness, which could predict a future trajectory leading to conduct disorders. Thus, an important goal for this age group is to promote responsiveness to caregivers (i.e., compliance). Existing literature tends to focus on praise to increase desirable child behavior. However, this relationship is not always straightforward, as some studies have found no, or even negative, association between praise and child compliance. Research suggests that positive emotions and affection shown through body language (e.g., smiles) and actions (e.g., hugs, kisses), along with a positive parent-child relationship, can strengthen the association between praise and child compliance. Nonetheless, few studies have examined the influence of positive emotionality within speech. This is important, as implementing verbal positive emotionality is easier than adjusting physical behavior. The literature also tends not to include fathers in study samples, as mothers were traditionally the primary caregivers. However, as child-caring duties are increasingly shared equally between mothers and fathers, it is important to include fathers, since studies have frequently found differences between female and male caregiver characteristics. Thus, the study addresses the gap in the literature in two ways: 1. exploring the influences of positive emotionality in parental speech, and 2. including an equal sample of mothers and fathers. Positive emotionality was expected to positively correlate with and predict child compliance. Methodology: This study analyzed toddlers (18-24 months) in dyadic interactions with their mothers and fathers. 
A Duplo (block) task was used in which parents had to work with their children for four minutes to build the Duplo model shown in a given photo. They were then told to clean up the blocks. Parental positive emotionality in different speech types (e.g., bids, praise, affirmations) and child compliance were measured. Results: The study found that mothers (M = 28.92, SD = 12.01) were significantly more likely than fathers (M = 23.01, SD = 12.28) to use positive verbal emotionality in their speech, t(105) = 4.35, p < .001. High positive emotionality in bids during the Duplo task and during Clean Up was positively correlated with more child compliance in each task, r(273) = .35, p < .001 and r(264) = .58, p < .001, respectively. Overall, parental positive emotionality in speech significantly predicted child compliance, F(6, 218) = 13.33, p < .001, R² = .27, with emotionality in verbal bids (t = 6.20, p < .001) and affirmations (t = 3.12, p = .002) being significant predictors. Conclusion: Positive verbal emotion may be useful for increasing compliance in toddlers. This can benefit compliance interventions as well as parent-child relationship quality through the reduction of conflict and child defiance. As this study is correlational in nature, it will be important for future research to test the directional influence of positive emotionality within speech.

Keywords: child temperament, compliance, positive emotion, toddler, verbal bids

Procedia PDF Downloads 180
695 Understanding the Experiences of School Teachers and Administrators Involved in a Multi-Sectoral Approach to the Creation of a Physical Literacy Enriched Community

Authors: M. Louise Humbert, Karen E. Chad, Natalie E. Houser, Marta E. Erlandson

Abstract:

Physical literacy is the motivation, confidence, physical competence, knowledge, and understanding to value and take responsibility for engagement in physical activities for life. In recent years, physical literacy has emerged as a determinant of health, promoting a positive lifelong physical activity trajectory. Physical literacy's holistic approach and emphasis on the intrinsic valuation of movement provide an encouraging avenue for intervention among children to develop competent and confident movers. Although there is research on physical literacy interventions, no evidence exists on the outcomes of multi-sectoral interventions involving a combination of home, school, and community contexts. Since children interact with and in a wide range of contexts (home, school, community) daily, interventions designed to address a combination of these contexts are critical to the development of physical literacy. Working with school administrators and teachers, sports and recreation leaders, and community members, our team of university and community researchers conducted and evaluated one of the first multi-contextual and multi-sectoral physical literacy interventions in Canada. Schools played a critical role in this multi-sector intervention, and in this project, teachers and administrators focused their actions on developing physical literacy in students 10 to 14 years of age through the instruction of physical literacy-focused physical education lessons. Little is known about the experiences of educators when they work alongside an array of community representatives to develop physical literacy in school-aged children. 
Given the uniqueness of this intervention, we sought to answer the question, ‘What were the experiences of school-based educators involved in a multi-sectoral partnership focused on creating a physical literacy enriched community intervention?’ A thematic analysis approach was used to analyze data collected from interviews with educators and administrators, informal conversations, documents, and observations at workshops and meetings. Results indicated that schools and educators played the largest role in this multi-sector intervention. Educators initially reported a limited understanding of physical literacy and expressed a need for resources linked to the physical education curriculum. Some anxiety was expressed by the teachers as their students were measured, and educators noted they wanted to increase their understanding and become more involved in the assessment of physical literacy. Teachers reported that the intervention's focus on physical literacy positively impacted the scheduling and their instruction of physical education. Administrators shared their desire for school- and division-level actions targeting physical literacy development, like the current focus on numeracy and literacy, treaty education, and safe schools. As this was one of the first multi-contextual and multi-sectoral physical literacy interventions, it was important to document the creation and delivery experiences, to encourage future growth in the area and to develop suggested best practices.

Keywords: physical literacy, multi-sector intervention, physical education, teachers

Procedia PDF Downloads 101
694 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology

Authors: Mahdi Farajzadeh Ajirlou

Abstract:

Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of cases with a family history of congenital heart disease or to offer the option of early gestation termination or close follow-up should a cardiac anomaly be proved. Fetal heart detection plays an important part in the assessment of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function in order to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging and cardiac catheterization. A Doppler ultrasound (DUS) image is a kind of real-time image with a better imaging effect on blood vessels and soft tissues. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set consisting of numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D scan, as an online 4D scan (a direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation (STIC; an off-line 4D, circular volume acquisition). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. 
Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), minimally invasive catheter intervention, is a new and evolving technique that allows for in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will ‘repair’ congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.

Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter

Procedia PDF Downloads 37
693 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material performance position, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known. However, its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to using this method that are less widely discussed. The first and perhaps biggest downside is waste. Second, its reliance on wood assemblies forming walls, floors and roofs conventionally nailed together through simple plate surfaces is structurally inefficient. It requires additional material through plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, when we look back at the history of wood construction in the airplane and boat manufacturing industries, we see a significant transformation in the relationship of structure to skin. The history of boat construction transformed from the indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of merged assemblies that drives the industry today. In 1911, Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces. It provides even greater strength with less material. 
The monocoque, which translates to ‘mono or single shell,’ is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture. However, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 77
692 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars

Authors: Tobias Schramm, Günther Prokop

Abstract:

In Germany, the number of road fatalities has been falling since 2010 at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is only mandatory in very few countries. In Germany, there is as yet no requirement for such a shock absorber examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle field significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was validated with steering wheel angle sine sweep driving maneuvers. 
The model was then used to simulate steering wheel angle sine and fishhook maneuvers, which investigate the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out in a realistic parameter space in order to demonstrate the effect of various vehicle characteristics on the impact of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This amplification has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness of the vehicle have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars.
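The roll-angle amplification near the roll natural frequency can be illustrated with a far simpler model than the five-mass full vehicle model used in the study: a linear one-degree-of-freedom roll oscillator in which damper degradation is represented as a reduced damping ratio. The sketch below uses illustrative parameters, not values from the paper.

```python
import math

def peak_roll_angle(damping_ratio, steps=200_000, dt=1e-4):
    """Steady-state roll-angle amplitude (rad) of a linear 1-DOF roll model
    I*phi'' + c*phi' + k*phi = M0*sin(wn*t), excited at the roll natural
    frequency and integrated with semi-implicit Euler. The inertia,
    stiffness, and roll moment below are illustrative assumptions."""
    I, k, M0 = 600.0, 60_000.0, 3_000.0   # kg m^2, N m/rad, N m
    wn = math.sqrt(k / I)                 # roll natural frequency (rad/s)
    c = 2.0 * damping_ratio * math.sqrt(k * I)
    phi, dphi, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        ddphi = (M0 * math.sin(wn * t) - c * dphi - k * phi) / I
        dphi += ddphi * dt
        phi += dphi * dt
        if t > 15.0:                      # ignore the starting transient
            peak = max(peak, abs(phi))
    return peak

healthy = peak_roll_angle(0.3)    # intact damper
degraded = peak_roll_angle(0.1)   # damper with oil/gas loss
print(healthy < degraded)         # less damping -> larger roll angle at resonance
```

At resonance the steady-state amplitude of this model is M0/(c·wn), so halving or thirding the damping ratio scales the roll angle up by the same factor, mirroring the roll-angle increase near the roll natural frequency reported above.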

Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber

Procedia PDF Downloads 4
691 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characters of unit cells (meta-molecules), the advance of metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices is subject to dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Different from the symmetric Lorentz spectral curve, Fano resonance shows a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curve. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic field biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still a tough task to tailor Fano resonance conveniently with fixed structural parameters. With the favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called “trapped modes,” with the merits of a simple structure and high-quality resonances in thin structures. 
By depositing graphene circular DR on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of graphene Fermi level, structural parameters and operation frequency. The results manifest that the obvious Fano peak can be efficiently modulated because of the strong coupling between incident waves and graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak position shifts to higher frequencies. The amplitude modulation depth of the Fano curves is about 30% if the Fermi level changes in the range of 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves become broad, and the resonant peak shows a blue shift. The results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
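The asymmetric line shape described above follows the standard Fano profile, σ(ε) ∝ (ε + q)²/(ε² + 1) with reduced detuning ε = 2(ω − ω₀)/γ. The sketch below evaluates this textbook formula; the resonance frequency, linewidth, and asymmetry parameter are illustrative values, not parameters fitted to the graphene DR structure studied here.

```python
def fano_lineshape(omega, omega0, gamma, q):
    """Textbook Fano profile sigma(eps) = (eps + q)^2 / (eps^2 + 1),
    with reduced detuning eps = 2*(omega - omega0)/gamma."""
    eps = 2.0 * (omega - omega0) / gamma
    return (eps + q) ** 2 / (eps ** 2 + 1.0)

# The asymmetry parameter q sets the line shape: a zero (dip) at eps = -q
# and a peak of height 1 + q^2 at eps = 1/q, steep between the two.
q = 2.0
omega0, gamma = 1.0, 0.1   # resonance at 1 THz, 0.1 THz linewidth (illustrative)
dip = fano_lineshape(omega0 - q * gamma / 2.0, omega0, gamma, q)      # eps = -q
peak = fano_lineshape(omega0 + gamma / (2.0 * q), omega0, gamma, q)   # eps = 1/q
print(round(dip, 6), round(peak, 6))
```

The closely spaced zero and peak are what produce the steep spectral variation and the high effective Q-factor that make Fano resonances attractive for sensing and modulation.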

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 343
690 Providing Support On-Time: Need to Establish De-Radicalization Hotlines

Authors: Ashir Ahmed

Abstract:

Peacekeeping is a collective responsibility of governments, law enforcement agencies, communities, families, and individuals. Moreover, the complex nature of peacekeeping activities requires a holistic and collaborative approach where various community sectors work together to form collective strategies that are likely to be more effective than strategies designed and delivered in isolation. Similarly, it is important to learn from past programs to evaluate the initiatives that have worked well and the areas that need further improvement. A review of recent peacekeeping initiatives suggests that tremendous efforts and resources have been put in place to deal with the emerging threat of terrorism, radicalization and violent extremism through a number of de-radicalization programs. Despite various attempts at designing and delivering successful programs for de-radicalization, the threat of people being radicalized is growing more than ever before. This research reviews the prominent de-radicalization programs to draw an understanding of their strengths and weaknesses. Some of the weaknesses in the existing programs include the following. Inaccessibility: Limited resources, the geographical location of potential participants (for offline programs), and inaccessibility or inability to use various technologies (for online programs) make it difficult for people to participate in de-radicalization programs. Timeliness: People might need to wait for a program on a set date/time to get the required information and to get their questions answered. This is particularly true for offline programs. Lack of trust: Privacy issues and a lack of trust between participants and program organizers are another hurdle to the success of de-radicalization programs. The fear of sharing participants' information with organizations (such as law enforcement agencies) without their consent leads them not to participate in these programs. 
Generalizability: The majority of these programs are very generic in nature and do not cater to the specific needs of an individual. Participants in these programs may feel that the contents are irrelevant to their individual situations and hence feel disconnected from the purpose of the programs. To address the above-mentioned weaknesses, this research developed a framework that recommends some improvements to de-radicalization programs. One of the recommendations is to offer a 24/7, secure, private, online hotline (also referred to as a helpline) for people who have any question, concern or situation to discuss with someone who is qualified (a counsellor) to deal with people who are vulnerable to being radicalized. To make these hotline services viable and sustainable, existing organizations offering support for depression, anxiety or suicidal ideation could additionally host them. These helplines should be available via phone, the internet, social media and in person. Since these services would be embedded within existing and well-known services, they would be likely to get more visibility and promotion. The anonymous and secure conversation between a person and a counsellor would ensure that a person can discuss the issues without being afraid of information being shared with any third party without his/her consent. The next stage of this project will include the operationalization of the framework by collaborating with other organizations to host de-radicalization hotlines and will assess the effectiveness of such initiatives.

Keywords: de-radicalization, framework, hotlines, peacekeeping

Procedia PDF Downloads 214
689 Text Mining Past Medical History in Electrophysiological Studies

Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly

Abstract:

Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, mainly unstructured and lacking in normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, and so has become especially relevant in medicine. A neurological patient's history allows the clinician to define the patient's symptoms and, along with the results of the nerve conduction study (NCS) and electromyography (EMG) test, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine the differences by sex and age in a large cohort based on textual-format notes. Methods: We retrospectively identified all patients with abnormal NCS between May 2016 and February 2022. Age, gender, and all NCS attribute reports were recorded, including the summary text. All patients' histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very common words (like ‘left’ and ‘right’) were deleted. Several words were replaced with their abbreviations. A bag-of-words approach was used to perform the analyses. Different visualizations that are common in text analysis were created to easily grasp the results. Results: We identified 5282 unique patients. Three thousand and five (57%) patients had documented PMH, of whom 60.4% (n=1817) were males. The overall median age was 62 years (range 0.12 – 97.2 years), and the majority of patients (83%) presented after the age of forty years. The top two documented medical histories were diabetes mellitus (DM) and surgery. 
DM was observed in 16.3% of the patients, and surgery in 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back and carpal tunnel release (CTR). When separating the data by sex, we see that DM and MVA are more frequent among males, while cancer and CTR are less frequent. In females, on the other hand, the top medical history was surgery, followed by DM. Other frequent histories among females were breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMHs were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are more relevant in these circumstances and, in addition, gain additional insights regarding sex and age differences. These insights might help in collecting epidemiological and demographic data, as well as raise new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition, and therefore using a coding system could be beneficial.
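The preprocessing pipeline described in the Methods (text cleansing, abbreviation replacement, removal of uninformative words, bag-of-words counting) can be sketched in a few lines. The abbreviation map, stop-word list, and sample notes below are illustrative assumptions, not the study's actual dictionaries or data.

```python
import re
from collections import Counter

# Illustrative abbreviation map and stop-word list (assumed, not the study's own).
ABBREV = {"diabetes mellitus": "dm", "motor vehicle accident": "mva",
          "carpal tunnel release": "ctr"}
STOP_WORDS = {"left", "right", "pmh", "s", "p"}

def bag_of_words(note):
    """Lowercase, replace known phrases with abbreviations, tokenize,
    drop uninformative words, and count the remaining tokens."""
    text = note.lower()
    for phrase, abbr in ABBREV.items():
        text = text.replace(phrase, abbr)
    tokens = re.findall(r"[a-z]+", text)
    return Counter(t for t in tokens if t not in STOP_WORDS)

# Two hypothetical summary-text notes standing in for the NCS reports.
notes = ["PMH: Diabetes Mellitus, s/p left carpal tunnel release",
         "PMH: motor vehicle accident, lumbar discopathy"]
corpus = sum((bag_of_words(n) for n in notes), Counter())
print(corpus.most_common())
```

Aggregating the per-note counters over a cohort (or over male and female subsets separately) yields the frequency rankings of PMH terms that the Results section reports.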

Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis

Procedia PDF Downloads 94
688 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy, equal to the sum of two energies. The first term does not change due to symmetrical compression; the second term is distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, and Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. His article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture. The authors of these theories and their associates are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown area of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. Photons are released during all processes in the universe, including by plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, yet they are noninvasive. 
This apparent contradiction arises because the chaotic motion of protons is accompanied by the chaotic radiation of photons in time and space. Such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, but not the strength, which is not a physical parameter since it cannot be measured or calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents examples of calculations of the energy of destruction, and photographs of the effects of the photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 91
687 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa

Authors: Tashinga Matindike

Abstract:

By means of definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of a creative process. Examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods. Archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and premise of the argument outlined in this paper. This paper aims to define, discuss and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology brought a high degree of efficiency in its own right, which also resulted in the progressive transformation of computer-generated visual output. Subsequently, this had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. Digital technology and the digitisation of creative processes then opened up new avenues for collating and recording information. One of the first visual artists to make use of digital technology software in his creative productions was the United States-based artist John Whitney. His inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli. 
In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What are the positive changes that have resulted in art production in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes on a multifaceted form, inclusive of the analysis of data attained by means of qualitative and quantitative approaches.

Keywords: digital art, digitisation, technology, archiving, transformation and repatriation

Procedia PDF Downloads 50
686 Concussion: Clinical and Vocational Outcomes from Sport Related Mild Traumatic Brain Injury

Authors: Jack Nash, Chris Simpson, Holly Hurn, Ronel Terblanche, Alan Mistlin

Abstract:

There is an increasing incidence of mild traumatic brain injury (mTBI) cases throughout sport and, with this, a growing interest from governing bodies to ensure these are managed appropriately and player welfare is prioritised. The Berlin consensus statement on concussion in sport recommends a multidisciplinary approach when managing those patients who do not have full resolution of mTBI symptoms. There is as yet no standardised guideline for the treatment of complex cases of mTBI in athletes. The aim of this project was to analyse the outcomes, both clinical and vocational, of all patients admitted to the mTBI service at the UK’s Defence Military Rehabilitation Centre Headley Court between 1st June 2008 and 1st February 2017 as a result of a sport-induced injury, and to evaluate potential predictive indicators of outcome. Patients were identified from a database maintained by the mTBI service. Clinical and occupational outcomes were ascertained from medical and occupational employment records, recorded prospectively, at the time of discharge from the mTBI service. Outcomes were graded based on the vocational independence scale (VIS) and clinical documentation at discharge. Predictive indicators, including referral time, age at time of injury, previous mental health diagnosis and a financial claim in place at time of entry to the service, were assessed using logistic regression. 45 patients were treated for sport-related mTBI during this time frame. Clinically, 96% of patients had full resolution of their mTBI symptoms after input from the mTBI service. 51% of patients returned to work at their previous vocational level, 4% had ongoing mTBI symptoms, 22% had ongoing physical rehabilitation needs, 11% required mental health input and 11% required further vestibular rehabilitation. 
Neither age, time to referral, pre-existing mental health condition nor compensation seeking had a significant impact on either vocational or clinical outcome in this population. The vast majority of patients reviewed in the mTBI clinic had persistent symptoms which could not be managed in primary care. A consultant-led, multidisciplinary approach to the diagnosis and management of mTBI has resulted in excellent clinical outcomes in these complex cases. High levels of symptom resolution suggest that this referral and treatment pathway is successful and is a model which could be replicated in other organisations with consultant led input. Further understanding of both predictive and individual factors would allow clinicians to focus treatments on those who are most likely to develop long-term complications following mTBI. A consultant-led, multidisciplinary service ensures a large number of patients will have complete resolution of mTBI symptoms after sport-related mTBI. Further research is now required to ascertain the key predictive indicators of outcome following sport-related mTBI.
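The logistic-regression assessment of predictive indicators can be sketched with a plain gradient-descent fit. The synthetic two-predictor cohort below is a stand-in for the study's real data (referral time, age, mental-health history, compensation status), so the fitted weights illustrate the method only, not the study's findings.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=1000):
    """Logistic regression fitted by batch gradient descent."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted outcome probability
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

# Synthetic cohort of 200 cases with two predictors; the binary outcome
# depends only on the first predictor, so its weight should dominate.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if xi[0] > 0.5 else 0 for xi in X]
w, b = fit_logistic(X, y)
print([round(v, 2) for v in w], round(b, 2))
```

In practice the magnitude and significance of each fitted coefficient (not just its sign) would be examined to judge whether a predictor such as referral time has a meaningful impact on outcome.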

Keywords: brain injury, concussion, neurology, rehabilitation, sports injury

Procedia PDF Downloads 156
685 Estimating Industrial Pollution Load in Phnom Penh by Industrial Pollution Projection System

Authors: Vibol San, Vin Spoann

Abstract:

Manufacturing plays an important role in job creation around the world. In 2013, it was estimated that there were more than half a billion jobs in manufacturing. In Cambodia in 2015, the primary industry occupied 26.18% of the total economy, while agriculture contributed 29% and the service sector 39.43%. The number of industrial factories, which are dominated by garment and textiles, has increased since 1994, mainly in Phnom Penh city. Approximately 56% of the total of 1,302 firms operate in the capital city of Cambodia. Industrialization to achieve economic growth and social development is directly responsible for environmental degradation, threatening ecosystems and human health. About 96% of the firms in Phnom Penh city are classified as most polluting or moderately polluting, which has contributed to environmental concerns. Despite an increasing array of laws, strategies and action plans in Cambodia, the Ministry of Environment has encountered constraints in conducting monitoring work, including a lack of human and financial resources, a lack of research documents, limited analytical knowledge, and a lack of technical references. Therefore, the information on industrial pollution necessary to set strategies, priorities and action plans on environmental protection issues is absent in Cambodia. In the absence of this data, effective environmental protection cannot be implemented. The objective of this study is to estimate the industrial pollution load by employing the Industrial Pollution Projection System (IPPS), a rapid environmental management tool for the assessment of pollution load, to produce a rational scientific basis for preparing future policy directions to reduce industrial pollution in Phnom Penh city. Due to the lack of industrial pollution data in Phnom Penh, industrial emissions to air, water and land, as well as the sum of emissions to all media (air, water, land), are estimated using the employment economic variable in IPPS. 
Due to the high number of employees, the total environmental load generated in Phnom Penh city is estimated to be 476,980.93 tons in 2014, the highest industrial pollution load of any location in Cambodia. The result clearly indicates that Phnom Penh city is the highest emitter of all pollutants in comparison with the environmental pollutants released by other provinces. The total emission of industrial pollutants in Phnom Penh represents 55.79% of the total industrial pollution load in Cambodia. Phnom Penh city generated 189,121.68 tons of VOC, 165,410.58 tons of toxic chemicals to air, 38,523.33 tons of toxic chemicals to land and 28,967.86 tons of SO2 in 2014. The results of the estimation show that the Textile and Apparel sector is the highest generator of toxic chemicals to land and air, and of toxic metals to land, air and water, while the Basic Metal sector is the highest contributor of toxic chemicals to water. The Textile and Apparel sector alone emits 436,015.84 tons of the total industrial pollution load. The results suggest that a reduction in industrial pollution could be achieved by focusing on the most polluting sectors.
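IPPS estimates a sector's pollution load as its employment multiplied by a per-employee pollution intensity coefficient. The sketch below illustrates that arithmetic only; the sector intensities and employment figures are made-up placeholders, not actual IPPS coefficients or Phnom Penh data.

```python
# IPPS-style estimate: load = employment x pollution intensity (per employee).
# All numbers below are illustrative placeholders, not real IPPS coefficients.
INTENSITY_TONS_PER_1000_EMPLOYEES = {
    "textile_apparel": 420.0,
    "basic_metal": 910.0,
    "food_beverage": 150.0,
}
EMPLOYMENT = {
    "textile_apparel": 250_000,
    "basic_metal": 8_000,
    "food_beverage": 30_000,
}

def estimate_loads(employment, intensity):
    """Return per-sector pollution loads (tons) and their total."""
    loads = {s: employment[s] / 1000.0 * intensity[s] for s in employment}
    return loads, sum(loads.values())

loads, total = estimate_loads(EMPLOYMENT, INTENSITY_TONS_PER_1000_EMPLOYEES)
print(loads, round(total, 2))
```

Because the load scales linearly with employment, an employment-heavy sector such as textiles can dominate the city total even at a moderate intensity, which mirrors the dominance of the Textile and Apparel sector reported above.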

Keywords: most polluting area, polluting industry, pollution load, pollution intensity

Procedia PDF Downloads 259