Search results for: hand control
625 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites
Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek
Abstract:
Magnetorheological elastomers (MREs) are a unique class of materials consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is governed by two crucial factors. The first is the magnetic performance of the particles; the second is the off-state stiffness of the elastomeric matrix. The former depends strongly on the intended application; however, the general rule is that higher magnetic performance of the particles provides a higher MR performance of the MRE. Since magnetic particles have poor stability against elevated temperature and acidic environments, several methods to overcome these drawbacks have been developed. In most cases, core-shell structures were prepared as a suitable means of protecting the magnetic particles against thermal and chemical oxidation. However, if the shell is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the relative value of the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tunability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this approach, short polymer chains of various lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved.
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected due to the presence of the short polymer chains. From an application point of view, such MREs can be utilized in magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, as it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell dispersed in an elastomeric matrix, can also find use in electromagnetic wave shielding applications.
Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding
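The abstract's definition of MR effectivity (field-on elastic modulus divided by the off-state modulus) can be sketched in a few lines. This is an illustrative calculation only: the function name and the example modulus values are hypothetical, not measurements from the study.

```python
def mr_effect(g_on_pa: float, g_off_pa: float) -> float:
    """Relative MR effect, as defined in the abstract: the storage
    modulus under an applied magnetic field divided by the off-state
    (zero-field) storage modulus."""
    if g_off_pa <= 0:
        raise ValueError("off-state modulus must be positive")
    return g_on_pa / g_off_pa

# Hypothetical values: for the same field-induced modulus, a softer
# matrix (lower off-state stiffness) yields a larger relative MR effect,
# which is why tunable cross-linking density matters.
stiff = mr_effect(g_on_pa=1.5e6, g_off_pa=1.0e6)
soft = mr_effect(g_on_pa=0.9e6, g_off_pa=0.4e6)
```

The comparison of `stiff` and `soft` illustrates why the off-state stiffness of the matrix is the second crucial factor alongside particle magnetization.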
Procedia PDF Downloads 209
624 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. the length-to-depth-to-width ratio, generally allow one to disregard processes occurring in the vertical and transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River is a large river basin draining Colombia from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water-level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in both the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken in order to correctly model the flow characteristics of the river in this region, to evaluate various scenarios, and to provide decision makers with erosion-control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river-bed samples were retrieved, as well as soil perforations over the banks. Hence, based on a DEM ground digital mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparison against available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and limitations related to pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings. In particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross-section, as a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
623 Shear Strength Envelope Characteristics of Lime-Treated Clays
Authors: Mohammad Moridzadeh, Gholamreza Mesri
Abstract:
The effectiveness of lime treatment of soils has commonly been evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of the clay particles must be realized to increase drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue with time under the highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of the reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Also, fully softened and residual strength measurements are used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The lime effect on frictional resistance is examined using samples of clays with different mineralogy and characteristics, which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties have been performed to measure the fully softened shear strength. To measure the residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted. This way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide the minimum frictional resistance; this is applicable to reactivated landslides and to part of first-time landslides. The Brenna clay, the highly plastic lacustrine clay of Lake Agassiz that causes slope instability along the banks of the Red River, is one of the soil samples used in this study.
The Brenna Formation, characterized as a uniform, soft to firm, dark grey glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study, one of the main sources of slope instability in the Harris County Flood Control District (HCFCD), is the Beaumont clay. The shear strengths of untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.
Keywords: Brenna clay, frictional resistance, lime treatment, residual
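Envelope nonlinearity of the kind evaluated above is often expressed through the secant friction angle, phi' = arctan(tau / sigma_n'), which decreases with normal stress for a curved envelope. A minimal sketch, assuming zero cohesion intercept (usual for residual strength); the stress pairs below are hypothetical illustrations, not data from this study.

```python
import math

def secant_friction_angle_deg(shear_stress_kpa: float,
                              normal_stress_kpa: float) -> float:
    """Secant friction angle phi' = arctan(tau / sigma_n') in degrees,
    assuming a zero cohesion intercept."""
    return math.degrees(math.atan(shear_stress_kpa / normal_stress_kpa))

# Hypothetical residual-strength pairs (sigma_n', tau) in kPa: a secant
# angle that drops as normal stress rises indicates a curved (nonlinear)
# strength envelope rather than a straight Mohr-Coulomb line.
pairs_kpa = [(50.0, 14.0), (100.0, 24.0), (200.0, 42.0), (400.0, 76.0)]
angles = [secant_friction_angle_deg(tau, sigma) for sigma, tau in pairs_kpa]
```

Plotting `angles` against normal stress is one simple way to compare the untreated and lime-treated envelopes described in the abstract.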
Procedia PDF Downloads 159
622 Analysis of Superconducting and Optical Properties in Atomic Layer Deposition and Sputtered Thin Films for Next-Generation Single-Photon Detectors
Authors: Nidhi Choudhary, Silke A. Peeters, Ciaran T. Lennon, Dmytro Besprozvannyy, Harm C. M. Knoops, Robert H. Hadfield
Abstract:
Superconducting nanowire single-photon detectors (SNSPDs) have become leading devices in quantum optics and photonics, known for their exceptional efficiency in detecting single photons from ultraviolet to mid-infrared wavelengths with minimal dark counts, low noise, and low timing jitter. Recent advances in materials science have focused attention on refractory metal nitride thin films such as NbN and NbTiN to enhance the optical properties and superconducting performance of SNSPDs, opening the way for next-generation detectors. These films have been deposited by several different techniques, such as atomic layer deposition (ALD), PlasmaPro advanced plasma processing (ASP), and magnetron sputtering. The fabrication flexibility of these techniques enables precise control over morphology, crystallinity, stoichiometry, and optical properties, which is crucial for optimising SNSPD performance. Hence, it is imperative to study the optical and superconducting properties of these materials across a wide range of wavelengths. This study provides a comprehensive analysis of the optical and superconducting properties of important materials in this category (NbN, NbTiN) prepared by different deposition methods. Using variable angle spectroscopic ellipsometry (VASE), we measured the refractive index, extinction coefficient, and absorption coefficient across a wide wavelength range (200-1700 nm) with a view to enhancing light confinement for optical communication devices. The critical temperature and sheet resistance were measured using a four-probe method in a custom-built, cryogen-free cooling system with a Sumitomo RDK-101D cold head and CNA-11C compressor. Our results indicate that ALD-deposited NbN shows a higher refractive index and extinction coefficient in the near-infrared region (~1500 nm) than sputtered NbN of the same thickness. Furthermore, the optical properties of ASP-deposited NbTiN were analysed at different substrate bias voltages and different thicknesses.
The substrate bias analysis indicates that the maximum values of the refractive index and extinction coefficient are observed for substrate biasing of 50-80 V across the investigated range of 0-150 V. The optical properties of sputtered NbN films were also investigated as a function of substrate temperature during deposition (100 °C-500 °C): the higher the substrate temperature during deposition, the higher the observed refractive index and extinction coefficient. Among all our superconducting thin films, ALD-deposited NbN possesses the highest critical temperature (~12 K), compared to sputtered (~8 K) and ASP-deposited (~5 K) films.
Keywords: optical communication, thin films, superconductivity, atomic layer deposition (ALD), niobium nitride (NbN), niobium titanium nitride (NbTiN), SNSPD, superconducting detector, photon-counting
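The extinction coefficient k measured by ellipsometry relates to the optical absorption coefficient through the standard identity alpha = 4*pi*k / lambda. A minimal sketch of that conversion; the example k value for an NbN film near the telecom band is a rough illustrative assumption, not a result reported above.

```python
import math

def absorption_coefficient_per_m(extinction_k: float,
                                 wavelength_nm: float) -> float:
    """Absorption coefficient alpha = 4*pi*k / lambda, where k is the
    imaginary part of the complex refractive index n + i*k."""
    wavelength_m = wavelength_nm * 1e-9
    return 4.0 * math.pi * extinction_k / wavelength_m

# Hypothetical example: k ~ 5 at 1500 nm gives alpha on the order of
# 4e7 m^-1, i.e. strong absorption over tens of nanometres -- the regime
# that makes few-nm superconducting films viable SNSPD absorbers.
alpha = absorption_coefficient_per_m(extinction_k=5.0, wavelength_nm=1500.0)
```

The reciprocal `1 / alpha` gives the penetration depth, a quick way to compare light confinement between the ALD and sputtered films.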
Procedia PDF Downloads 29
621 The Influence of Ibuprofen, Diclofenac and Naproxen on Composition and Ultrastructural Characteristics of Atriplex patula and Spinacia oleracea
Authors: Ocsana Opris, Ildiko Lung, Maria L. Soran, Alexandra Ciorita, Lucian Copolovici
Abstract:
Assessing the effects of environmental stress factors on both crop and wild plants of nutritional value is a very important research topic. Continuously growing worldwide consumption of drugs leads to significant environmental pollution, thus generating environmental stress. Understanding of the effects of important drugs on plant composition and ultrastructural modification is still limited, especially at environmentally relevant concentrations. The aim of the present work was to investigate the influence of three non-steroidal anti-inflammatory drugs (NSAIDs) on the chlorophyll content, carotenoid content, total polyphenol content, antioxidant capacity, and ultrastructure of orache (Atriplex patula L.) and spinach (Spinacia oleracea L.). All green leafy vegetables selected for this study were grown under controlled conditions and treated with solutions of different concentrations (0.1‒1 mg L⁻¹) of diclofenac, ibuprofen, and naproxen. After eight weeks of exposure of the plants to the NSAIDs, the chlorophyll and carotenoid contents were analyzed by high-performance liquid chromatography coupled with photodiode array and mass spectrometric detectors, and the total polyphenols and antioxidant capacity by ultraviolet-visible spectroscopy. Also, ultrastructural analyses of the vegetables were performed using transmission electron microscopy in order to assess the influence of the selected NSAIDs on the cellular organelles, mainly the photosynthetic organelles (chloroplasts), the energy-supplying organelles (mitochondria), and the nucleus as the coordinator of cellular metabolism. In comparison with the control plants, decreases in chlorophyll content were observed for the Atriplex patula L. plants treated with ibuprofen (11-34%) and naproxen (25-52%). The chlorophyll content of Spinacia oleracea L. was also affected, the lowest decrease (34%) being obtained with the naproxen treatment (1 mg L⁻¹).
Diclofenac (1 mg L⁻¹) reduced the total polyphenol content of Atriplex patula L. by 45%, and ibuprofen (1 mg L⁻¹) reduced the total polyphenol content of Spinacia oleracea L. by 20%. The results also indicate a moderate reduction of carotenoids and antioxidant capacity in the treated plants in comparison with the controls. The transmission electron microscopy investigations demonstrated that the green leafy vegetables were affected by the selected NSAIDs. Thus, this research contributes to a better understanding of the adverse effects of these drugs on the studied plants. It is important to mention that dietary intake of plants contaminated with these drugs, plants of important nutritional value, may also pose a risk to human health, but currently little is known about the fate of the drugs in plants and their effect on, or risk to, the ecosystem.
Keywords: abiotic stress, green leafy vegetables, pigment content, ultrastructure
Procedia PDF Downloads 125
620 Multiple Intelligences to Improve Pronunciation
Authors: Jean Pierre Ribeiro Daquila
Abstract:
This paper aims to analyze the use of the theory of multiple intelligences as a tool to facilitate students' learning. This theory, proposed by the American psychologist and educator Howard Gardner, was first established in 1983 and advocates that human beings possess eight intelligences, not only one, as defended by psychologists prior to his theory. These intelligences are bodily-kinesthetic, musical, linguistic, logical-mathematical, spatial, interpersonal, intrapersonal, and naturalist. This paper focuses on bodily-kinesthetic intelligence. Spatial and bodily-kinesthetic intelligences are exercised by athletes, dancers, and others who use their bodies in ways that exceed normal abilities, and the two are closely related: a quarterback or a ballet dancer needs both an awareness of body motions and abilities and a sense of the space involved in the action. Nevertheless, there are many reasons why classical ballet is more integrated with the other intelligences. Ballet dancers make it look effortless as they move across the stage, from the lifts to the toe points; there is therefore acting both in the performance of the repertoire and in hiding pain or physical stress. The ballet dancer must have great mathematical intelligence to perform a fast allegro, for instance, where each movement has to be executed in a specific millisecond. Flamenco dancers rely on their mathematical abilities as well, as the footwork requires the ability to make half, two, three, four, or even six movements in just one beat. However, the precision of the arm movements is freer than in ballet; for this reason, ballet dancers need to be more holistically aware of their movements. Our experiment will therefore test whether this greater attention required of ballet dancers leads them to better results in the training sessions when compared to flamenco dancers.
An experiment will be carried out in this study by training, through dance, a group of ballet dancers (a minimum of four years of dancing experience – experimental group 1) and a group of flamenco dancers (a minimum of four years of dancing experience – experimental group 2). Both experimental groups will be trained in two different domains – phonetics and chemistry – to examine whether there is a significant improvement in these areas compared to the control group (a group of regular students who will receive the same training through a traditional method). This paper, however, focuses on the phonetic training. Experimental group 1 will be trained with the aid of classical music plus bodily work. Experimental group 2 will be trained with flamenco rhythm and kinesthetic work. We would like to highlight that this study takes dance as an example of a possible area of strength; nonetheless, other types of arts can and should be used to support students, such as drama, creative writing, music, and others. The main aim of this work is to suggest that other intelligences, in the case of this study bodily-kinesthetic, can be used to help improve pronunciation.
Keywords: multiple intelligences, pronunciation, effective pronunciation training, short drills, musical intelligence, bodily-kinesthetic intelligence
Procedia PDF Downloads 96
619 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems
Authors: A. G. Akhundov
Abstract:
Obtaining unbiased data on the status of pipeline systems for oil and oil-product transportation becomes especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of the pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main causes of emergency situations is the geodynamic factor. Experience shows that emergencies occur within areas characterized by certain environmental conditions and develop according to similar scenarios, depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. The unified environment of a geographic information system (GIS) is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences.
The specifics of such systems include: multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, and data from in-line inspection and instrument measurements. The resulting 3D model becomes the basis of an information system providing the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they still occur.
Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning
Procedia PDF Downloads 189
618 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with random forests in particular outperforming the other algorithms. However, the conventional method is the better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage thresholds for pre-defined limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
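The random forest ground-motion modelling described above can be sketched in a few lines with scikit-learn. The feature set (magnitude, distance, Vs30 as the site condition) follows the abstract, but the data here are synthetic placeholders with built-in magnitude scaling and distance attenuation, not a real ground-motion catalog, and the trends encoded are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
magnitude = rng.uniform(3.0, 7.5, n)
log_distance = np.log(rng.uniform(5.0, 200.0, n))   # source-to-site distance, km
vs30 = rng.uniform(180.0, 760.0, n)                  # site condition proxy, m/s

# Synthetic ln(PGA): magnitude scaling, distance attenuation, a site term,
# and noise -- stand-ins for the physical trends the algorithms should
# recover without pre-defined equations or coefficients.
ln_pga = (1.2 * magnitude - 1.5 * log_distance
          - 0.4 * np.log(vs30 / 500.0)
          + rng.normal(0.0, 0.3, n))

X = np.column_stack([magnitude, log_distance, vs30])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)

# A physically sound model should predict weaker shaking farther from
# the source for the same magnitude and site.
near = model.predict([[6.5, np.log(10.0), 400.0]])[0]
far = model.predict([[6.5, np.log(150.0), 400.0]])[0]
```

Checking `near > far` is a simple version of the distance-dependency sanity check the abstract applies to the trained algorithms.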
Procedia PDF Downloads 106
617 Characterization of Phenolic Compounds from Carménère Wines during Aging with Oak Wood (Staves, Chips and Barrels)
Authors: E. Obreque-Slier, J. Laqui-Estaña, A. Peña-Neira, M. Medel-Marabolí
Abstract:
Wine is an important source of polyphenols. Red wines show important concentrations of non-flavonoid (gallic, ellagic, caffeic and coumaric acids) and flavonoid compounds [(+)-catechin, (-)-epicatechin, (+)-gallocatechin and (-)-epigallocatechin]. However, significant variability in the quantitative and qualitative distribution of chemical constituents in wine is to be expected, depending on an array of important factors such as the varietal differences of Vitis vinifera and cultural practices. It has been observed that Carménère grapes present a differential composition and evolution of phenolic compounds when compared to other varieties, and specifically to Cabernet Sauvignon grapes. Likewise, among the cultural practices, aging in contact with oak wood is a highly relevant factor. The extraction of different polyphenolic compounds from oak wood into wine during ageing produces both qualitative and quantitative changes. Recently, many new techniques have been introduced in winemaking. One of these involves putting new pieces of wood (oak chips or inner staves) into inert containers. It offers distinct, previously unavailable flavour advantages, as well as new options in wine handling. To the best of our knowledge, there is no information about the behaviour of Carménère wines (an emblematic Chilean cultivar) in contact with oak wood. In addition, the effect of aging time and wood product (barrels, chips or staves) on the phenolic composition of Carménère wines has not been studied. This study aims at characterizing the condensed and hydrolyzable tannins of Carménère wines during aging with staves, chips and barrels of French oak wood. The experimental design was completely randomized with two independent factors: aging time (0-12 months) and wood format (barrel, chips and staves).
The wines were characterized by spectrophotometric analysis (total tannins and fractionation of proanthocyanidins into monomers, oligomers and polymers) and HPLC-DAD (ellagitannins). The wines in contact with the different oak wood products showed a similar content of total tannins during the study, while the control wine (without oak wood) presented a lower content of these compounds. In addition, the polymeric proanthocyanidin fraction was the most abundant and the monomeric fraction the least abundant in all treatments on both sampling dates. However, significant differences in each fraction were observed between the wines in contact with barrels, chips, and staves on the two sampling dates. Finally, the wine from barrels presented the highest content of ellagitannins from the fourth to the last sampling date. In conclusion, the use of alternative formats of oak wood affects the chemical composition of wines during aging, and these enological products are an interesting alternative for contributing tannins to wine.
Keywords: enological inputs, oak wood aging, polyphenols, red wine
Procedia PDF Downloads 158
616 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine
Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski
Abstract:
Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in new car sales increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which can hardly be accepted given the time needed to refill a fuel tank. Therefore, ways to reduce the adverse features of cars equipped only with electric motors are being searched for. One of the methods is a combination of an electric motor as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with the best companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine that had seemed to pass into history. This lightweight, small engine with a rotating piston and a very low vibration level turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, such as the high fuel consumption, high emission of toxic substances, and short lifetime typical of its traditional application. Operating the engine at a constant rotational speed significantly increases its lifetime, and its small external dimensions enable compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus.
The research object was the Aixro XR50 rotary engine with an electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The recorded parameters included speed and torque as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change the engine speed, which means that the control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: electric vehicle, power generator, range extender, Wankel engine
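The kind of constant-speed (governor) algorithm investigated above can be sketched as a discrete PI controller adjusting throttle position to hold a target speed under a stepped load torque. Everything here is an illustrative assumption: the gains, the first-order engine model, and the load step are invented for the sketch and are not the values or the controller of the Aixro XR50 test stand.

```python
def simulate_governor(target_rpm=4000.0, steps=5000, dt=0.001,
                      kp=0.002, ki=0.01):
    """Discrete PI speed governor driving a crude first-order engine
    model; returns the engine speed at the end of the run."""
    rpm, integral = 3000.0, 0.0
    for i in range(steps):
        err = target_rpm - rpm
        u = kp * err + ki * integral
        throttle = min(1.0, max(0.0, u))
        if u == throttle:
            # Anti-windup: accumulate the integral only while the
            # throttle command is not saturated.
            integral += err * dt
        # Step change in load torque halfway through the run, standing in
        # for the eddy-current brake's load cycles on the dynamometer.
        load_torque = 20.0 if i < steps // 2 else 35.0
        net_torque = 60.0 * throttle - load_torque  # assumed engine gain
        rpm += net_torque * dt * 100.0              # assumed lumped inertia
    return rpm

final_rpm = simulate_governor()
```

With these gains the speed settles close to the target despite the load step, mirroring the observation that increasing and decreasing the load did not significantly change engine speed when the parameters are correctly selected.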
Procedia PDF Downloads 157
615 Public Participation in Political Transformation: From the Coup D'etat in 2014 to the Events Leading up to the Proposed Election in 2018 in Thailand
Authors: Pataramon Satalak, Sakrit Isariyanon, Teerapong Puripanik
Abstract:
This article uses recent events in Thailand as a case study for examining why democratic transition is necessary during political upheaval to ensure that the people's power remains unaffected. After seizing power in May 2014, the military, backed by anti-government protestors, selected and established its own system to govern the country. It set up the National Council for Peace and Order (NCPO), which established a People's Assembly aiming to reach a compromise between the conflicting opinions of the former pro-government and anti-government protesters. It plans to achieve this through political reform before returning sovereign power to the people via an election in 2018. If a governmental authority is not representative of the people (e.g. a military government), it does not count as a legitimate government. The military government's rule of Thailand over the last four years, from May 2014 to January 2018, has been widely controversial, specifically regarding its commitment to democracy, its human rights violations, and its manipulation of the rule of law. Democratic legitimacy relies not only on established mechanisms for public participation (like referendums or elections) but also on public participation grounded in accessible education and reform (often via NGOs) to ensure that the free and fair will of the people can be expressed. Through its actions over these four years, the Thai military government has damaged both of these components, impacting future public participation in politics.
The authors make some observations about the specific actions the military government has taken to erode the democratic legitimacy of future public participation: the increasing dominance of military courts over civil courts; civil society’s limited involvement in political activities; the drafting of a new constitution, the attempt to muster support for it through referenda, and the consequent delay of the organic law-making process; the structure of the legislative powers (the Senate and the members of parliament); and the control of people’s basic freedoms of expression, movement and assembly in political activities. One clear consequence of the military government’s specific actions over the last three years is the increased uncertainty among Thai people that their fundamental freedoms and political rights will be respected in the future. This will directly affect their participation in future democratic processes. The military government’s actions (e.g. its response to the UN representatives) will also have influenced potential international engagement in Thai civil society to help educate disadvantaged people about their rights and their participation in the political arena. These actions challenge the democratic idea that there should be a checking and balancing of power between people and government. These examples provide evidence that a democratic transition is crucial during any process of political transformation. Keywords: political transformation, public participation, Thailand coup d'etat 2014, election 2018
Procedia PDF Downloads 148
614 Autism and Work, from the Perspective of People in the Labor Market
Authors: Nilson Rogério Da Silva, Ingrid Casagrande, Isabela Chicarelli Amaro Santos
Abstract:
Introduction: People with Autism Spectrum Disorder (ASD) may face difficulties in social inclusion in different segments of society, especially in entering and staying at work. In Brazil, although there is legislation that equates ASD to a disability, the number of people with ASD at work is still low. The United Nations estimates that more than 80 percent of adults with autism are jobless. In Brazil, the scenario is even more nebulous because there is no control or tracking of accurate data on the number of individuals with autism and how many of them are inserted in the labor market. Pereira and Goyos (2019) found that there is practically no scientific production about people with ASD in the labor market. Objective: To describe the experience of people with ASD inserted in work, the facilities and difficulties found in professional practice, and the strategies used to keep the job. Methodology: The research was approved by the Research Ethics Committee. As inclusion criteria for participation, the professional should accept to participate voluntarily, be over 18 years of age and have had some experience with the labor market. As exclusion criteria, being under 18 years of age and having never worked in a work activity. Four people with a diagnosis of ASD, aged 22 to 32 years, participated in the research. For data collection, an interview script was used that addressed: 1) General characteristics of the participants; 2) Family support; 3) School process; 4) Insertion in the labor market; 5) Exercise of professional activity; 6) Future and autism; 7) Possible coping strategies. For the analysis of the data obtained, the full transcription of the interviews was carried out and the technique of Content Analysis was applied. Results: The participants reported problems in different aspects. In the school environment: difficulty in social relationships, and bullying.
Lack of adaptation of the school curriculum and the structure of the classroom; at university: difficulty in following the activities, carrying out group work, meeting deadlines and establishing networking; at work: little adaptation of the work environment, difficulty in establishing good professional bonds, difficulty in accepting changes in routine or operational processes, and difficulty in understanding unwritten social rules. Discussion: The lack of knowledge about what disability is and who the disabled person is leads to misconceptions and negative assumptions regarding their ability to work; in this context, people with disabilities need to constantly prove that they are able to work, study and develop as a human person, which can be classified as ableism. The adaptations and the use of technologies to facilitate the performance of people with ASD, although guaranteed in national legislation, are not always available, highlighting the difficulties and prejudice. Final Considerations: The entry and permanence of people with ASD at work still constitute a challenge to be overcome, involving changes in society in general, in companies, families and government agencies. Keywords: autism spectrum disorder (ASD), work, disability, autism
Procedia PDF Downloads 79
613 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty in anticipating issues in the clinical workflow until tests are running late or in error; it can then be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If scenarios could be modelled using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to allocate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error.
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software-agent technologies, these bots would be configurable to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results. Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
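The traffic-light colour coding described in the abstract can be sketched as a simple rate-threshold check. This is an illustrative assumption only: the abstract does not specify how "slow" and "critical" flow are quantified, so the thresholds and the function name below are hypothetical.

```python
def flow_status(current_rate, expected_rate,
                slow_fraction=0.75, critical_fraction=0.50):
    """Classify flow through one workflow stage as 'green', 'orange' or 'red'.

    current_rate / expected_rate are specimens processed per hour; the
    fraction thresholds are illustrative assumptions, not values from
    the paper.
    """
    if expected_rate <= 0:
        raise ValueError("expected_rate must be positive")
    ratio = current_rate / expected_rate
    if ratio >= slow_fraction:
        return "green"    # normal flow
    if ratio >= critical_fraction:
        return "orange"   # slow flow
    return "red"          # critical flow
```

For example, a stage expected to handle 50 specimens per hour but currently handling 30 would show orange (ratio 0.6), flagging a developing bottleneck before tests actually run late.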
Procedia PDF Downloads 286
612 Influence of Laser Treatment on the Growth of Sprouts of Different Wheat Varieties
Authors: N. Bakradze, T. Dumbadze, N. Gagelidze, L. Amiranashvili, A. D. L. Batako
Abstract:
Cereals are considered a strategic product in human life, and their demand is increasing with the growth of the world population. There is always a shortage of cereals in various areas of the globe. For example, Georgia's own production meets only 15-20% of the demand for grain, despite the fact that the country is considered one of the main centers of wheat origin. In Georgia, there are 14 types of wheat with more than 150 subspecies, including 40 subspecies of common wheat. Increasing wheat production is important for the country. One of the ways to solve the problem is to develop and implement new, environmentally and economically acceptable technologies. Such technologies include pre-sowing treatment of seed with a laser and with the associative nitrogen-fixing bacterium Azospirillum brasilense. Dika and Lomtagora are among the most common varieties in the region and in Georgia. Dika is a frost-resistant wheat with a high ability to adapt to the environment, resistant to lodging, and is sown in highlands. Dika's excellent properties are due to its strong immunity to fungal diseases; its grains are rich in protein and lysine. Lomtagora 126 is distinguished by its winter hardiness and drought resistance, and it has a great ability to germinate. Lomtagora is characterized by a strong root system and a high tillering capacity. It is an early, lodging-resistant variety, easy to thresh and suitable for mechanized harvesting, with large red grains. The plant is moderately resistant to fungal diseases. This paper presents some preliminary experimental results in which a continuous CO2 laser at a power density of 25-40 W/cm2 was used to irradiate grains at a feed rate of 10-15 cm/sec. The treatment was carried out on grains of Triticum aestivum L. var. Lutescens (local variety name Lomtagora 126) and Triticum carthlicum Nevski (local variety name Dika). The grains were also treated with an Azospirillum brasilense isolate (10^8-10^9 CFU/ml) obtained from the rhizosphere of wheat.
It was observed that the germination of the wheat was not significantly influenced by either laser or bacterial treatment. In the case of the variety Lomtagora 126, irradiation at an angle of 90° slightly improved the growth within 38 days of sowing, and irradiation at an angle of 90°+1 improved it by 23%. The treatment of seeds with Azospirillum brasilense in both irradiated and non-irradiated variants led to an improvement in the growth of sprouts: in the case of treatment with Azospirillum alone, by 22%, and with joint treatment of seeds with Azospirillum and irradiation, by 29%. In the case of the Dika wheat, irradiation alone led to an increase in growth of 8-9%, and the combined treatment of seeds with Azospirillum and irradiation of 10-15%, in comparison with the control. Thus, the combined treatment of wheat of different varieties provided the best effect on growth. Acknowledgment: This work was supported by Shota Rustaveli National Science Foundation of Georgia (SRNSFG) (Grant number CARYS 19-573). Keywords: laser treatment, Azospirillum brasilense, seeds, wheat varieties, Lomtagora, Dika
Procedia PDF Downloads 144
611 Shale Gas Accumulation of Over-Mature Cambrian Niutitang Formation Shale in Structure-Complicated Area, Southeastern Margin of Upper Yangtze, China
Authors: Chao Yang, Jinchuan Zhang, Yongqiang Xiong
Abstract:
The Lower Cambrian Niutitang Formation shale (NFS), deposited in a marine deep-shelf environment in the Southeast Upper Yangtze (SUY), possesses an excellent source-rock basis for shale gas generation; however, it is currently challenged by being over-mature with strong tectonic deformations, leading to much uncertainty about its gas-bearing potential. With emphasis on the shale gas enrichment of the NFS, analyses were made based on the regional gas-bearing differences obtained from field gas-desorption testing of 18 geological survey wells across the study area. Results show that the NFS bears a low gas content of 0.2-2.5 m³/t, and the eastern region of the SUY is higher than the western region in gas content. Moreover, the methane fraction presents a similar regional differentiation, with the western region at less than 10 vol.% and the eastern region generally at more than 70 vol.%. Through the analysis of geological theory, the following conclusions are drawn. The depositional environment determines the gas-enriching zones. In the western region, the Dengying Formation underlying the NFS in unconformity contact was mainly platform-facies dolomite with caves and thereby bears poor gas-sealing ability, whereas the Laobao Formation underlying the NFS in the eastern region was a set of siliceous rocks of shelf-slope facies, which can effectively prevent the shale gas from escaping from the NFS. The tectonic conditions control the gas-enriching bands in the SUY, which is located in the fold zones formed by the thrust of the South China plate towards the Sichuan Basin. Compared with the western region, located in trough-like folds, the eastern region at the fold-thrust belts was uplifted early and deformed weakly, resulting in a relatively lower maturity level and slighter tectonic deformation of the NFS. Faults determine whether shale gas can be accumulated on a large scale.
Four deep and large normal faults in the study area cut through the Niutitang Formation to the Sinian strata, directly causing a large spillover of natural gas in the adjacent areas. Among the secondary faults developed within the shale formation, the reverse faults generally have a positive influence on shale gas accumulation, while the normal faults have the opposite influence. Overall, the shale gas enrichment targets of the NFS are the areas with a certain thickness of siliceous rocks at the base of the Niutitang Formation, near the margin of the paleo-uplift and with less developed faults. These findings provide direction for shale gas exploration in South China and also provide references for areas with similar geological conditions all over the world. Keywords: over-mature marine shale, shale gas accumulation, structure-complicated area, Southeast Upper Yangtze
Procedia PDF Downloads 147
610 A Q-Methodology Approach for the Evaluation of Land Administration Mergers
Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen
Abstract:
The nature of land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of customers arising from the land. However, it is known that strategic decisions on restructuring are in most cases resisted in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scant theoretical knowledge concerning the psychological methodologies that can extract deeper perceptions from diverse spatial experts in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision of merging the cadastre and land registry components of land administration. The empirical approach prescribed by Q methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, the selection of participants, the Q-sorting exercise, factor extraction with PQMethod and, finally, narrative development by the logic of abduction. The paper uses 36 statements developed from the dominant competing-values theory, which stands out for its reliability and validity, purposively selects 19 participants to complete the Q-sorting exercise, proceeds with factor extraction from the diversity using the varimax and judgemental rotation provided by PQMethod, and effects the narrative construction using the logic of abduction.
The findings from the diverse perceptions of cadastral professionals on the merger of the land registry and cadastre components in Sweden’s mapping agency (Lantmäteriet) show that focus is inclined towards perfecting the relationship between legal expertise and technical spatial expertise. There is much emphasis on tradition, loyalty and communication attributes, which concern the organisation’s internal environment, rather than on innovation and market attributes, which reveal customer behaviour and the needs arising from the changing human-land relationship. It can be concluded that Q methodology offers effective tools that pursue a psychological approach for the evaluation and gradation of decisions of strategic change by extracting the local perceptions of spatial expertise. Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation
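The Q-sorting step described above typically forces each participant to arrange the 36 statements into a quasi-normal grid before factor extraction. A minimal sketch of validating such a forced distribution follows; the column counts in `FORCED_GRID` are an assumed example, since the paper does not report its actual grid.

```python
from collections import Counter

# Assumed quasi-normal forced distribution over a -4..+4 scale for 36
# statements (2+3+4+5+8+5+4+3+2 = 36). Illustrative only, not the
# grid used in the study.
FORCED_GRID = {-4: 2, -3: 3, -2: 4, -1: 5, 0: 8, 1: 5, 2: 4, 3: 3, 4: 2}


def is_valid_qsort(sort_values):
    """Check one participant's Q-sort against the forced distribution.

    sort_values: list of 36 scale positions, one per statement.
    Returns True only when the counts per column match the grid exactly.
    """
    return Counter(sort_values) == Counter(FORCED_GRID)
```

A sort that places one extra statement in any column is rejected, which is what makes the resulting correlation matrix between participants comparable before varimax or judgemental rotation in PQMethod.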
Procedia PDF Downloads 194
609 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of SFS dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence. Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
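The direct deconvolution idea described above can be summarized by the standard relations below. The notation is assumed for illustration and may differ from the paper's exact formulation: G is the (invertible) filter kernel, hats denote Fourier transforms, and starred quantities are deconvolved fields.

```latex
% Sketch of the direct deconvolution relations (notation assumed).
\begin{align}
  \bar{u}_i(\mathbf{x}) &= \int G(\mathbf{x}-\mathbf{r})\, u_i(\mathbf{r})\, \mathrm{d}\mathbf{r}
    && \text{(filtering with kernel } G\text{)} \\
  \hat{u}_i^{*}(\mathbf{k}) &= \hat{G}^{-1}(\mathbf{k})\, \hat{\bar{u}}_i(\mathbf{k})
    && \text{(direct inversion for an invertible filter)} \\
  \tau_{ij} &= \overline{u_i^{*} u_j^{*}} - \bar{u}_i\, \bar{u}_j
    && \text{(reconstructed SFS stress)}
\end{align}
```

The filter-to-grid ratio matters here because the inversion in the second relation can only recover scales that the grid actually resolves, which is consistent with the abstract's finding that an FGR of 1 leaves the SFS dynamics under-resolved while an FGR of 4 allows near-complete reconstruction.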
Procedia PDF Downloads 75
608 The Neuropsychology of Obsessive-Compulsive Disorder
Authors: Mia Bahar, Özlem Bozkurt
Abstract:
Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts ('obsessions') and/or activities ('compulsions') that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition mediated by pathology in particular neural circuits and susceptible to particular pharmacotherapeutic and psychotherapeutic treatments. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are often repetitive and intrusive thoughts that invade consciousness and are extremely hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until the thoughts, such as those of contamination, intrude once again. These rituals are strengthened through negative reinforcement because they allow the person to avoid anxiety and uncertainty. The thoughts are described as autogenous, meaning they seem to come from nowhere. Such unwelcome thoughts are related to actions through what is known as thought-action fusion: the thought becomes equated with an action, such that if the person refuses to perform the ritual, something bad might happen, and so the ritual is performed to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disturbed by compulsions and obsessions.
Studies show OCD has an estimated 1.1% prevalence, making it a challenging issue with high co-morbidity with other conditions such as depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been some discussion that OCD might be genetic. OCD has been linked to families in studies of familial aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as common and rare variants. Numerous studies have also presented solid evidence in favor of a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both shared and individual genetic risks. Keywords: compulsions, obsessions, neuropsychiatric, genetic
Procedia PDF Downloads 64
607 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries
Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri
Abstract:
Since 2007, the Chinese food industry has experienced serious food-contamination incidents in the baby dairy industry, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Due to growing concerns among consumers about food safety and protection, and high pressure from central government, companies must take radical action to ensure food quality protection through the use of an appropriate quality management system. Previously, though researchers have investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked. Issues associated with baby dairy products and their quality have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations and specific issues (such as melamine and aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach is employed, whereby preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. Through the literature review and the data findings, it has been revealed that many theories, models, conceptualisations, and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but their implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but their implementation still needs improvement.
For instance, the companies have to take measures to improve their processes and procedures in line with relevant standards. The government needs to make more interventions and take a greater supervisory role in the production process. In general, this research presents implications for the regulatory bodies, the Chinese Government and dairy food companies. There are food safety laws in China, but they have not been widely practiced by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations. The Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept the interventions from the regulatory bodies and government; they also need to ensure that production, storage, distribution and other processes follow the relevant rules and standards. Keywords: baby dairy product, food quality, milk powder contamination, quality management system
Procedia PDF Downloads 473
606 Diversified Farming and Agronomic Interventions Improve Soil Productivity, Soybean Yield and Biomass under Soil Acidity Stress
Authors: Imran, Murad Ali Rahat
Abstract:
One of the factors affecting crop production and nutrient availability is acidic stress. The most important constraint under acidic stress conditions is phosphorus deficiency, which results in stunted growth and low yield because of inefficient nutrient cycling. At the Agriculture Research Institute Mingora Swat, Pakistan, experiments were carried out for the first time over the course of two consecutive summer seasons in 2016 (year 1) and 2017 (year 2) with the goal of increasing crop productivity and nutrient availability under acidic stress. The experimental treatments comprised three organic sources (peach nano-black carbon, compost, and dry-based peach wastes), three phosphorus rates, and two beneficial microorganisms (Trichoderma and PSB). The findings showed that, under acid stress, peach organic sources had a significant impact on yield and yield components. Among the organic sources, the application of nano-black carbon produced the greatest thousand-seed weight of 164.6 g, while among the beneficial microbes, seed inoculation with phosphorus-solubilizing bacteria (PSB) increased the thousand-seed weight compared to Trichoderma soil application. The thousand-seed weight was also significantly affected by the phosphorus rates: the treatment of 100 kg P ha-1 produced the highest thousand-seed weight (167.3 g), followed by 75 kg P ha-1 (162.5 g). Compost amendments provided the highest seed yield (2,140 kg ha-1), comparable to the application of nano-black carbon (2,120 kg ha-1). The lowest seed yield (1,808 kg ha-1) was observed with peach residues. Compared to seed inoculation with PSB (1,913 kg ha-1), soil treatment with Trichoderma resulted in the higher seed yield (2,132 kg ha-1). Applying phosphorus to the soybean crop greatly increased its output.
The highest seed yield (2,364 kg ha-1) was obtained with 100 kg P ha-1, which was comparable to 75 kg P ha-1 (2,335 kg ha-1), while the lowest seed yield (1,569 kg ha-1) was obtained with 50 kg P ha-1. The average values showed that peach organic sources produced the greatest SOC (10.0 g kg-1) compared with control plots (3.3 g kg-1). Treated plots had a maximum soil P of 19.7 mg kg-1, while stressed plots had a maximum soil P of 4.8 mg kg-1. Peach nano-black carbon yielded the highest soil P levels (21.6 mg kg-1), while peach compost resulted in the lowest. Among the beneficial microbes, PSB also improved soil P (21.1 mg kg-1) compared with Trichoderma (18.3 mg kg-1). Regarding the P treatments, the application of 100 kg P ha-1 produced significantly higher soil P values (26.8 mg kg-1), followed by 75 kg P ha-1 (18.3 mg kg-1), while 50 kg P ha-1 produced the lowest soil P values (14.1 mg kg-1). SOC increased under peach nano-black carbon (13.7 g kg-1) compared with peach wastes and compost. In contrast to PSB (8.8 g kg-1), soil-treated Trichoderma was shown to have a greater SOC (11.1 g kg-1). Higher among the P levels. Keywords: acidic stress, trichoderma, beneficial microbes, nano-black carbon, compost, peach residues, phosphorus, soybean
Procedia PDF Downloads 77
605 Effects of Narghile Smoking in Tongue, Trachea and Lung
Authors: Sarah F. M. Pilati, Carolina S. Flausino, Guilherme F. Hoffmeister, Davi R. Tames, Telmo J. Mezadri
Abstract:
The effects of narghile smoking on the tissues of the oral cavity, trachea and lung, and the associated inflammation, have been questioned lately. The objective of this study was to identify histopathological changes and the presence of inflammation by exposing mice to narghile smoke in a whole-body study. The animals were divided into 4 groups of 5 animals each: one control group and groups exposed for 7, 15 and 30 days. The animals were exposed to conventional hookah smoke from Mizo-brand unwashed tobacco (0.5%) burned with EcOco-brand coconut fiber measuring 2 cm × 2 cm. Sessions lasted 30 minutes per day for 7, 15 or 30 days. The animals were exposed to 35 ml of tobacco smoke during two seconds of every minute, with the remaining 58 seconds being pure air. Afterward, the mice were sacrificed and submitted to histological evaluation of tissue sections. In the tongue of the 7-day group, epithelial areas with acanthosis, hyperkeratosis and epithelial projections were found; deeper in the tissue, more intense inflammation was observed. All alteration processes increased significantly as the days of exposure increased. In the trachea of the 7-day group, there was a decrease in the thickness of the pseudostratified epithelium and a slight loss of cilia, giving rise to a metaplasia process that was established in the 31-day sampling, when the epithelium became stratified. In the connective tissue, the presence of defense cells and the formation of new vessels were observed, evidencing a chronic inflammatory process, which decreased over the course of the samples due to the deposition of collagen fibers, as seen in the 15- and 31-day groups. Among the structures of the lung, the study focused on the bronchioles and alveoli.
From the 7-day group onward, intra-alveolar septum thickness increased, the alveolar space decreased, and an inflammatory infiltrate with mononuclear and defense cells and the formation of new vessels were observed, increasing the number of red blood cells in the region. The results showed a progressive increase in the signs of change in the region as the days passed, showing that narghile smoking stimulates alterations mainly in the alveoli (where gas exchange occurs and no alterations should be present) and calling attention to its harmful and aggressive effect. These data also highlight the harmful effect of smoking, since the presence of acanthosis, hyperkeratosis, epithelial projections, and inflammation evidences a process of cellular alteration to protect the tongue tissue. Narghile smoking also stimulates both epithelial and inflammatory changes in the trachea, in addition to a process of metaplasia, a factor that reinforces its harmful effect and carcinogenic potential.
Keywords: metaplasia, inflammation, pathological constriction, hyperkeratosis
Procedia PDF Downloads 173
604 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage
Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni
Abstract:
Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various angles and heights of measurement, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated, and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then verified against a verification board with known dimensions. Results indicated that the low-dose diagnostic X-ray can clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture.
The distances from the X-ray source and detector to the object must be standardised to control for possible magnification changes and to allow comparison. To account for this, specific scanning heights and angles were identified to allow for parallel scanning of the relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used to re-evaluate the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage
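The magnification issue arises from simple projection geometry: with a point source, an object at source-to-object distance SOD, imaged on a detector at source-to-detector distance SDD, appears enlarged by the factor SDD/SOD. A minimal sketch of the correction (illustrative only; the distances are hypothetical and the eXero-dr's actual calibration procedure is not described in the abstract):

```python
def magnification(sdd, sod):
    # Projection magnification for a point source:
    # image size = object size * SDD / SOD.
    if not 0 < sod <= sdd:
        raise ValueError("require 0 < SOD <= SDD")
    return sdd / sod

def true_size(measured, sdd, sod):
    # Recover the object's real size from its measured image size.
    return measured / magnification(sdd, sod)

# Hypothetical geometry: source 1300 mm from the detector and 1100 mm
# from the patient midline. An organ boundary measured as 132 mm on the
# detector is really 132 * 1100 / 1300 ≈ 111.7 mm.
organ_mm = true_size(132.0, 1300.0, 1100.0)
```

Standardising SDD and SOD (or recording them per scan) makes such corrections consistent across postures and sessions, which is what allows plate coverage and organ boundaries to be compared directly.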
Procedia PDF Downloads 122
603 Hospice-Shared Care for a Child Patient Supported with Extracorporeal Membrane Oxygenation
Authors: Hsiao-Lin Fang
Abstract:
Every life is precious, and comprehensive care should be provided to individuals who are in the final stages of their lives. Hospice-shared care aims to provide optimal symptom control and palliative care to terminal (cancer) patients through the implementation of shared care, and to support patients and their families in making various physical and psychological adjustments in the face of death. This report examines a 10-year-old boy who suffered an out-of-hospital cardiac arrest (OHCA). He fainted while swimming at school and underwent 31 minutes of cardiopulmonary resuscitation (CPR). While receiving treatment at the hospital, he was placed on extracorporeal membrane oxygenation (ECMO) due to unstable hemodynamics. Urgent cardiac catheterization suggested acute fulminant myocarditis or an underlying cardiomyopathy with acute decompensation. Despite active rescue efforts by the medical team, hemodynamics still showed only a mean pressure value. Interdepartmental hospice-shared care was implemented for the patient, and a do-not-resuscitate (DNR) order was signed after family discussions were conducted. Assistance and instructions were provided as part of the comfort care process. A farewell gathering attended by the patient’s relatives, friends, teachers, and classmates was organized in the intensive care unit (ICU) in order to look back on the patient’s life and the beautiful memories that were created, as well as to alleviate the sorrow felt by family members, including the patient’s father and sister. For example, the patient was presented with drawings and accompanied to a garden to pick flowers. In this manner, the patient was able to say goodbye before death. Finally, the patient’s grandmother and father participated in the clinical hospice care and post-mortem care processes. A hospice-shared care clinician conducted regular follow-ups and provided care to the family of the deceased, supporting family members through the period of mourning.
Birth, old age, sickness, and death are the natural phases of human life. In recent years, growing attention has been paid to human-centered hospice care. Hospice care is individual holistic care provided by a professional team, and it involves the provision of comprehensive care to a terminal patient. Hospice care aims to satisfy the physical, psychological, mental, and social needs of patients and their families. It does not involve the cessation of treatment but rather avoids the exacerbation or extension of the suffering endured by patients, thereby preserving dignity and quality of life during the end-of-life period. Patients enjoy the company of others as they complete the last phase of their lives, and their families also receive guidance on how they can move on with their own lives after the patient’s death.
Keywords: hospice-shared care, extracorporeal membrane oxygenation (ECMO), child patient
Procedia PDF Downloads 140
602 Evaluation of Batch Splitting in the Context of Load Scattering
Authors: S. Wesebaum, S. Willeke
Abstract:
Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or undesirable from a logistical point of view, load scattering affects the production processes. ‘Load’ characterizes the timing and quantity of production orders (e.g., in work content hours) arriving at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, whereby the logistics objectives are negatively affected. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, and outsourcing of operations or shifting them to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be achieved completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly contains qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare.
In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of an opportunity for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting our mathematical quantification approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. After describing the method, it is explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on the logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is improved without causing additional costs.
Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering
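The effect of batch splitting on load scattering can be illustrated numerically. The sketch below uses the coefficient of variation of period loads as a simple, normalizable stand-in scattering measure (it is not the authors' quantification approach, and the Nyhuis stretch factor α1 is not reproduced here); splitting one large order into equal sub-batches spread over consecutive periods smooths the load profile:

```python
import statistics

def load_scattering(loads):
    # Coefficient of variation of the period loads: one simple,
    # normalizable illustration of load scattering (not the authors'
    # quantification approach).
    return statistics.pstdev(loads) / statistics.fmean(loads)

def split_batch(loads, period, parts):
    # Split the order arriving in `period` into `parts` equal sub-batches
    # scheduled in consecutive periods, leaving total work unchanged.
    out = list(loads)
    work = out[period]
    out[period] = 0.0
    for i in range(parts):
        out[period + i] += work / parts
    return out

loads = [10.0, 40.0, 10.0, 10.0]     # work content hours per period
smoothed = split_batch(loads, 1, 3)  # the 40 h order split over 3 periods
```

With the hypothetical profile above, the split leaves the total work content unchanged while reducing the coefficient of variation, which is the qualitative effect the article evaluates quantitatively.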
Procedia PDF Downloads 399
601 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants consisted of coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source-control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-source-evaluation-informed remediation decision-making uses this information to develop remediation end-point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also between three and eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an ongoing reduction in PAH concentrations in harbour sediments. The flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not the migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
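The comparison between the historical estimate and the monitored fluxes is simple arithmetic; a short sketch with the values stated in the abstract (~800 kg/year pre-remediation estimate, 17-119 kg/year measured during remediation):

```python
# PAH flux values from the abstract (kg/year).
historical_flux = 800.0                 # pre-remediation estimate
monitored_flux = [17.0, 97.0, 119.0]    # measured during remediation

# Fraction of the historical estimate actually observed in each
# monitored year: all well under one fifth of the old figure.
fractions = [f / historical_flux for f in monitored_flux]
```

Expressed this way, even the highest monitored year carried less than 15% of the historically assumed load, which is the basis for the abstract's conclusion that the STPs were no longer the dominant PAH source during remediation.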
Procedia PDF Downloads 239
600 “Uninformed” Religious Orientation Can Lead to Violence in Any Given Community: The Case of African Independence Churches in South Africa
Authors: Ngwako Daniel Sebola
Abstract:
Introductory Statement: Religions are necessary as they offer and teach something to their adherents. People in one religion may not have a complete understanding of the Supreme Being (Deity) of a religion other than their own. South Africa, like other countries in the world, comprises various religions, including Christianity. Almost 80% of the South African population adheres to the Christian faith, though in different denominations and sects. Each church fulfils spiritual needs that perhaps others cannot fill. The African Independent Churches are one of the denominations in the country. These churches arose as a protest against Western forms and expressions of Christianity. Their major concern was to develop an indigenous expression of Christianity. The relevance of the African Independent Churches includes addressing the needs of the people holistically. Controlling diseases was an important aspect of change in different historical periods. Through healing services, leaders of African churches are able to attract many followers. The healing power associated with the founders of many African Initiated Churches leads people to follow and respect them as true leaders within many African communities. Despite their strong points, the African Independent Churches, like many others, face a variety of challenges, especially conflicts. Ironically, destructive conflicts have resulted in violence. Such violence demonstrates a lack of informed religious orientation among those concerned. This paper investigates and analyses the causes of conflict and violence in the African Independent Churches. The researcher used the Shembe and International Pentecostal Holiness Churches in South Africa as a point of departure. As a solution to curb violence, the researcher suggests useful strategies for handling conflicts. Methodology: Comparative and qualitative approaches have been used as methods of collecting data in this research.
The intention is to analyse the similarities and differences of violence among members of the Shembe and International Pentecostal Holiness Churches. Equally important, the researcher aims to obtain data through interviews, questionnaires, and focus groups, among other methods, interviewing fifteen individuals from the two churches. Findings: Leadership squabbles and power struggles appear to be the main contributing factors to violence in many Independent Churches. Ironically, the violence has resulted in the loss of life and the destruction of property, as in the case of the Shembe and International Pentecostal Holiness Churches. Violence is an indication that congregations and some leaders have not been properly equipped to deal with conflict. Concluding Statement: Conflict is a common part of human existence in any given community. The concern is that when such conflict becomes contagious, it leads to violence. There is a need to understand conflict consciously and objectively in order to devise appropriate measures for handling it. Conflict management calls for emotional maturity, self-control, empathy, patience, tolerance, and an informed religious orientation.
Keywords: African, church, religion, violence
Procedia PDF Downloads 116
599 Influence of Gamma-Radiation Dosimetric Characteristics on the Stability of the Persistent Organic Pollutants
Authors: Tatiana V. Melnikova, Lyudmila P. Polyakova, Alla A. Oudalova
Abstract:
As a result of environmental pollution, agricultural products and foodstuffs inevitably contain residual amounts of persistent organic pollutants (POPs). Special attention must be given to organic pollutants, including various organochlorinated pesticides (OCPs). The priority OCPs include DDT (and its metabolite DDE), alpha-HCH, and gamma-HCH (lindane). The control of these substances proceeds from the requirements of sanitary norms and rules. At the same time, it is often overlooked that the primary product can pass through technological processing (in particular, irradiation treatment), as a result of which the transformation of the physicochemical forms of the initial polluting substances is possible. The goal of the present work was to study the radiation degradation of OCPs at various gamma-radiation dosimetric characteristics. The problems posed for achieving this goal were: to evaluate the content of the priority OCPs in food, and to study the character of the degradation of OCPs in model solutions (with micro-concentrations commensurate with their real content in agricultural and food products) depending upon the dosimetric characteristics of the gamma radiation. Qualitative and quantitative analysis of OCPs in food and model solutions was carried out with a Varian 3400 gas chromatograph (Varian, Inc., USA) and a Varian Saturn 4D chromatography-mass spectrometer (Varian, Inc., USA). Solutions of DDT, DDE, and the alpha and gamma isomers of HCH (0.01, 0.1, 1 ppm) were irradiated on the "Issledovatel" (60Co) and "Luch-1" (60Co) installations at a dose of 10 kGy with a variation of dose rate from 0.0083 up to 2.33 Gy/sec. It was established experimentally that OCP residual concentrations in individual samples of food products (fish, milk, cereal crops, meat, butter) are in the range 10⁻¹-10⁻⁴ mg/kg, the value depending on the contamination of the territory and natural migration processes. The results were used in the preparation of the OCP model solutions.
The dependence of the degradation extent of OCPs on the gamma-irradiation dose rate has a complex nature. According to our data, at a dose of 10 kGy the degradation extent of OCPs at first increases, passes through a maximum (over the range 0.23-0.43 Gy/sec), and then decreases with further increase of the dose rate. The character of this dependence is preserved for the various OCPs, in polar and nonpolar solvents, and does not vary with the change of concentration of the initial substance. The conditions of the maximal radiochemical yield of OCP degradation were also determined: gamma radiation with a dose of 10 kGy in the dose rate range 0.23-0.43 Gy/sec; an initial OCP concentration of 1 ppm; and the use of 2-propanol as the solvent after preliminary removal of oxygen. Based on the finding from the model solutions that the degradation extent of the pesticides and the qualitative composition of the OCP radiolysis products depend on the dose rate, it was decided to continue the research into the radiochemical transformations of OCPs in foodstuffs at various dose rates.
Keywords: degradation extent, dosimetric characteristics, gamma-radiation, organochlorinated pesticides, persistent organic pollutants
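For reference, the "degradation extent" in such radiolysis studies is conventionally the fraction of the initial pesticide destroyed by the treatment. The abstract does not give the authors' exact formula, so the definition sketched below is an assumption, with a hypothetical numerical example:

```python
def degradation_extent(c0, c):
    # Fraction of the initial pesticide destroyed by irradiation:
    # (C0 - C) / C0. A conventional definition, assumed here because
    # the abstract does not state the authors' exact formula.
    if c0 <= 0 or c < 0:
        raise ValueError("require c0 > 0 and c >= 0")
    return (c0 - c) / c0

# Hypothetical example: a 1 ppm lindane solution reduced to 0.25 ppm
# after a 10 kGy dose corresponds to a degradation extent of 0.75.
extent = degradation_extent(1.0, 0.25)
```

Plotting this quantity against the dose rate at a fixed dose is what reveals the maximum in the 0.23-0.43 Gy/sec range reported above.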
Procedia PDF Downloads 249
598 Carboxyfullerene-Modified Titanium Dioxide Nanoparticles in Singlet Oxygen and Hydroxyl Radicals Scavenging Activity
Authors: Kai-Cheng Yang, Yen-Ling Chen, Er-Chieh Cho, Kuen-Chan Lee
Abstract:
Titanium dioxide nanomaterials offer superior protection for human skin against the full spectrum of ultraviolet light. However, some literature reviews have indicated that they might be associated with adverse effects such as cytotoxicity or reactive oxygen species (ROS) generation due to their nanoscale. The surface of fullerene is covered with π electrons constituting aromatic structures, which can effectively scavenge a large amount of radicals. Unfortunately, the poor solubility of fullerenes in water, their severe aggregation, and their toxicity in biological applications when dispersed in solvents have limited the use of fullerenes. Carboxyfullerenes have acted as radical scavengers for several years. Some reports indicate that carboxyfullerene not only decreases the concentration of free radicals in the surroundings but also prevents cells from declining in number or undergoing apoptosis under UV irradiation. The aim of this study is to decorate fullerene C70-carboxylic acid (C70-COOH) on the surface of titanium dioxide nanoparticles (P25) for the purpose of scavenging ROS during irradiation. The modified material is prepared through the esterification of C70-COOH with P25 (P25/C70-COOH). The binding edge and structure are studied using transmission electron microscopy (TEM) and Fourier transform infrared (FTIR) spectroscopy. The diameter of P25 is about 30 nm, and C70-COOH is found to be conjugated on the edge of P25 in an aggregated morphology with a size of ca. 100 nm. In the next step, FTIR was used to confirm the binding structure between P25 and C70-COOH. Two new peaks appear at 1427 and 1720 cm⁻¹ for P25/C70-COOH, resulting from the C–C stretch and the C=O stretch formed during esterification with dilute sulfuric acid. The IR results further confirm the chemically bonded interaction between C70-COOH and P25.
In order to provide evidence of the radical-scavenging ability of P25/C70-COOH, we chose pyridoxine (vitamin B6) and terephthalic acid (TA) to react with singlet oxygen and hydroxyl radicals, respectively. We utilized these chemicals to observe the radical-scavenging state by detecting the intensity of ultraviolet absorption or fluorescence emission. The UV spectra were measured using different concentrations of C70-COOH-modified P25 with 1 mM pyridoxine under UV irradiation for various durations. The results revealed that the concentration of pyridoxine was higher after three hours with P25/C70-COOH than with the control (P25 only). This indicates that fewer radicals were available to react with pyridoxine because of their capture by P25/C70-COOH. The fluorescence spectra were observed by measuring P25/C70-COOH with 1 mM terephthalic acid under UV irradiation for various durations. The fluorescence intensity of TAOH decreased within ten minutes with P25/C70-COOH. It was found that the fluorescence intensity increased after thirty minutes, which could be attributed to the saturation of C70-COOH in the absorption of radicals. Nevertheless, the results showed that the modified P25/C70-COOH could reduce the radicals in the environment. Therefore, we expect that P25/C70-COOH is a potential material for use as an antioxidant.
Keywords: titanium dioxide, fullerene, radical scavenging activity, antioxidant
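The terephthalic acid probe works because TA reacts with hydroxyl radicals to form fluorescent 2-hydroxyterephthalic acid (TAOH), so lower TAOH fluorescence means fewer free ·OH radicals. A hedged sketch of how a scavenged fraction could be computed from such intensities (the intensity values are hypothetical, not the paper's data):

```python
def scavenged_fraction(i_control, i_sample):
    # Fraction of hydroxyl radicals removed, inferred from TAOH
    # fluorescence: `i_control` is the intensity without the scavenger
    # (P25 only), `i_sample` with the P25/C70-COOH scavenger present.
    if i_control <= 0:
        raise ValueError("control intensity must be positive")
    return max(0.0, (i_control - i_sample) / i_control)

# Hypothetical intensities after 10 min of UV irradiation:
# control (P25 only) = 1000 a.u., with the C70-COOH shell = 640 a.u.,
# i.e. roughly a third of the hydroxyl radicals were scavenged.
fraction = scavenged_fraction(1000.0, 640.0)
```

The same ratio computed at later time points would show the saturation effect noted above: once the C70-COOH sites are exhausted, the sample intensity climbs back toward the control and the inferred fraction falls.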
Procedia PDF Downloads 404
597 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil and facilitates their entry into and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. The third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the policy. A questionnaire will be used as the tool for this evaluation, to identify the operational elements and build indicators, through the Environment of Evaluation Application, a computational application developed for building questionnaires. At the end, the need to propose changes to the draft law of the National Nanotechnology Policy will be assessed. Initial studies relating to the first specific objective have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are focused on environmental impact.
Regarding the general panorama of other countries, some findings have also emerged. The United States has included the nanoforms of substances in an existing EPA (Environmental Protection Agency) program under the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoforms of substances. Both programs are based on the study and identification of the environmental risks associated with nanomaterials, taking into consideration the product life cycle. In relation to Brazil, and regarding the third specific objective, it is notable that the country does not yet have any regulations applicable to nanostructures, although a draft law is in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing, industrial waste management, notification of accidents, and the application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts, nor whether the national environmental agencies will know how to apply them correctly. This study is intended to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.
Keywords: environment, management, nanotechnology, politics
Procedia PDF Downloads 122
596 ISIS and Social Media
Authors: Neda Jebellie
Abstract:
New information and communication technologies (ICT) have not only revolutionized the world of communication but have also strongly impacted the state of international terrorism. Using the potential of social media, the new wave of terrorism can easily recruit new jihadi members, spread its violent ideology, and garner financial support. IS (Islamic State), as the most dangerous terrorist group, has already conquered a great deal of social media space and has deployed sophisticated web-based strategies to promote its extremist doctrine. In this respect, the vastly popular social media are the perfect tools for IS to establish its virtual caliphate (e-caliphate) and e-Ommah (e-community). Using social media to release violent videos of beheading journalists, burning hostages alive, and mass killings of prisoners is an IS strategy to terrorize and subjugate its enemies. Several Twitter and Facebook accounts affiliated with IS have targeted the young generation of Muslims all around the world. In fact, IS terrorists use modern means of communication not only to share information and conduct operations but also to justify their violent acts. The strict Wahhabi doctrine of ISIS is based on a fundamentalist interpretation of Islam in which religious war against non-Muslims (jihad) and the killing of infidels (qatal) are praised and recommended. Via social media, IS disseminates its propaganda to inspire sympathizers across the globe. Combating this new wave of terrorism, which exploits new communication technologies, is the most significant challenge for authorities. Before the rise of the internet and social media, governments had to control only mosques and religious gatherings, such as Friday sermons (Jamaah prayers), to prevent the spread of extremism among the Muslim community in their country.
ICT and new communication technologies have heightened the challenge of dealing with Islamic radicalism and have amplified its threat. According to official reports, some governments, such as the UK, have even created a special force of “Facebook warriors” to engage in unconventional warfare in the digital age. Compared with other terrorist groups, IS has effectively grasped the potential of social media. Its horrifying videos released on YouTube easily went viral and were retweeted and shared by thousands of social media users. While some social media platforms, such as Twitter and Facebook, have shut down many accounts allegedly linked to IS, new ones are created immediately, so merely blocking their websites and suspending their accounts cannot solve the problem, as terrorists simply recreate accounts. To combat cyber terrorism, focusing on disseminating counter-narrative strategies can be a solution. Creating websites and providing online materials that propagate peaceful and moderate interpretations of Islam can provide a cogent alternative to extremist views.
Keywords: IS (Islamic State), cyber terrorism, social media, terrorism, information and communication technologies
Procedia PDF Downloads 487