Search results for: systems quality
2122 A Comparative Analysis of Conventional and Organic Dairy Supply Chain: Assessing Transport Costs and External Effects in Southern Sweden
Authors: Vivianne Aggestam
Abstract:
Purpose: Organic dairy products have steadily increased in popularity with consumers in Sweden in recent years, leading to more transport activities. The main aim of this study was to compare the transport costs and the environmental emissions generated by organic and conventional dairy production in Sweden. The objective was to evaluate differences and environmental impacts of transport between the two production systems, allowing a more transparent understanding of the real impact of transport within the supply chain. Methods: A partial attributional Life Cycle Assessment was conducted based on a comprehensive survey of Swedish farmers, dairies and consumers regarding their transport needs and costs. Interviews addressed the farmers and dairies. Consumers were targeted through an online survey. Results: Higher transport inputs from conventional dairy transportation arise mainly via feed and soil management at farm level. The regional organic milk brand illustrates lower initial transport burdens at farm level; however, after leaving the farm, it had equal or higher transportation requirements. This was mainly due to the location of the dairy farm and shorter product expiry dates, which require more frequent retail deliveries. Organic consumers tend to use public transport more than private vehicles. Consumers using private vehicles for shopping trips primarily bought conventional products, for which price was the main deciding factor. Conclusions: Organic dairy products that emphasise their regional attributes do not ensure less transportation and may therefore not be a more “climate smart” option for the consumer. This suggests that the idea of localism needs to be analysed from a more systemic perspective. Fuel and regional feed efficiency can be further improved, mainly via fuel type and the types of vehicles used for transport.
Keywords: supply chains, distribution, transportation, organic food production, conventional food production, agricultural fossil fuel use
Procedia PDF Downloads 454
2121 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, with a median area under the receiver operating characteristic curve (AUC) of 68% and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
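As a rough illustration of the pipeline described above (RFECV feature selection around a linear-kernel SVM followed by a Gradient Boosting classifier), a minimal sketch using scikit-learn might look as follows. The synthetic data and scoring choices are placeholders and do not reproduce the study's methylation features or cohort.

```python
# Minimal sketch (not the authors' code): RFECV feature selection with a linear SVM,
# then a Gradient Boosting classifier evaluated by cross-validated recall and AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data standing in for epigenetic features and diabetes labels.
X, y = make_classification(n_samples=300, n_features=200, n_informative=31, random_state=0)

# Recursive feature elimination with cross-validation around a linear-kernel SVM.
selector = RFECV(SVC(kernel="linear"), step=10, cv=StratifiedKFold(5), scoring="recall")
X_selected = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

# Gradient Boosting on the selected features, scored on recall and ROC AUC.
gb = GradientBoostingClassifier(random_state=0)
recall = cross_val_score(gb, X_selected, y, cv=5, scoring="recall")
auc = cross_val_score(gb, X_selected, y, cv=5, scoring="roc_auc")
print(f"median recall: {np.median(recall):.3f}, median AUC: {np.median(auc):.3f}")
```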
Procedia PDF Downloads 53
2120 Impact of Fluid Flow Patterns on Metastable Zone Width of Borax in Dual Radial Impeller Crystallizer at Different Impeller Spacings
Authors: A. Čelan, M. Ćosić, D. Rušić, N. Kuzmanić
Abstract:
Conducting crystallization in an agitated vessel requires a proper selection of mixing parameters that results in the production of crystals of specific properties. In dual impeller systems, which are characterized by more complex hydrodynamics due to possible fluid flow interactions, revealing a clear link between mixing parameters and crystallization kinetics is still an open issue. The aim of this work is to establish this connection by investigating how fluid flow patterns, generated by two impellers mounted on the same shaft, reflect on the metastable zone width of borax decahydrate, one of the most important parameters of the crystallization process. The investigation was carried out in a 15-dm³ bench-scale batch cooling crystallizer with an aspect ratio (H/T) equal to 1.3. For this purpose, two radial straight blade turbines (4-SBT) were used for agitation. Experiments were conducted at different impeller spacings at the state of complete suspension. During the process of an unseeded batch cooling crystallization, solution temperature and supersaturation were continuously monitored, which enabled determination of the metastable zone width. Hydrodynamic conditions achieved in the vessel at the different impeller spacings investigated were analyzed in detail. This was done firstly by measuring the mixing time required to attain the desired level of homogeneity. Secondly, fluid flow patterns generated in the described dual impeller system were both photographed and simulated with VisiMix Turbulent software, and a comparison of these two visualization methods was performed. The experimentally obtained results showed that metastable zone width is definitely affected by the hydrodynamics in the crystallizer. This means that this crystallization parameter can be controlled not only by adjusting the saturation temperature or cooling rate, as is usually done, but also by choosing a suitable impeller spacing that will result in the formation of crystals of the desired size distribution.
Keywords: dual impeller crystallizer, fluid flow pattern, metastable zone width, mixing time, radial impeller
Procedia PDF Downloads 195
2119 Comparison of Formation Sensitivity Gap between Islamic Maybank Indonesia and Islamic Maybank Malaysia
Authors: Puji Sucia Sukmaningrum, Achsania Hendratmi, Noven Suprayogi, Muhammad Madyan
Abstract:
Theoretically, Islamic banks in Indonesia and Malaysia are not necessarily exposed to interest rate fluctuations, since they do not use interest-based instruments. Both countries use a dual banking system in which Islamic and conventional banking systems coexist. This situation means that the profit-sharing level of the Islamic banks is indirectly affected by interest rate fluctuations from the conventional banking system. One of the risk management tools for anticipating the risk of interest rate fluctuation is gap management, whose purpose is to narrow the difference between Rate Sensitive Assets (RSA) and Rate Sensitive Liabilities (RSL). The gap thus formed gives information about the risk potential in Islamic banks with respect to interest rate fluctuations. This study aims to determine the position of the gap formed at Islamic Maybank Indonesia and Islamic Maybank Malaysia, and to analyze the difference in the formation of the gap based on the period of sensitivity. This study is quantitative comparative research using sensitivity gap analysis, the independent sample t-test, and the Mann-Whitney method. The data used were secondary data from the Maturity Profile contained in the Annual Financial Reports of Islamic Maybank Indonesia and Islamic Maybank Malaysia for the 2011 to 2015 period. The results show that, cumulatively, the gap formed was a negative gap. From the results of the independent sample t-test and Mann-Whitney test, the formation of the gap in Islamic Maybank Indonesia and Islamic Maybank Malaysia shows a significant difference for sensitivity periods of ≤ 1 month and >1-3 months, while the period of sensitivity of >3-12 months does not. The results show that, even though Indonesia and Malaysia use the same dual banking system, the gap values are different. The difference in debt policy between Indonesia and Malaysia also affects the gap sensitivity in debt. It can be concluded that each country needs appropriate gap management to support its Islamic banking performance specifically.
Keywords: assets and liability management, gap management, interest rate risk, Islamic bank
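A simplified illustration of the gap calculation and the two-sample tests mentioned above is sketched below. The bucket figures are invented for illustration only and are not taken from the Maybank maturity profiles.

```python
# Illustrative sketch only: sensitivity gap = RSA - RSL for one maturity bucket,
# compared across two banks with an independent t-test and a Mann-Whitney U test.
# All numbers are made up; they are not from the annual reports analysed in the study.
import numpy as np
from scipy import stats

# Hypothetical yearly RSA and RSL observations (2011-2015) for the "<= 1 month" bucket.
gap_bank_a = np.array([120, 95, 110, 130, 105]) - np.array([150, 140, 160, 170, 155])  # RSA - RSL
gap_bank_b = np.array([80, 70, 90, 85, 75]) - np.array([60, 65, 70, 75, 68])

t_stat, t_p = stats.ttest_ind(gap_bank_a, gap_bank_b, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(gap_bank_a, gap_bank_b, alternative="two-sided")
print(f"mean gaps: {gap_bank_a.mean():.1f} vs {gap_bank_b.mean():.1f}")
print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```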
Procedia PDF Downloads 259
2118 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through an intense reliability margin investigation with focus on assembly process attributes, process equipment control, in-process metrology and also a forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into Non-Operating and Operating tests with focus on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitoring phase will be implemented, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design changes, process and equipment parameters are in control. Predictable product reliability at early product development will enable on-time sample qualification delivery to customers, will optimize product development validation and the effective use of development resources, and will avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 170
2117 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures
Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny
Abstract:
Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of the raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of the raw biomass materials can be reduced by the use of torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. The goal of the pretreatment from a chemical point of view is the removal of water and the acidic groups of hemicelluloses, or the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of biomass against biodegradation increases, while its energy density increases. The volume of the raw material decreases, so the expenses of transportation and storage are reduced as well. Biooil is the major product during pyrolysis and an important by-product during torrefaction of biomass. The composition of biooil mostly depends on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody (black locust) and two herbaceous samples (rape straw and wheat straw). The biooil contains C5 and C6 anhydrosugar molecules, as well as aromatic compounds, originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and the composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of decomposition products as well as the thermal stability of the samples were measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of lignin products in the biooil and the applied temperatures.
Keywords: pyrolysis, torrefaction, biooil, lignin
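To illustrate the kind of PCA step mentioned above, a minimal sketch is given below. The compound names, sample labels and relative peak areas are placeholders for illustration only, not the study's Py-GC/MS measurements.

```python
# Hedged sketch: PCA applied to relative peak areas of lignin-derived pyrolysis products
# measured at several temperatures, to see how product composition groups with temperature.
# Compound names and values are illustrative placeholders, not measured data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = ["wood_300C", "wood_500C", "straw_300C", "straw_500C"]
compounds = ["guaiacol", "syringol", "4-vinylphenol", "vanillin"]
peak_areas = np.array([
    [0.30, 0.25, 0.05, 0.10],
    [0.20, 0.15, 0.10, 0.05],
    [0.15, 0.05, 0.40, 0.08],
    [0.10, 0.03, 0.30, 0.04],
])

# Standardize each compound, then project the samples onto the first two components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(peak_areas))
for name, (pc1, pc2) in zip(samples, scores):
    print(f"{name}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```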
Procedia PDF Downloads 327
2116 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing
Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev
Abstract:
A large amount of manure and its irrational use negatively affect the environment. As compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150–200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). Characteristic composition of dung, wt.%: H₂O – 30, C – 29.07, H – 4.06, O – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, CaO – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis. Plasma gasification and pyrolysis of dung were analyzed in the temperature range 300–3,000 K at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At an optimum temperature of 1,500 K, which provides both complete gasification of dung carbon and the maximum yield of combustible components (99.4 vol.% during dung gasification and 99.5 vol.% during pyrolysis), as well as decomposition of toxic compounds of furan, dioxin, and benz(a)pyrene, the following composition of combustible gas was obtained, vol.%: CO – 29.6, H₂ – 35.6, CO₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and CO – 30.2, H₂ – 38.3, CO₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively. An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for the dung processing experiments. The dung was gasified in an air (or, during pyrolysis, nitrogen) plasma jet, which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: CO – 25.9, H₂ – 32.9, CO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); CO – 32.6, H₂ – 24.1, CO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between calculation and experiment.
Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation
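For orientation, the specific energy consumption (SEC) quoted above is simply the electrical power delivered to the plasma torch divided by the waste feed rate. The relation below uses the rated figures from the abstract as an order-of-magnitude check; the implied average torch power is our own back-of-the-envelope inference, not a value reported by the authors.

```latex
% Back-of-the-envelope relation (assumption-laden, not a calculation from the paper):
\mathrm{SEC} = \frac{P_{\text{torch}}}{\dot{m}_{\text{waste}}},
\qquad
\mathrm{SEC}_{\text{rated}} = \frac{100\ \text{kW}}{50\ \text{kg/h}} = 2\ \text{kWh/kg}.
% The measured 1.4-1.5 kWh/kg at a 50 kg/h feed would therefore correspond to an
% average delivered torch power of roughly 70-75 kW, i.e. below the rated 100 kW.
```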
Procedia PDF Downloads 39
2115 Liquid Nitrogen as Fracturing Method for Hot Dry Rocks in Kazakhstan
Authors: Sotirios Longinos, Anna Loskutova, Assel Tolegenova, Assem Imanzhussip, Lei Wang
Abstract:
Hot dry rock (HDR) has substantial potential as a thermal energy source. It has been exploited by hydraulic fracturing to extract heat and generate electricity, a well-developed technique known for creating enhanced geothermal systems (EGS). These days, LN2 is being tested as an environmentally friendly fracturing fluid to generate densely interconnected crevices that augment heat exchange efficiency and production. This study examines experimentally the efficacy of LN2 cryogenic fracturing for granite samples in Kazakhstan using the immersion method. A comparison of two different experimental modes is carried out. The first mode is rock heating along with liquid nitrogen treatment (heating with freezing time), and the second mode is multiple rounds of heating along with liquid nitrogen treatment (heating with LN2 freezing-thawing cycles). The experimental results indicated that, with multiple heating and LN2-treatment cycles, the permeability of granite first improves with an increasing number of cycles and later reaches a plateau after a certain number of cycles. On the other hand, density, P-wave velocity, uniaxial compressive strength, elastic modulus, and tensile strength show a downward trend with increasing heating and treatment cycles. The thermal treatment cycles do not seem to have an obvious effect on the Poisson's ratio. The rate of change of the granite rock properties decreases as the number of cycles increases; the deterioration of granite primarily happens within the early few cycles. The heating temperature during the cycles has an important influence on the deterioration of granite. More specifically, mechanical deterioration and permeability improvement become more remarkable as the heating temperature increases. LN2 fracturing offers many advantages compared to conventional fracturing methods, such as little water consumption, no need for chemical additives, less reservoir damage, and so forth. Based on the experimental observations, LN2 can work as a promising waterless fracturing fluid to stimulate hot dry rock reservoirs.
Keywords: granite, hydraulic fracturing, liquid nitrogen, Kazakhstan
Procedia PDF Downloads 160
2114 Correlation between Sleeping Disturbance and Academic Achievement in University Female Students
Authors: Amel Fayed, Shaden AlSubaih, Nouf Al-Qahtani, Asmaa Gosty, Asma Aljuhaimi
Abstract:
Introduction: Sleep difficulties are highly prevalent among adults and affect different aspects of their life. Many studies have found that females are more likely to suffer from sleeping problems. College students are a typical example of people dealing with daily pressure and stress to fulfil daily tasks and responsibilities, in addition to their ultimate goal of achieving excellent academic records, which requires their full concentration and effort. Consequently, many of them start complaining of sleep deprivation, which can undesirably affect their academic achievement. This study aimed to investigate how prevalent sleeping disorders are among students of different colleges in the university and their relation to academic achievement. Methods: A cross-sectional study of female university students at Princess Norah Bint Abdulrahman University was conducted using a self-administered questionnaire. The Insomnia Severity Index (ISI) was used to assess different grades of insomnia. Students were requested to answer questions evaluating their sleeping habits over the last two weeks. Participants reported their latest Grade Point Average (GPA). According to the ISI, insomnia severity is reported as 'no clinically significant insomnia', 'subthreshold insomnia', 'clinical insomnia (moderate severity)' and 'clinical insomnia (severe)'. Results: In the current study, 228 students participated: 172 (75.4%) from medical colleges and 56 (24.6%) from non-medical colleges. About 80% of them claimed to have never taken any medications to help them sleep, while only three students confirmed regular use of sleep-inducing medications. About 16% of the students drink milk or other hot drinks to help them fall asleep. None of the students was suspected of having obstructive sleep apnea or an apparent psychiatric disorder. According to the ISI, 182 (79.8%) students suffered from subthreshold insomnia, 37 (16.2%) had clinical insomnia of moderate severity, and 9 (3.9%) had sleeping problems of a non-clinically significant level. None of the students was found to have severe clinical insomnia. Clinical moderate insomnia was reported in 15.1% of medical students and 19.6% of non-medical students. Moreover, about 82% of medical students suffered from subthreshold insomnia compared to 73.2% of non-medical students. This difference was not statistically significant (p = 0.24). About 63% of medical students and 48% of non-medical students believed that a high percentage of their colleagues suffer from insomnia (p-value 0.08). The association between GPA and insomnia revealed that 19.5% of the low-GPA group compared to 9.3% of the high-GPA group had clinical moderate insomnia; this association was not statistically significant (p = 0.15). The correlation between GPA and ISI score was negative but not conclusive (r = -0.08, p-value = 0.29). More than 92% of all students agreed that sleeping problems affect their academic achievement to varying degrees. Conclusion: Our results suggest that insomnia is commonly prevalent among female university students and might affect students' achievement. This study provides preliminary data about the quality of sleep among medical and non-medical university students, which may be used to promote healthy sleeping habits among female students.
Keywords: academic achievement, females, insomnia, university student
Procedia PDF Downloads 330
2113 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to be able to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking into consideration model uncertainties. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
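The offline/online split behind a reduced basis model can be sketched in a few lines: full-order solutions are collected offline and compressed into a low-dimensional basis (here via SVD/POD, one common way to build an RB space), and the small projected system is solved online. The matrices below are random stand-ins, not the two-dimensional truss model from the study.

```python
# Hedged sketch of reduced-order modelling in the offline/online spirit described above.
# The "stiffness" matrix is a random symmetric positive definite stand-in for an FE model.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                     # full-order degrees of freedom
K = np.diag(rng.uniform(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
K = 0.5 * (K + K.T) + n * np.eye(n)         # make it symmetric positive definite

# Offline stage: solve the full model for several load cases and extract a POD basis.
loads = rng.standard_normal((n, 20))
snapshots = np.linalg.solve(K, loads)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                                # keep 5 modes as the reduced basis

# Online stage: project, solve the tiny reduced system, and lift back to full size.
f_new = rng.standard_normal(n)
u_reduced = np.linalg.solve(V.T @ K @ V, V.T @ f_new)
u_approx = V @ u_reduced
u_exact = np.linalg.solve(K, f_new)
print("relative error:", np.linalg.norm(u_approx - u_exact) / np.linalg.norm(u_exact))
```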
Procedia PDF Downloads 97
2112 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
The design and preliminary tests of a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation have been demonstrated. It begins by calculating the parameters that allow a circuit to use an alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory based on wave-particle duality. It is able to establish a superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the electrons travel close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to large-scale circuits, such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, increasing the frequency makes the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. Accordingly, there must be the same system at the origin and at the destination, which could form a network. Moreover, it can also be applied to space travel as well. The designed system will be tested on a small scale using a microstrip circuit system that can be built in the laboratory on a limited budget and used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 74
2111 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance on determining entailment between the sentences translated by the translator into the classifier's language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator's policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
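A toy REINFORCE skeleton illustrating the training signal described above is given below: a translator samples "translations" of a premise and hypothesis, a frozen entailment classifier scores them in the target language, and that score is used as the policy-gradient reward. Both modules are tiny stand-ins invented for the sketch, not the actual NMT architecture or the SNLI/XNLI-trained classifier.

```python
# Toy sketch only: the reward wiring of the communication game, not the authors' models.
import torch
import torch.nn as nn

VOCAB, MAX_LEN = 100, 8

class ToyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.out = nn.Linear(32, VOCAB)

    def forward(self, src):                      # src: (batch, len) token ids
        h = self.emb(src).mean(dim=1)            # crude bag-of-tokens sentence encoding
        return self.out(h)                       # logits over the target vocabulary (one step)

def entailment_reward(premise_tokens, hypothesis_tokens):
    # Stand-in for the frozen target-language NLI classifier: it simply rewards token
    # overlap so the sketch runs end to end; a real system would run the translated
    # premise/hypothesis pair through the pretrained entailment model.
    return float(len(set(premise_tokens.tolist()) & set(hypothesis_tokens.tolist())))

translator = ToyTranslator()
opt = torch.optim.Adam(translator.parameters(), lr=1e-3)

for step in range(100):
    premise = torch.randint(0, VOCAB, (1, MAX_LEN))      # source-language premise
    hypothesis = torch.randint(0, VOCAB, (1, MAX_LEN))   # source-language hypothesis
    dist_p = torch.distributions.Categorical(logits=translator(premise))
    dist_h = torch.distributions.Categorical(logits=translator(hypothesis))
    tok_p, tok_h = dist_p.sample(), dist_h.sample()      # sampled "translations"
    reward = entailment_reward(tok_p, tok_h)             # classifier's judgement as reward
    loss = -(dist_p.log_prob(tok_p) + dist_h.log_prob(tok_h)) * reward   # REINFORCE
    opt.zero_grad()
    loss.sum().backward()
    opt.step()
print("toy translator updated with classifier-derived rewards")
```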
Procedia PDF Downloads 127
2110 Autonomy in Healthcare Organisations: A Comparative Case Study of Middle Managers in England and Iran
Authors: Maryam Zahmatkesh
Abstract:
Middle managers form a significant occupational category in organisations. They undertake a vital role, as they sit between the operational and strategic levels. Traditionally, they acted as diplomat administrators and had power only to meet the demands of professionals. Following the introduction of the internal market, in line with the principles of New Public Management, middle managers have been considered change agents. More recently, debates about middle managers have emphasised entrepreneurialism and the enactment of a strategic role. It was assumed that granting autonomy to local organisations and the inception of semi-autonomous hospitals (Foundation Trusts in England and Boards of Trustees in Iran) would give managers more autonomy to act proactively and innovatively. This thesis explores hospital middle managers' perception of and responses to public management reforms (in particular, hospital autonomy) in England and Iran. In order to meet the aims of the thesis, research was undertaken within the interpretative paradigm, in line with social constructivism. Data were collected from interviews with forty-five middle managers, observational fieldwork and documentary analysis across four teaching university hospitals in England and Iran. The findings show the different ways middle managers' autonomy is constrained in the two countries. In England, middle managers have financial and human resources, but their autonomy is constrained by government policy and targets. In Iran, middle managers are less constrained by government policy and targets, but they do not have the financial and human resources to exercise autonomy. Unbalanced autonomy causes tension and frustration for middle managers. According to neo-institutional theory, organisations are deeply embedded within social, political, economic and normative settings that exert isomorphic and internal population-level pressures to conform to existing and established modes of operation. Health systems that seek to devolve autonomy to middle managers must appreciate the multidimensional nature of autonomy, as well as the wider environment in which organisations are embedded, if they are to improve the performance of managers and their organisations.
Keywords: autonomy, healthcare organisations, middle managers, new public management
Procedia PDF Downloads 310
2109 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization
Authors: Shahrukh Ahmad, Purnendu Bose
Abstract:
Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom carried out in sewage treatment plants, and large volumes of wastewater that still contain nutrients are being discharged, which can lead to eutrophication. That is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate values of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated with a 9-hour light and 3-hour dark cycle, using only one color of bulb per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol.m-2.sec-1, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days. A quantum PAR (Photosynthetic Active Radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (Indian Rupee) to light intensity (µmol.m-2.sec-1) ratio, at 19.40, while the green LED bulbs had the highest, at 115.11. Based on these comparative tests, it was concluded that white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, which helps improve the overall wastewater quality before it is discharged back into the environment.
Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, LED bulbs
Procedia PDF Downloads 76
2108 Creating Standards to Define the Role of Employment Specialists: A Case Study
Authors: Joseph Ippolito, David Megenhardt
Abstract:
In the United States, displaced workers, the unemployed and those seeking to build additional work skills are provided employment training and job placement services through a system of One-Stop Career Centers that are sponsored by the country's 593 local Workforce Boards. During the period 2010-2015, these centers served roughly 8 million individuals each year. The quality of services provided at these centers rests upon professional Employment Specialists who work closely with clients to identify their job interests, to connect them to appropriate training opportunities, to match them with needed supportive social services and to guide them to eventual employment. Despite the crucial role these Employment Specialists play, currently there are no broadly accepted standards that establish what these individuals are expected to do in the workplace, nor are there indicators to assess how well an individual performs these responsibilities. Education Development Center (EDC) and the United Labor Agency (ULA) have partnered to create a foundation upon which curriculum can be developed that addresses the skills, knowledge and behaviors that Employment Specialists must master in order to serve their clients effectively. EDC is a non-profit education research and development organization that designs, implements, and evaluates programs to improve education, health and economic opportunity worldwide. ULA is the social action arm of organized labor in Greater Cleveland, Ohio. ULA currently operates One-Stop Career Centers in both Cleveland and Pittsburgh, Pennsylvania. This case study outlines efforts taken to create standards that define the work of Employment Specialists and to establish indicators that can guide assessment of work performance. The methodology involved in the study has engaged a panel of expert Employment Specialists in rigorous, structured dialogues that analyze and identify the characteristics that enable them to be effective in their jobs. It has also drawn upon and integrated reviews of the panel's work by more than 100 other Employment Specialists across the country. The results of this process are two documents that provide resources for developing training curriculum for future Employment Specialists, namely: an occupational profile of an Employment Specialist that offers a detailed articulation of the skills, knowledge and behaviors that enable individuals to be successful at this job; and a collection of performance-based indicators, aligned to the profile, which illustrate what the work responsibilities of an Employment Specialist 'look like' at four levels of effectiveness ranging from novice to expert. The method of occupational analysis used in the study has application across a broad number of fields.
Keywords: assessment, employability, job standards, workforce development
Procedia PDF Downloads 234
2107 Team Teaching versus Traditional Pedagogical Method
Authors: L. M. H. Mustonen, S. A. Heikkilä
Abstract:
The focus of this paper is to describe team teaching as HAMK's pedagogical method and its impact on teachers' work. Background: Traditionally, teaching is thought of as a job where one mostly works alone, and more and more teachers feel that their work is getting more stressful. Solutions to these problems have been sought at Häme University of Applied Sciences (from now on referred to as HAMK). HAMK has made a strategic change to move to group-oriented working of teachers. Instead of isolated study courses, there are now larger 15-credit study modules. Implementation: As examples of the method, two cases are presented: the technical project module and the summer studies module, which was integrated into the EU development project called Energy Efficiency with Precise Control. In autumn 2017, the technical project will be implemented for the third time. There are at least three teachers involved in it, and it is the first module for new students. The main focus is on learning the basic skills of project work. From a communication viewpoint, students learn the basics of written and oral reporting and of video reporting. According to our quality control system, the need for development is evaluated at the end of the module. There are always some differences in each implementation, but the basics are the same. The other case, summer studies 2017, is new and part of a larger EU project. For the first time, we took a larger group of first- to third-year students from different study programmes into the summer studies. The students learned professional skills as well as skills from different fields of study, international cooperation, and communication skills. Benefits and challenges: After three years, it is possible to consider what the changes mean in the everyday work of the teachers and, of course, what they mean for students and the learning process. The perspective is HAMK's electrical and automation study programme. At first, the change always means more work. The routines formed over many years and the course material used for years may no longer be valid. Teachers teach in modules simultaneously, often with some subjects overlapping. Finding the time to plan the modules together is often difficult. The essential benefit is that the learning outcomes have improved. This can be seen in the feedback given by both the teachers and the students. Conclusions: A new type of working environment is being born. A team of teachers designs a module that matches the objectives and ponders the answers to such questions as: What are the knowledge-based targets of the module? Which pedagogical solutions will achieve the desired results? At what point do multiple teachers instruct the class together? How is the module evaluated? How can the module be developed further for the next execution? The team discusses openly and finds the solutions. Collegiate responsibility and support are always present. These are strengthening factors of the new communal university teaching culture. They are also strong sources of pleasure at work.
Keywords: pedagogical development, summer studies, team teaching, well-being at work
Procedia PDF Downloads 108
2106 Evaluation of Iron Application Method to Remediate Coastal Marine Sediment
Authors: Ahmad Seiar Yasser
Abstract:
Sediment is an important habitat for organisms and acts as a storehouse for nutrients in aquatic ecosystems. Hydrogen sulfide is produced by microorganisms in the water column and sediments and is highly toxic and fatal to benthic organisms. However, iron has the capacity to regulate the formation of sulfide by poising the redox sequence and by forming insoluble iron sulfide and pyrite compounds. Therefore, we conducted two experiments aimed at evaluating the remediation efficiency of iron application in organically enriched sediment environments. The experiments were carried out in the laboratory using intact sediment cores taken from Mikawa Bay, Japan, every month from June to September 2017 and in October 2018. In Experiment 1, after the cores were collected, iron powder or iron hydroxide was applied to the surface sediment at 5 g/m2 or 5.6 g/m2, respectively. In Experiment 2, we investigated the removal of hydrogen sulfide using two size fractions (2 mm or less and 2 to 5 mm) of steelmaking slag. Both experiments were conducted in the laboratory with the same boundary conditions. The overlying water was replaced with deoxygenated filtered seawater, and the cores were sealed with a top cap to keep anoxic conditions, with a stirrer to circulate the overlying water gently. The incubation experiments were set up with three treatments, including the control; each treatment was replicated and conducted at the same temperature as the in-situ conditions. Water samples were collected to measure the dissolved sulfide concentrations in the overlying water at appropriate time intervals by the methylene blue method. Sediment quality was also analyzed after the completion of the experiment. After the 21-day incubation, the experimental results using iron powder and ferric hydroxide revealed that application of these iron-containing materials significantly reduced the sulfide release flux from the sediment into the overlying water. The average dissolved sulfide concentration in the overlying water of the treatment group was significantly decreased (p = .0001), while no significant difference was observed in the control group after the 21-day incubation. Therefore, the application of iron to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body, although ferric hydroxide has better hydrogen sulfide removal effects. The experiments using steelmaking slag also clarified that capping with steelmaking slag (2 mm or less and 2 to 5 mm) is an effective technique for the remediation of organically enriched bottom sediments containing hydrogen sulfide, because it induces chemical reactions between Fe and the sulfides occurring in the sediments that would not take place naturally; the finer fraction (2 mm or less) showed better hydrogen sulfide removal effects. For economic reasons, the application of steelmaking slag to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body.
Keywords: sedimentary, H2S, iron, iron hydroxide
Procedia PDF Downloads 161
2105 Open Joint Surgery for Temporomandibular Joint Internal Derangement: Wilkes Stages III-V
Authors: T. N. Goh, M. Hashmi, O. Hussain
Abstract:
Temporomandibular joint (TMJ) dysfunction (TMD) is a condition that may affect patients via restricted mouth opening, significant pain during normal functioning, and/or reproducible joint noise. TMD includes myofascial pain, TMJ functional derangements (internal derangement, dislocation), and TMJ degenerative/inflammatory joint disease. Internal derangement (ID) is the most common cause of TMD-related clicking and locking. These patients are managed in a stepwise approach, from patient education (homecare advice and analgesia), splint therapy, physiotherapy, and botulinum toxin treatment, to arthrocentesis. Arthrotomy is offered when the aforementioned treatment options fail to alleviate symptoms and improve quality of life. The aim of this prospective study was to review the outcomes of open jaw joint surgery in TMD patients. Patients who presented from 2015-2022 at the Oral and Maxillofacial Surgery Department in the Doncaster NHS Foundation Trust, UK, with a Wilkes classification of III-V were included. These patients underwent either i) discopexy with bone-anchoring suture (9); ii) intrapositional temporalis flap (ITF) with bone-anchoring suture (3); iii) eminoplasty and discopexy with suturing to the capsule (3); iv) discectomy + ITF with bone-anchoring suture (1); v) discoplasty + bone-anchoring suture (1); or vi) ITF (1). Maximum incisal opening (MIO) was assessed pre-operatively and at each follow-up. Pain score, determined via the visual analogue scale (VAS, with 0 being no pain and 10 being the worst pain), was also recorded. A total of 18 eligible patients were identified, with a mean age of 45 (range 22-79), of whom 16 were female. The patients were scored by Wilkes classification as III (14), IV (1), or V (4). Twelve patients had anterior disc displacement without reduction (66%) and six had degenerative/arthritic changes (33%) of the TMJ. The open joint procedure resulted in an increase in MIO and a reduction in pain VAS for the majority of patients, across all Wilkes classifications. Pre-procedural MIO was 22.9 ± 7.4 mm and VAS was 7.8 ± 1.5. At three months post-procedure there was an increase in MIO to 34.4 ± 10.4 mm (p < 0.01) and a decrease in the VAS to 1.5 ± 2.9 (p < 0.01). Three patients were lost to follow-up prior to six months. Six were discharged at the six-month review, and five patients were discharged at the 12-month review, as they were asymptomatic with good mouth opening. Four patients are still attending for annual botulinum toxin treatment. Two patients (Wilkes III and V) subsequently underwent TMJ replacement (11%). One of these patients (Wilkes III) initially improved to an MIO of 40 mm but subsequently relapsed to less than 20 mm due to lack of compliance with the jaw rehabilitation device post-operatively. Clinical improvement was found in 89% of patients within the study group, with a return to a near-normal MIO range and reduced pain scores. Intraoperatively, the operator found the bone-anchoring suture used for discopexy/discoplasty more secure than the soft-tissue anchoring suturing technique.
Keywords: bone anchoring suture, open temporomandibular joint surgery, temporomandibular joint, temporomandibular joint dysfunction
Procedia PDF Downloads 104
2104 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone
Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg
Abstract:
Achieving zero-energy targets in existing buildings is known to be a difficult task requiring important cuts in building energy consumption, which in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall energy consumption. Between the building's consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in the context of hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art typical mid- and high-rise fully air-conditioned office buildings, through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug load management and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully air-conditioned buildings of under seven floors on average, and only by compromising on energy-intensive facilities (such as full AC, unlimited power supply, standard user behaviour, etc.). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system's energy savings goal.
Keywords: energy simulation, office building, tropical climate, zero energy buildings
Procedia PDF Downloads 183
2103 Ecodesign of Bioplastic Films for Food Packaging and Shelf-life Extension
Authors: Sónia Ribeiro, Diana Farinha, Elsa Pereira, Hélia Sales, Filipa Figueiredo, Rita Pontes, João Nunes
Abstract:
The impacts of conventional plastic on the planet, the contamination of natural resources, and the effects on human health and on animals attract major environmental and health attention. The lack of treatment in the end-of-life (EOL) phase and uncontrolled disposal mean that plastic can be found everywhere in the world. Food waste is also increasing significantly, with landfills as its final destination. To face these difficulties, new packaging solutions are needed with the objective of prolonging the shelf-life of products, as well as equipment solutions for the development of such packaging. The FLUI project thus presents relevance and innovation to reach a new level of knowledge and industrial development, focused on the ecodesign of industrial equipment for the manufacture of new packaging solutions based on biodegradable plastic films to be applied in the food sector. These solutions have lower environmental impacts and make it possible to prevent food waste and to reduce the production, and the consequent poor disposal, of plastic of fossil origin. It will be a paradigm shift at different levels, from industry to waste treatment stations, passing through commercial agents and consumers. It can be achieved through life cycle assessment (LCA) and ecodesign of the products, which integrate environmental concerns into the design of the product and throughout its entire life cycle. The FLUI project aims to build new bio-PLA extrusion equipment with the incorporation of bioactive extracts for the production of flexible mono- and multi-layer functional films (FLUI systems). The biofunctional and biodegradable films will prompt the extension of packaged products' shelf-life, reduce food waste and contribute to reducing the consumption of non-degradable fossil plastics, as well as promoting the use of raw materials from renewable sources.
Keywords: food packaging, bioplastics, ecodesign, circular economy
Procedia PDF Downloads 92
2102 Complementing Assessment Processes with Standardized Tests: A Work in Progress
Authors: Amparo Camacho
Abstract:
ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and they are usually designed 'in house.' This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the 'Assessment Process Model' and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: 'hard skills' and 'professional skills' (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering and designing and conducting experiments, as well as analyzing and interpreting data. The second category, 'professional skills', includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both 'hard' and 'soft' skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the engineering college included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were from the first, fifth, or tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample students' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.
Keywords: assessment, hard skills, soft skills, standardized tests
Procedia PDF Downloads 2832101 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity can be an efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network could degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the implementation at the relay nodes can be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to move the relay nodes' operations to the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
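To make the relay-side operations concrete, the following NumPy sketch simulates a two-branch, Alamouti-style cooperative block under the simplifying assumptions of perfect synchronization, BPSK, and flat Rayleigh fading. It is not the scheme proposed in the paper; it only illustrates that the destination sees the same received block, and hence the same BER, whether the conjugation and negation are carried out at a relay or prepared in advance at the source.

```python
# Conceptual sketch only: distributed Alamouti-style cooperative diversity.
# Whether the conjugation/negation is applied at the relay (conventional) or
# already baked into what the source hands over, the received block is identical.
import numpy as np

rng = np.random.default_rng(0)
num_blocks = 200_000
snr_db = 10.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))   # per real dimension

# BPSK symbols, two per Alamouti block
s = 1.0 - 2.0 * rng.integers(0, 2, size=(num_blocks, 2)).astype(float)

# Independent Rayleigh fading on the source->destination and relay->destination links
h = (rng.standard_normal((num_blocks, 2)) + 1j * rng.standard_normal((num_blocks, 2))) / np.sqrt(2)
n = noise_std * (rng.standard_normal((num_blocks, 2)) + 1j * rng.standard_normal((num_blocks, 2)))

# Slot 1: source sends s1 while the relay forwards s2
# Slot 2: source sends -conj(s2) while the relay forwards conj(s1)
r1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + n[:, 0]
r2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + n[:, 1]

# Standard Alamouti linear combining at the destination
s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)
s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)

bits_hat = np.sign(np.real(np.stack([s1_hat, s2_hat], axis=1)))
ber = np.mean(bits_hat != s)
print(f"BER at {snr_db} dB with 2-branch cooperative diversity: {ber:.4e}")
```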
Procedia PDF Downloads 2792100 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications
Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna
Abstract:
Spatial Augmented Reality is a variation of Augmented Reality in which a Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality; the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are referred to as a projection system and an input system, and the process to achieve this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected to the real world. Also, the input system is calibrated if the application knows the relative position of a point in the projection plane with respect to the RGB-depth sensor origin point. Any kind of projection technology can be used (light-based projectors, close-range projectors, or screens), as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, several variables were measured to evaluate the method's performance. The results demonstrate that the method can handle different arrangements, giving the user a wide range of setup possibilities.Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality
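As an illustration of the kind of linear calibration involved (not the authors' actual procedure), the sketch below estimates a 3x4 projection matrix that maps 3D points in the depth sensor's coordinate frame to projector pixel coordinates from point correspondences, using a DLT-style solve; the correspondences and the "true" projection matrix are hypothetical values chosen only to exercise the code.

```python
# Illustrative sketch: DLT-style estimation of a projector model from
# (3D sensor point, 2D projector pixel) correspondences. Hypothetical data.
import numpy as np

def estimate_projection(points_3d, pixels_2d):
    """points_3d: (N,3) in the sensor frame; pixels_2d: (N,2) projector pixels; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, pixels_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    # The solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, points_3d):
    homo = np.c_[points_3d, np.ones(len(points_3d))] @ P.T
    return homo[:, :2] / homo[:, 2:3]

# Hypothetical correspondences; in practice these could be gathered by projecting
# structured patterns and locating them with the depth camera (no physical marker).
pts = np.array([[0.1, 0.2, 1.5], [0.4, -0.1, 1.8], [-0.3, 0.3, 2.0],
                [0.2, 0.5, 1.2], [-0.2, -0.4, 1.6], [0.5, 0.1, 2.2]])
P_true = np.array([[800., 0., 640., 0.], [0., 800., 360., 0.], [0., 0., 1., 0.]])
pix = project(P_true, pts)
P_est = estimate_projection(pts, pix)
print(np.allclose(project(P_est, pts), pix, atol=1e-6))
```

Collecting the correspondences automatically, for example by projecting patterns and detecting them with the RGB-depth sensor, is what removes the need for a physical marker and for most human intervention.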
Procedia PDF Downloads 1222099 Distinguishing Substance from Spectacle in Violent Extremist Propaganda through Frame Analysis
Authors: John Hardy
Abstract:
Over the last decade, the world has witnessed an unprecedented rise in the quality and availability of violent extremist propaganda. This phenomenon has been fueled primarily by three interrelated trends: rapid adoption of online content mediums by creators of violent extremist propaganda, increasing sophistication of violent extremist content production, and greater coordination of content and action across violent extremist organizations. In particular, the self-styled ‘Islamic State’ attracted widespread attention from its supporters and detractors alike by mixing shocking video and imagery content in with substantive ideological and political content. Although this practice was widely condemned for its brutality, it proved to be effective at engaging with a variety of international audiences and encouraging potential supporters to seek further information. The reasons for the noteworthy success of this kind of shock-value propaganda content remain unclear, despite many governments’ attempts to produce counterpropaganda. This study examines violent extremist propaganda distributed by five terrorist organizations between 2010 and 2016, using material released by the Al Hayat Media Center of the Islamic State, Boko Haram, Al Qaeda, Al Qaeda in the Arabian Peninsula, and Al Qaeda in the Islamic Maghreb. The time period covers all issues of the infamous publications Inspire and Dabiq, as well as the most shocking video content released by the Islamic State and its affiliates. The study uses frame analysis to distinguish thematic from symbolic content in violent extremist propaganda by contrasting the ways that substantive ideology issues were framed against the use of symbols and violence to garner attention and to stylize propaganda. The results demonstrate that thematic content focuses significantly on diagnostic frames, which explain violent extremist groups’ causes, and prognostic frames, which propose solutions to addressing or rectifying the cause shared by groups and their sympathizers. Conversely, symbolic violence is primarily stylistic and rarely linked to thematic issues or motivational framing. Frame analysis provides a useful preliminary tool in disentangling substantive ideological and political content from stylistic brutality in violent extremist propaganda. This provides governments and researchers a method for better understanding the framing and content used to design narratives and propaganda materials used to promote violent extremism around the world. Increased capacity to process and understand violent extremist narratives will further enable governments and non-governmental organizations to develop effective counternarratives which promote non-violent solutions to extremists’ grievances.Keywords: countering violent extremism, counternarratives, frame analysis, propaganda, terrorism, violent extremism
Procedia PDF Downloads 1732098 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis
Authors: Natnael Debalklie Teshome
Abstract:
The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75 – 2017/18 were collected from databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of exports. First, unit root tests were conducted using the ADF and PP tests, and the data were found to be stationary with a mix of I(0) and I(1) orders of integration. Then, the bounds test and Wald test were employed, and the results showed that long-run co-integration exists among the study variables. All the diagnostic test results also reveal that the model fulfills the criteria of a well-fitted model. Therefore, the ARDL model and VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test the causality between the study variables. The empirical findings reveal that, in the long run, only exports and the coefficient of variation had significant impacts on RGDP, positive and negative respectively, while the other variables were found to have an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows that exports had a strong, significant impact in both the short-run and long-run periods; however, their positive and statistically significant impact is observed only in the long run. Similarly, there was highly significant export fluctuation in both periods, while a significant commodity concentration (CCI) effect was observed only in the short run. Moreover, the Granger causality test reveals unidirectional causality running from export performance to RGDP in the long run and from both exports and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring about structural transformation. Hence, the government should provide various incentive schemes and supportive measures to exporters to capture the spillover effects of exports. Greater emphasis should also be given to price-oriented diversification and to specialization in major primary products in which the country has a comparative advantage, in order to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development by enhancing investments in technology and the quality of education to accelerate the economic growth of the country.Keywords: export, economic growth, export diversification, instability, co-integration, granger causality, Ethiopian economy
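A minimal sketch of this kind of workflow is shown below, assuming statsmodels 0.13+ (for the ARDL class) and using synthetic stand-in series rather than the study's data: ADF unit-root screening on levels and first differences, an ARDL(1,1) fit of real GDP on exports, and a Granger-causality check.

```python
# Hedged sketch of the described workflow on synthetic stand-in series
# (not the authors' code, lag orders, or data).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(1)
n = 44  # roughly the 1974/75-2017/18 sample length
exports = np.cumsum(0.05 + 0.1 * rng.standard_normal(n))            # I(1)-like series
rgdp = 0.6 * exports + np.cumsum(0.03 + 0.05 * rng.standard_normal(n))
df = pd.DataFrame({"ln_rgdp": rgdp, "ln_exports": exports})

# Unit-root tests on levels and first differences
for col in df:
    for series, label in [(df[col], "level"), (df[col].diff().dropna(), "diff")]:
        stat, pval, *_ = adfuller(series, autolag="AIC")
        print(f"ADF {col} ({label}): stat={stat:.2f}, p={pval:.3f}")

# ARDL(1,1) of real GDP on exports (lag orders would normally be chosen by AIC/SIC)
model = ARDL(df["ln_rgdp"], lags=1, exog=df[["ln_exports"]], order=1)
print(model.fit().summary())

# Does export growth Granger-cause GDP growth? (first differences, 2 lags)
grangercausalitytests(df[["ln_rgdp", "ln_exports"]].diff().dropna(), maxlag=2)
```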
Procedia PDF Downloads 772097 The Role of Religion in the Foundation of State [Pakistan]
Authors: Hafiz Atif Iqbal
Abstract:
It is a confirmed historical fact that Pakistan is an ideological state and that religion played a vital role in its establishment. This is why the slogan “What does Pakistan mean? La ilaha illa Allah” is embedded in the heart of every Muslim. The slogan became so popular across India that it and the Pakistan Movement became inseparable, which is why Quaid-e-Azam said: “A twenty-five percent share in the Pakistan Movement belongs to the creator of this slogan, Asghar Soudai Sialkoti.” The slogan later formed the basis of the two-nation theory, whereby the Hindus and Muslims of the subcontinent were declared to be two separate and complete nations, entirely different from each other in terms of religion, affairs, dress, lifestyle, and values. In this regard, on March 23, 1940, at the historic meeting of the Muslim League in Lahore at which the Lahore Resolution was passed, Quaid-e-Azam said that Islam and Hinduism are not just religions but in fact two different social systems; that the desire that Hindus and Muslims could ever create a common nationality together should therefore be called no more than a dream; that these peoples neither intermarry nor eat at the same table; and that, in a nutshell, they belong to two different civilizations, based on concepts and facts that contradict and oppose each other. While addressing a gathering in Peshawar in January 1948, Quaid-e-Azam said: “We did not demand Pakistan just to get a separate piece of land; we wanted a laboratory where we could test the principles of Islam.” The distinctive feature of the concept of Islamic government that should be kept in mind is that the authority to which obedience and loyalty are owed is God Almighty, and the practical means of compliance are the rules and principles of the Holy Quran. Only the rules of the Holy Quran can determine the limits of freedom and restriction in the state and society. In other words, the Islamic government is a government of Quranic principles and rules. All these facts make it clear that religion played a fundamental and important role in the establishment of Pakistan.Keywords: la ilaha illa allah, asghar soudai sialkoti, lahore resolution, quaid-e-azam
Procedia PDF Downloads 982096 Clinical Advice Services: Using Lean Chassis to Optimize Nurse-Driven Telephonic Triage of After-Hour Calls from Patients
Authors: Eric Lee G. Escobedo-Wu, Nidhi Rohatgi, Fouzel Dhebar
Abstract:
It is challenging for patients to navigate through healthcare systems after-hours. This leads to delays in care, patient/provider dissatisfaction, inappropriate resource utilization, readmissions, and higher costs. It is important to provide patients and providers with effective clinical decision-making tools to allow seamless connectivity and coordinated care. In August 2015, patient-centric Stanford Health Care established Clinical Advice Services (CAS) to provide clinical decision support after-hours. CAS is founded on key Lean principles: Value stream mapping, empathy mapping, waste walk, takt time calculations, standard work, plan-do-check-act cycles, and active daily management. At CAS, Clinical Assistants take the initial call and manage all non-clinical calls (e.g., appointments, directions, general information). If the patient has a clinical symptom, the CAS nurses take the call and utilize standardized clinical algorithms to triage the patient to home, clinic, urgent care, emergency department, or 911. Nurses may also contact the on-call physician based on the clinical algorithm for further direction and consultation. Since August 2015, CAS has managed 228,990 calls from 26 clinical specialties. Reporting is built into the electronic health record for analysis and data collection. 65.3% of the after-hours calls are clinically related. Average clinical algorithm adherence rate has been 92%. An average of 9% of calls was escalated by CAS nurses to the physician on call. An average of 5% of patients was triaged to the Emergency Department by CAS. Key learnings indicate that a seamless connectivity vision, cascading, multidisciplinary ownership of the problem, and synergistic enterprise improvements have contributed to this success while striving for continuous improvement.Keywords: after hours phone calls, clinical advice services, nurse triage, Stanford Health Care
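Purely as a toy illustration of how a standardized, rule-driven disposition path for an after-hours call might be encoded (the rules below are invented for the example and are not CAS's actual triage algorithms), consider:

```python
# Toy illustration only: a rule-driven routing of an after-hours call,
# loosely mirroring the dispositions described in the abstract.
from dataclasses import dataclass

@dataclass
class Call:
    is_clinical: bool
    symptom_severity: str          # "emergent", "urgent", or "routine" (hypothetical labels)
    algorithm_covers_symptom: bool = True

def route(call: Call) -> str:
    if not call.is_clinical:
        return "clinical assistant handles (appointments, directions, general information)"
    if call.symptom_severity == "emergent":
        return "advise 911 / emergency department"
    if not call.algorithm_covers_symptom:
        return "escalate to the on-call physician"
    if call.symptom_severity == "urgent":
        return "urgent care or next-day clinic visit"
    return "home care advice with return precautions"

print(route(Call(is_clinical=True, symptom_severity="urgent")))
```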
Procedia PDF Downloads 1742095 Identification of Toxic Metal Deposition in Food Cycle and Its Associated Public Health Risk
Authors: Masbubul Ishtiaque Ahmed
Abstract:
Food chain contamination by heavy metals has become a critical issue in recent years because of their potential accumulation in biosystems through contaminated water, soil, and irrigation water. Industrial discharge, fertilizers, contaminated irrigation water, fossil fuels, sewage sludge, and municipal wastes are the major sources of heavy metal contamination in soils and subsequent uptake by crops. The main objectives of this project were to determine the levels of minerals, trace elements, and heavy metals in major foods and beverages consumed by the poor and non-poor households of Dhaka city, to assess dietary exposure to heavy metal and trace metal contamination and its potential health implications, and to provide recommendations for action. Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Their toxicity depends on several factors, including the dose, route of exposure, and chemical species, as well as the age, gender, genetics, and nutritional status of exposed individuals. Because of their high degree of toxicity, arsenic, cadmium, chromium, lead, and mercury rank among the priority metals of public health significance. These metallic elements are considered systemic toxicants that are known to induce multiple organ damage, even at low levels of exposure. This review provides an analysis of their environmental occurrence, production and use, potential for human exposure, and molecular mechanisms of toxicity and carcinogenicity.Keywords: food chain, determine the levels of minerals, trace elements, heavy metals, production and use, human exposure, toxicity, carcinogenicity
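The dietary risk assessment mentioned above typically rests on two simple quantities, the estimated daily intake (EDI) and the target hazard quotient (THQ); the sketch below shows the standard arithmetic with made-up concentration, intake, and reference-dose values, not the project's measurements.

```python
# Illustrative sketch of a standard dietary-exposure screen.
# Concentrations, intake rates, and reference doses below are hypothetical;
# real assessments use measured values and official reference doses (RfDs).
def edi_mg_per_kg_bw(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Estimated daily intake of a metal, in mg per kg body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def target_hazard_quotient(edi, reference_dose_mg_per_kg_bw):
    """THQ > 1 suggests potential non-carcinogenic risk."""
    return edi / reference_dose_mg_per_kg_bw

# Hypothetical example: a metal in rice for a 60 kg adult eating 0.3 kg/day
edi = edi_mg_per_kg_bw(conc_mg_per_kg=0.08, intake_kg_per_day=0.3, body_weight_kg=60)
print(f"EDI = {edi:.5f} mg/kg bw/day, THQ = {target_hazard_quotient(edi, 0.001):.2f}")
```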
Procedia PDF Downloads 2852094 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Years Landform Changes of a Landslide-Dominated Watershed in Taiwan
Authors: Yichin Chen
Abstract:
Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring Taiwan's dynamic landscape is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term landform data with high spatiotemporal resolution are essential for quantifying and simulating geomorphological processes and for developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to reconstruct and monitor topographic changes at high spatiotemporal resolution. This study reconstructs 30 years of landform change in the Aiyuzi watershed over 1986-2017 using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study acquired aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan's Forestry Bureau. To rebuild the orthoimages and digital surface models (DSMs), Pix4DMapper, a photogrammetry software package, was used. Furthermore, to control model accuracy, a set of ground control points was surveyed using eGPS. The results show that the generated DSMs have a ground sampling distance (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, and a vertical error of ~1 m. Comparison of the DSMs shows that many deep-seated landslides (with depths over 20 m) occurred in the upstream part of the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment out of the channel and to erode the river bed to ~20 m in depth. Most sediment is transported to the outlet of the watershed and deposited in the downstream channel. This case study shows that UAV and photogrammetry technology are effective tools for monitoring topographic change.Keywords: aerial photogrammetry, landslide, landform change, Taiwan
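A minimal sketch of the DSM-differencing step, assuming two co-registered elevation grids held as NumPy arrays and using the ~1 m vertical error quoted above as the detection threshold (grid values and cell size are hypothetical), could look like this:

```python
# Minimal sketch: DSM of difference (DoD) and erosion/deposition volumes
# from two co-registered surfaces. All numbers below are hypothetical.
import numpy as np

def dod_volumes(dsm_old, dsm_new, cell_size_m, min_detectable_change_m=1.0):
    dod = dsm_new - dsm_old
    dod[np.abs(dod) < min_detectable_change_m] = 0.0   # mask changes within vertical error
    cell_area = cell_size_m ** 2
    erosion = -dod[dod < 0].sum() * cell_area          # m³ removed (e.g. landslide scars, incision)
    deposition = dod[dod > 0].sum() * cell_area        # m³ added (e.g. downstream channel fill)
    return dod, erosion, deposition

# Hypothetical 1986 and 2017 surfaces on a 10 m grid
rng = np.random.default_rng(42)
dsm_1986 = 800 + rng.standard_normal((200, 200))
dsm_2017 = dsm_1986.copy()
dsm_2017[50:70, 50:70] -= 25.0     # a deep-seated landslide scar
dsm_2017[150:170, 80:120] += 5.0   # downstream deposition

_, eroded, deposited = dod_volumes(dsm_1986, dsm_2017, cell_size_m=10.0)
print(f"eroded ≈ {eroded:.0f} m³, deposited ≈ {deposited:.0f} m³")
```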
Procedia PDF Downloads 1552093 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete
Authors: Laura Kupers, Julie Piérard, Niki Cauberg
Abstract:
The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…) and the concrete composition (choice of the raw materials and mix proportions), as well as to its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform, and nearly blemish-free surface. For this purpose, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex-shaped formworks. Nor may the concrete be too viscous, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, which can be performed on site just before casting, could avoid these problems. If the fresh concrete fails the test, it can be rejected; only if it passes the test is the concrete cast. The pass/fail tests are intended for concrete of consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test. The remaining three tests were developed by the BBRI in order to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’, and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placing as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency
Procedia PDF Downloads 423