Search results for: remaining at work
1248 An Operational Model for eMarketing Technology Deployment in Higher Education in the UK
Authors: Amitave Banik
Abstract:
The terms “eMarketing,” “online marketing,” and “Internet marketing” are frequently interchanged and can often be considered synonymous. eMarketing technologies, tactics, tools, and strategies can help UK universities achieve potential competitive benefits. In UK universities, the uptake of eMarketing has been relatively limited, and the complexity of managing eMarketing has become more challenging. Many UK universities are only at an early stage of developing their online marketing capabilities and have yet to identify their core digital marketing tools and techniques. This research investigates eMarketing adoption and deployment initiatives and provides insights into how to successfully develop and implement these initiatives in UK universities. Moreover, this research puts forward a provisional conceptual framework for eMarketing strategy implementation that relates strategy objectives and operational requirements to technology utilization. The epistemological assumptions of the research relate to “how things really are” and “how things really work” in an assumed reality. The methodological assumptions relate to the process of building the conceptual framework and assessing what it can reveal about the “real” world. Based on this concept, the framework recognizes the various eMarketing channels, techniques, and strategies that are used to reach the widest student base. A qualitative research method, based on narrative in-depth case studies, includes an empirical investigation at the University of Gloucestershire, University of Wales Trinity St David, University of Westminster, and London Metropolitan Business School. The selection of case universities provides additional value because no previous study has examined eMarketing at this level. Questionnaires and semi-structured interviews were conducted to gather data from the selected universities’ academics and professional services staff. 
Narrative inquiry has been employed as a tool for the analysis of conversations and interviews. Framework analysis was used to identify common themes and to develop an operational model from the original provisional conceptual framework. The proposed operational model will provide appropriate eMarketing strategies that create and sustain competitive business development (business expansion and market growth). It will also address one or several customer segments and a network of partners for creating, marketing, and building relationships that generate profitable and sustainable revenue streams. In this context, the operational model will serve as an instructional-technological interactions roadmap, outlining essential components to guide eMarketing technology deployment in UK universities. Keywords: eMarketing, digital technologies, marketing mix, eMarketing plan, strategies, tactics, conceptual framework, operational model, higher education organizations
Procedia PDF Downloads 6
1247 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment
Authors: Nabil Al-Zaqri
Abstract:
Semiconductor photocatalysts with surface defects display incredible light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect, leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with a larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by prolonging photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production, showing a quantum efficiency of 1.125%, and efficient organic degradation. Our promising findings demonstrated that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollution degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and smaller particle size were observed. 
The ultrasound treatment causes particle aggregation and surface defects, leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where ¹O₂ was found to be the major active species. Hence, the ultrasound treatment yields efficient photocatalysts for superior hydrogen evolution and excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures. Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production
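For context, the apparent quantum efficiency (AQE) quoted for H₂ evolution is conventionally computed as twice the number of evolved H₂ molecules divided by the number of incident photons (two electrons per H₂). A minimal sketch of this standard calculation; the lamp power, wavelength, and H₂ yield used below are illustrative assumptions, not values from the study:

```python
H_PLANCK = 6.626e-34  # Planck constant, J*s
C_LIGHT = 2.998e8     # speed of light, m/s
AVOGADRO = 6.022e23   # molecules per mole

def apparent_quantum_efficiency(h2_mol, power_w, wavelength_m, time_s):
    """AQE (%) for photocatalytic H2 evolution: two electrons are
    consumed per H2 molecule, so AQE = 2 * N(H2) / N(photons) * 100."""
    photon_energy = H_PLANCK * C_LIGHT / wavelength_m  # J per photon
    n_photons = power_w * time_s / photon_energy       # incident photons
    n_h2 = h2_mol * AVOGADRO                           # evolved H2 molecules
    return 2.0 * n_h2 / n_photons * 100.0
```

For example, 10 µmol of H₂ evolved in one hour under a hypothetical 100 mW, 365 nm source corresponds to an AQE of roughly 1.8%.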
Procedia PDF Downloads 143
1246 Upward Spread Forced Smoldering Phenomenon: Effects and Applications
Authors: Akshita Swaminathan, Vinayak Malhotra
Abstract:
Smoldering is one of the most persistent types of combustion; it can continue for very long periods (hours, days, months) if there is an abundance of fuel. It causes a notable number of accidents and is one of the prime suspects for fire and safety hazards. It can be initiated by weaker ignition sources and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of the smoldering front. This type of smoldering is difficult to control, and hence there is a need to study this phenomenon. Compared to flaming combustion, smoldering often goes unrecognised and is hence a cause of various fire accidents. A simplified experimental setup was constructed to study upward spread smoldering, its response to varying forced flow, and its behaviour in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied under varying forced-flow conditions. The effect of varying forced flow on upward spread smoldering was observed and studied (i) in the presence of an external heat source and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by key controlling parameters such as the speed of the forced flow, surface orientation, and interspace distance (the distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was affected: the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a major role in altering the regression rate. 
Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that varying frequencies affected the smoldering phenomenon in different ways; the surface orientation again played an important role. This project highlights the importance of fire and safety hazards and the means of better combustion control for scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, and can help us better understand and hence avoid such widespread fires. Various fire disasters have been recorded in aircraft where small electric short circuits led to smoldering fires; these eventually caused engines to catch fire, at the cost of life and property. Studying this phenomenon can help us control, if not prevent, such disasters. Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering
Procedia PDF Downloads 146
1245 Complementing Assessment Processes with Standardized Tests: A Work in Progress
Authors: Amparo Camacho
Abstract:
ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education, implemented throughout the six engineering programs, to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement improvement actions based on the results of this assessment. The model is called the “Assessment Process Model,” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills,” includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both “hard” and “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. 
In 2017, the engineering college included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both the hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were drawn from the first-, fifth-, and tenth-semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample students’ achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program. Keywords: assessment, hard skills, soft skills, standardized tests
Procedia PDF Downloads 288
1244 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility in the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between the inter-yarn spaces, unit-cell-based computational fluid dynamics models have been developed using the Stokes–Darcy formulation. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcements when estimating their permeability. Furthermore, to reduce the computational effort of dual-scale flow simulations, a scale separation criterion based on the ratio of yarn permeability to yarn spacing was also proposed to distinguish the dual-scale regime from the negligible micro-scale flow regime in the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as of the position of the binder yarn and the number of in-plane yarn layers in the 3D weave fabric. 
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established for the various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarn permeabilities obtained from Gebart’s model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process. Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
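The intra-yarn permeabilities fed into such a mesoscale model can be estimated from Gebart's analytical expressions for flow along and across a unidirectional fiber bundle. A minimal sketch of those expressions; the fiber radius and fiber volume fraction used in the example are illustrative assumptions, not values from the study:

```python
import math

def gebart_permeability(r_fiber, vf, packing="hexagonal"):
    """Intra-yarn permeability (m^2) along and across the fibers from
    Gebart's model. `r_fiber` is the filament radius (m), `vf` the
    fiber volume fraction; `packing` selects the idealized fiber
    arrangement (quadratic or hexagonal)."""
    if packing == "quadratic":
        c = 57.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(2.0))
        vf_max = math.pi / 4.0
    else:  # hexagonal packing
        c = 53.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
        vf_max = math.pi / (2.0 * math.sqrt(3.0))
    # Flow parallel to the fibers (lubrication-type result)
    k_par = (8.0 * r_fiber**2 / c) * (1.0 - vf)**3 / vf**2
    # Flow perpendicular to the fibers (narrow-gap result)
    k_perp = c1 * (math.sqrt(vf_max / vf) - 1.0)**2.5 * r_fiber**2
    return k_par, k_perp
```

For a hypothetical 3.5 µm filament radius at 60% intra-yarn fiber volume fraction, the model gives an anisotropic pair with the along-fiber permeability several times the transverse one, which is the anisotropy referred to in the abstract.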
Procedia PDF Downloads 99
1243 Use of Cassava Waste and Its Energy Potential
Authors: I. Inuaeyen, L. Phil, O. Eni
Abstract:
Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy need. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels and their resources. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for bio-refinery processes. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across the country; as a result, there is an abundant supply of cassava waste in Nigeria. The aim of this study is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural using a combination of biochemical, thermochemical, and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis, and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software. 
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key literature sources with useful parameters for developing the different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out, and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario, using process economics, life cycle analysis, energy efficiency, and social impact as the performance indexes. Keywords: bio-refinery, cassava waste, energy, process modelling
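As a back-of-envelope complement to the detailed process models described above, the gross thermal energy potential of a biomass waste stream is simply its dry mass times its lower heating value. The throughput, moisture content, and heating value in the example are illustrative assumptions, not data from the study:

```python
def energy_potential_gwh(waste_tonnes_per_year, moisture_frac, lhv_dry_mj_per_kg):
    """Annual gross thermal energy potential of a biomass waste
    stream: dry mass times lower heating value (LHV), converted
    from MJ to GWh (1 GWh = 3.6e6 MJ)."""
    dry_kg = waste_tonnes_per_year * 1000.0 * (1.0 - moisture_frac)
    energy_mj = dry_kg * lhv_dry_mj_per_kg
    return energy_mj / 3.6e6
```

For instance, an assumed 1 million tonnes per year of cassava waste at 60% moisture with a dry-basis LHV of 15 MJ/kg would represent on the order of 1,700 GWh of gross thermal energy annually, before any conversion losses.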
Procedia PDF Downloads 377
1242 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. 
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, in which a person starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models. Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
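The qualitative findings above (a weak pull toward the shown opinion, with random fluctuations larger than the influence) can be sketched as a one-step update rule on the continuous-opinion scale. All parameter values here are illustrative assumptions, not the fitted model from the study:

```python
import random

def update_opinion(o, shown_sign, influence=0.5, noise_sd=1.5,
                   lo=-10.0, hi=10.0):
    """One interaction step on a 'continuous opinion' in [-10, 10]
    (sign = agree/disagree, magnitude = certainty). The partner's
    stance (+1 for 'agree', -1 for 'disagree') nudges the opinion
    by a small fixed influence; a Gaussian fluctuation, here larger
    than the influence as reported in the abstract, is added on top.
    The result is clipped to the scale."""
    o_new = o + influence * shown_sign + random.gauss(0.0, noise_sd)
    return max(lo, min(hi, o_new))
```

Averaged over many interactions, this rule reproduces the observed pattern: exposure to "agree" shifts the mean continuous opinion upward by roughly the influence term, even though any single interaction is dominated by noise.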
Procedia PDF Downloads 111
1241 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery
Authors: Mark Jackson
Abstract:
Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services; instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. This has included the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have seen the proliferation of the PO approach in the last three decades, and it commences by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services, while at the same time maintaining the capacity to deploy staff to develop solutions to the problems which were ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment. 
In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost: those same staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models. Keywords: policing, reactive, proactive, models, efficacy
Procedia PDF Downloads 485
1240 Genetic Variations of Two Casein Genes among Maghrabi Camels Reared in Egypt
Authors: Othman E. Othman, Amira M. Nowier, Medhat El-Denary
Abstract:
Camels play an important socio-economic role within the pastoral and agricultural systems in the dry and semi-dry zones of Asia and Africa. Camels are economically important animals in Egypt, where they are dual-purpose animals (meat and milk). The analysis of the chemical composition of camel milk showed that the total protein content ranged from 2.4% to 5.3%, divided into casein and whey proteins. The casein fraction constitutes 52% to 89% of total camel milk protein, and it is divided into four fractions, namely αs1-, αs2-, β- and κ-caseins, which are encoded by four tightly linked genes. In spite of the important role of casein genes and the effects of their genetic polymorphisms on quantitative traits and the technological properties of milk, studies on the detection of genetic polymorphisms of camel milk genes are still limited. Due to this fact, this work focused, using PCR-RFLP and sequencing analysis, on the identification of genetic polymorphisms and SNPs in two casein genes in the Maghrabi camel, a dual-purpose camel breed in Egypt. The amplified fragments at 488-bp of the camel κ-CN gene were digested with AluI endonuclease. The results showed the appearance of three different genotypes in the tested animals: CC, with three digested fragments at 203-, 127- and 120-bp; TT, with three digested fragments at 203-, 158- and 127-bp; and CT, with four digested fragments at 203-, 158-, 127- and 120-bp. The frequencies of the three detected genotypes were 11.0% for CC, 48.0% for TT and 41.0% for CT. The sequencing analysis of the two different alleles revealed the presence of a single nucleotide polymorphism (C→T) at position 121 of the amplified fragments, which is responsible for the destruction of a restriction site (AG/CT) in allele T and results in the presence of two different alleles, C and T, in the tested animals. The nucleotide sequences of κ-CN alleles C and T were submitted to GenBank with the accession numbers KU055605 and KU055606, respectively. 
The primers used in this study amplified 942-bp fragments spanning from exon 4 to exon 6 of the camel αS1-Casein gene. The amplified fragments were digested with two different restriction enzymes, SmlI and AluI. Digestion with SmlI did not reveal any restriction site, whereas digestion with AluI endonuclease revealed the presence of two restriction sites, AG^CT at positions 68^69 and 631^632, yielding three digested fragments with sizes of 68-, 563- and 293-bp. The nucleotide sequence of this fragment from the camel αS1-Casein gene was submitted to GenBank with the accession number KU145820. In conclusion, the genetic characterization of quantitative trait genes associated with production traits like milk yield and composition is an important step towards the genetic improvement of livestock species through the selection of superior animals based on favorable alleles and genotypes, i.e., marker-assisted selection (MAS). Keywords: genetic polymorphism, SNP polymorphism, Maghrabi camels, κ-Casein gene, αS1-Casein gene
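From the reported κ-CN genotype frequencies, the allele frequencies follow by simple gene counting: each homozygote carries two copies of its allele, and each heterozygote carries one copy of each. A small sketch of that arithmetic:

```python
def allele_freqs(f_cc, f_ct, f_tt):
    """Allele frequencies from observed genotype frequencies by
    gene counting: p(C) = f(CC) + f(CT)/2, p(T) = f(TT) + f(CT)/2."""
    p_c = f_cc + 0.5 * f_ct
    p_t = f_tt + 0.5 * f_ct
    return p_c, p_t
```

Applied to the reported frequencies (CC 0.11, CT 0.41, TT 0.48), this gives p(C) = 0.315 and p(T) = 0.685, i.e., T is by far the more common κ-CN allele in the sampled Maghrabi population.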
Procedia PDF Downloads 614
1239 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States
Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh
Abstract:
Insulin and glucagon are responsible for the homeostasis of key metabolites such as glucose, amino acids, and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component which is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady-state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady-state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, during post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver. 
The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. Similar analyses were also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value. Keywords: bistability, diabetes, feedback and crosstalk, obesity
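The mechanism described, bistability arising from non-linear positive feedback, can be illustrated with a generic one-variable motif: basal production plus sigmoidal self-activation minus first-order removal. This is a deliberately minimal sketch with invented parameters, not the authors' full insulin-glucagon network; it only shows how the number of stable steady states changes with the input:

```python
def hill_feedback_rate(x, basal, vmax, k, n, gamma):
    """dx/dt for a species with basal production, sigmoidal (Hill)
    positive feedback on its own production, and first-order removal --
    the generic motif behind bistable switches such as the AKT-IRS
    feedback described in the abstract (illustrative parameters)."""
    return basal + vmax * x**n / (k**n + x**n) - gamma * x

def stable_states(basal, vmax=1.0, k=0.5, n=4, gamma=1.0,
                  x_max=3.0, steps=30000):
    """Locate stable fixed points by scanning x for down-crossings of
    dx/dt (rate changes from positive to negative); up-crossings are
    the unstable points and are not counted."""
    states = []
    dx = x_max / steps
    prev = hill_feedback_rate(0.0, basal, vmax, k, n, gamma)
    for i in range(1, steps + 1):
        x = i * dx
        cur = hill_feedback_rate(x, basal, vmax, k, n, gamma)
        if prev > 0.0 >= cur:  # stable fixed point in (x - dx, x]
            states.append(x)
        prev = cur
    return states
```

For a low basal input the scan finds two stable states (a low and a high branch, the hysteresis described above), while for a high basal input only one stable state remains, i.e., the response turns monostable.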
Procedia PDF Downloads 277
1238 Economic Impact of Rana Plaza Collapse
Authors: Md. Omar Bin Harun Khan
Abstract:
The collapse of the infamous Rana Plaza, a multi-storeyed commercial building in Savar, near Dhaka, Bangladesh, has brought with it a plethora of positive and negative consequences. Bangladesh, being a key player in the export of clothing, found itself amidst a wave of economic upheaval following this tragic incident, which claimed the lives of numerous Bangladeshis, most of whom were factory workers. This paper examines the consequences that the country’s Ready Made Garments (RMG) sector is facing now, two years on from the incident. The paper presents a comparison of statistical data from study reports and brings forward perspectives from all dimensions of labour, employment, and industrial relations in Bangladesh following the event. It presents the viewpoints of donor organizations and donor countries, and the impacts of several initiatives taken by foreign organizations like the International Labour Organization and local entities like the Bangladesh Garment Manufacturers and Exporters Association (BGMEA) to reinforce compliance and stabilize the shaky foundation in which the RMG sector found itself following the collapse. The focus of the paper remains on the stance taken by the suppliers in Bangladesh, with inputs from buying houses and factories, and also on the reaction of foreign brands. The paper also covers the horrific physical, mental, and financial toll sustained by the victims and their families, and the consequent uproar from workers in general regarding compliance with work safety and workers’ welfare conditions. 
The purpose is to present both sides of the scenario: the economic impact that suppliers/factories/sellers/buying houses/exporters in Bangladesh have faced as a result of the complete loss of confidence in their working standards; and also the aftershock felt on the other end of the spectrum by the importers/buyers, particularly foreign entities, in terms of the sudden accountability of being affiliated with non-compliant factories. The collapse of Rana Plaza received vast international attention and strong criticism. Nevertheless, the almost immediate strengthening of labour rights and the wholesale reform undertaken on all sides of the supply chain evidence a move by all local and foreign stakeholders towards greater compliance and the taking of precautionary steps for the prevention of further disasters. The tragedy that Rana Plaza embodies served as a much-needed epiphany for the soaring RMG sector of Bangladesh. Prompt co-operation on the part of all stakeholders and regulatory bodies now shows a move towards sustainable development, which further ensures safeguarding against future irregularities and paves the way for steady economic growth.
Keywords: economy, employment standards, Rana Plaza, RMG
Procedia PDF Downloads 342
1237 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, and camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and the two-syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ).
In the proposed system, there are three subsystems. The first is the lip localization system, which localizes the lips in the digital inputs. The next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. The final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class numbers in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements that is useful for the visual speech of the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. It can also be useful for normal-hearing persons in noisy environments or conditions where they need to find out what was said by other people without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
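The 2D-DCT stage described in this abstract reduces each lip image to a handful of low-frequency coefficients that serve as the feature vector. A naive, stdlib-only sketch of that feature-extraction step is shown below (illustrative only; a real system would use an optimized DCT and combine the coefficients with LDA, as the abstract describes):

```python
import math

def dct2(block):
    """Naive 2D DCT-II of a square image block (O(n^4), for illustration)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def zigzag_features(coeffs, k=10):
    """Keep the k lowest-frequency coefficients (zig-zag order) as the
    feature vector, as is common in DCT-based lip-reading pipelines."""
    n = len(coeffs)
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1], p[0]))
    return [coeffs[u][v] for u, v in order[:k]]
```

As a sanity check, a uniform (featureless) lip patch yields only a DC coefficient: for an 8x8 block of ones, the first feature is 8.0 and every other coefficient is zero.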
Procedia PDF Downloads 286
1236 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing constitute production and are directly related to the base. Through it, he discusses what it could mean to see the author as the producer of his own text, as a producer of writing understood as an ideological construct that rests on the apparatus of production and distribution. So Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an ‘improved apparatus’, and he must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon’s CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to gain insight into how one’s writing and collaboration are used, captured, and capitalized as a user of Facebook or Google.
Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 145
1235 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins
Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava
Abstract:
The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs), in particular as non-viral gene delivery systems, due to their flexible properties, facile synthesis, robustness, non-oncogenicity, and proven gene delivery efficiency. However, toxicity is still an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles – nano-containers (NCs) – loaded with different pharmaceuticals and biologicals, is still relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after fulfilling their function, are highly suitable for designing pharmaceutical NCs. Among them, some of the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. According to the first approach, NCs were prepared from the CPPs 8R3 (composed of R, sebacic acid and 1,3-propanediol) and 8R6 (composed of R, sebacic acid and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size, with zeta potential within +30 ÷ +35 mV at a concentration of 6 mg/mL. According to the second approach, CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size, with zeta potential within +20 ÷ +28 mV depending on the 8R6/8L6 ratio.
Stability studies of the fabricated NCs showed that no substantial change in particle size or distribution, and no formation of large particles, was observed after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines – A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine) – showed that both types of cationic NCs are biocompatible. The obtained data allow us to conclude that the obtained CNCs are promising for application as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications'.
Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles
Procedia PDF Downloads 156
1234 Kawasaki Disease in a Two-Month-Old Kuwaiti Girl: A Case Report and Literature Review
Authors: Hanan Bin Nakhi, Asaad M. Albadrawi, Maged Al Shahat, Entesar Mandani
Abstract:
Background: Kawasaki disease (KD) is one of the most common vasculitides of childhood. It is considered the leading cause of acquired heart disease in children. The peak age of occurrence is 6 to 24 months, with 80% of affected children being less than 5 years old. There are only a few reports of KD in infants younger than 6 months. Infants have a higher incidence of atypical KD and of coronary artery complications. This case report from Kuwait reinforces considering atypical KD in cases of a sepsis-like condition with negative cultures that is unresponsive to systemic antibiotics. Early diagnosis allows early treatment with intravenous immune globulin (IVIG) and so decreases the incidence of cardiac aneurysm. Case Report: A 2-month-old female infant, the product of a full-term normal delivery to consanguineous parents, presented with fever and poor feeding. She was admitted and treated for a urinary tract infection, as her routine urine analysis revealed pyuria. The baby continued to have persistent fever and hypoactivity in spite of intravenous antibiotics. Later, she developed non-purulent conjunctivitis, skin mottling, and oedema of the face and lower limbs, and was treated in the intensive care unit as a case of septic shock. In spite of her partial general improvement, she continued to look unwell and hypoactive and had persistent fever. Septic work-up and metabolic and immunologic screens were negative. KD was suspected when the baby developed a polymorphic erythematous rash and was noticed to have peeling of the skin in the perianal area and the periungual areas of the fingers and toes. IVIG was given at a dose of 2 g/kg as a single dose, and aspirin at 100 mg/kg/day in four divided doses. The girl showed marked clinical improvement. The fever subsided dramatically and the levels of acute phase reactants markedly decreased, but the platelet count increased to 1,600,000/mm³. Echocardiography showed mild dilatation of the mid right coronary artery.
Aspirin was continued at a dose of 5 mg/kg/day until the repeat cardiac echo. Conclusion: A high index of suspicion of KD must be maintained in young infants with prolonged unexplained fever. The accepted criteria should be less restrictive to allow early diagnosis of atypical KD in infants less than 6 months of age. Timely, appropriate treatment with IVIG is essential to avoid severe coronary sequelae.
Keywords: Kawasaki disease, atypical Kawasaki disease, infantile Kawasaki disease, hypoactivity
Procedia PDF Downloads 322
1233 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward-skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled using a custom Python (version 3.2) script with a mono-exponential function, similar to previous work. To determine whether on-ice trials were achieving a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed.
It was found that on-ice trials consistently showed higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
Keywords: ice-hockey, sprint, skating, power
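The mono-exponential velocity model and the "final acceleration" check described in this abstract can be sketched as follows. The brute-force grid fit on synthetic data below is an illustrative stand-in, not the study's own script:

```python
import math

def model_velocity(t, vmax, tau):
    """Mono-exponential sprint model: v(t) = vmax * (1 - exp(-t / tau))."""
    return vmax * (1.0 - math.exp(-t / tau))

def fit_grid(times, velocities):
    """Least-squares fit of (vmax, tau) by coarse grid search,
    a simple stand-in for the study's custom fitting script."""
    best = None
    for vmax in [8.0 + 0.05 * i for i in range(41)]:    # 8.0 .. 10.0 m/s
        for tau in [0.8 + 0.02 * j for j in range(41)]:  # 0.8 .. 1.6 s
            sse = sum((v - model_velocity(t, vmax, tau)) ** 2
                      for t, v in zip(times, velocities))
            if best is None or sse < best[0]:
                best = (sse, vmax, tau)
    return best[1], best[2]

# Synthetic radar trace with known parameters (vmax = 9 m/s, tau = 1.2 s)
times = [0.1 * k for k in range(60)]
velocities = [model_velocity(t, 9.0, 1.2) for t in times]
vmax_hat, tau_hat = fit_grid(times, velocities)

# Modelled acceleration a(t) = (vmax / tau) * exp(-t / tau); its value at
# the last sample is the "final acceleration" compared between conditions.
final_accel = (vmax_hat / tau_hat) * math.exp(-times[-1] / tau_hat)
```

A sprint long enough to reach the plateau yields a final acceleration near zero; the abstract's finding is that on-ice 45 m trials end with this value still clearly above zero.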
Procedia PDF Downloads 104
1232 Effective Service Provision and Multi-Agency Working in Service Providers for Children and Young People with Special Educational Needs and Disabilities: A Mixed Methods Systematic Review
Authors: Natalie Tyldesley-Marshall, Janette Parr, Anna Brown, Yen-Fu Chen, Amy Grove
Abstract:
It is widely recognised in policy and research that the provision of services for children and young people (CYP) with Special Educational Needs and Disabilities (SEND) is enhanced when health and social care, and education services collaborate and interact effectively. In the UK, there have been significant changes to policy and provisions which support and improve collaboration. However, professionals responsible for implementing these changes face multiple challenges, including a lack of specific implementation guidance or framework to illustrate how effective multi-agency working could or should work. This systematic review will identify the key components of effective multi-agency working in services for CYP with SEND; and the most effective forms of partnership working in this setting. The review highlights interventions that lead to service improvements; and the conditions in the local area that support and encourage success. A protocol was written and registered with PROSPERO registration: CRD42022352194. Searches were conducted on several health, care, education, and applied social science databases from the year 2012 onwards. Citation chaining has been undertaken, as well as broader grey literature searching to enrich the findings. Qualitative, quantitative, mixed methods studies and systematic reviews were included, assessed independently, and critically appraised or assessed for risk of bias using appropriate tools based on study design. Data were extracted in NVivo software and checked by a more experienced researcher. A convergent segregated approach to synthesis and integration was used in which the quantitative and qualitative data were synthesised independently and then integrated using a joint display integration matrix. Findings demonstrate the key ingredients for effective partnership working for services delivering SEND. Interventions deemed effective are described, and lessons learned across interventions are summarised. 
The results will be of interest to educators and to health and social care professionals who provide services to those with SEND. They will also be used to develop policy recommendations for how UK healthcare, social care, and education services for CYP with SEND aged 0-25 can most effectively collaborate and achieve service improvement. The review will also identify any gaps in the literature and recommend areas for future research. Funding for this review was provided by the Department for Education.
Keywords: collaboration, joint commissioning, service delivery, service improvement
Procedia PDF Downloads 112
1231 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance
Authors: Ines Antunes, Andrea Massari, Concetta Bartucca
Abstract:
The role of greenhouse gases (GHG) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that moment, scientists started to correlate the presence of GHG such as CO₂ with global warming phenomena. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations. The target is an 80% reduction of emissions by 2050, reaching the important goal of carbon neutrality. The road sector is responsible for an important share of those emissions (about 20%). Most of this is due to traffic, but a considerable contribution also comes, directly or indirectly, from road construction and maintenance. Raw material choice, the reuse of post-consumer plastic, and cleverer road design all contribute importantly to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics are able to combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of membranes. Polyglass, with its brand dedicated to road construction and maintenance called Polystrada, has done more than this. The company's target was not only to focus sustainability on the final application but also to implement a greener mentality from cradle to grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste.
The installation of a trigeneration plant, the use of selected production scrap inside the products, and the reduction of emissions into the environment are among the company's main efforts to reduce the impact of final product build-up. Moreover, installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal operations that need to be done (build less) but also on traffic density due to works and road diversions during operations. At the end of a road's life, Polystrada products can be 100% recycled and milled with the classical systems in use, without changing normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emissions through a life cycle assessment (LCA). The data obtained were compared with a classical system and with standard production of a membrane. The results show that the use of Polyglass products for street maintenance and building gives a significant reduction in emissions when the membrane is installed under the road wearing course.
Keywords: CO₂ emission, LCA, maintenance, sustainability
Procedia PDF Downloads 68
1230 Financial Burden of Occupational Slip and Fall Incidences in Taiwan
Authors: Kai Way Li, Lang Gan
Abstract:
Slips and falls are common in Taiwan. They can result in injuries and even fatalities. Official statistics indicate that more than 15% of all occupational incidents were slip/fall related. All workers in Taiwan are required by law to join the workers' insurance program administered by the Bureau of Labor Insurance (BLI). The BLI is a government agency under the supervision of the Ministry of Labor. Workers file claims with the BLI for insurance compensation when they suffer injuries or fatalities at work. Injury statistics based on workers' compensation claims have rarely been studied. The objective of this study was to quantify the injury statistics and financial cost of slip-fall incidents based on BLI compensation records. Compensation records in the BLI from 2007 to 2013 were retrieved. All the original application forms, approval opinions, and results for workers' compensation were in hardcopy and were stored in the BLI warehouses. Photocopies of the claims, excluding the personal information of the applicants (or the victims, if deceased), were obtained. The content of the filing forms was coded in an Excel worksheet for further analyses. Descriptive statistics were performed to analyze the data. There were a total of 35,024 claims, including 82 deaths, 878 disabilities, and 34,064 injuries/illnesses, which were slip/fall related. It was found that the average compensated loss for the death cases was 40 months. The total dollar amount paid for these cases was 86,913,195 NTD. For the disability cases, the average loss was 367.36 days. The total dollar amount paid for these cases was almost 2.6 times that of the death cases (233,324,004 NTD). For the injury/illness cases, the average loss was 58.78 days. The total dollar amount paid for these cases was approximately 13 times that of the death cases (1,134,850,821 NTD). Of the applicants/victims, 52.3% were male.
There were more males than females among the death, disability, and injury/illness cases. Most (57.8%) of the female victims were between 45 and 59 years old. Most of the male victims (62.6%) were, on the other hand, between 25 and 39 years old. Most of the victims were in the manufacturing industry (26.41%), followed by the construction industry (22.20%) and the retail industry (13.69%). For the fatality cases, head injury was the main cause of immediate or eventual death (74.4%). For the disability cases, foot (17.46%) and knee (9.05%) injuries were the leading problems. The compensation claims other than fatality and disability were mainly associated with injuries of the foot (18%), hand (12.87%), knee (10.42%), back (8.83%), and shoulder (6.77%). The slip/fall cases studied indicate that the ratios among the death, disability, and injury/illness counts were 1:10:415. The ratios of the dollar amounts paid by the BLI for the three categories were 1:2.6:13. These results indicate the significance of slip-fall incidents of differing severity. This information should be incorporated into slip-fall prevention programs in industry.
Keywords: epidemiology, slip and fall, social burden, workers’ compensation
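The reported ratios can be reproduced directly from the counts and payouts quoted in this abstract (figures taken from the abstract itself; small differences from the quoted 1:10:415 and 1:2.6:13 are due to rounding):

```python
# Claim counts and total payouts (NTD) as reported in the abstract.
counts = {"death": 82, "disability": 878, "injury": 34_064}
payouts = {"death": 86_913_195, "disability": 233_324_004,
           "injury": 1_134_850_821}

def ratios(d):
    """Express each value relative to the death cases (death = 1)."""
    base = d["death"]
    return {k: round(v / base, 1) for k, v in d.items()}

count_ratios = ratios(counts)     # roughly 1 : 10.7 : 415.4
payout_ratios = ratios(payouts)   # roughly 1 : 2.7 : 13.1
```

The contrast between the two ratio sets is the abstract's point: injuries dominate by count, but each fatality is far more costly per case.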
Procedia PDF Downloads 325
1229 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components were introduced as the latest HTML5 standard for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported to integrate the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking. This is a popular solution for improving the quality of code in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding type-checking support to HTML for JSON-based REST services. STML can be used to define the expected data types of responses from JSON-based REST services, which will be used to populate the content within the HTML elements of a web component. Although JSON has several data types (string, number, boolean, object and array), STML is made to support only string, number and boolean. This is because both object and array are considered as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number and boolean, respectively.
All these STML annotations are added by the developer writing a web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type checking during the development phase. It uses the browser console to show an error description if the integrated web service does not return a response with the expected data type. The other utility is a Gulp-based command-line utility for removing the STML attributes before going into production. This ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested for performing type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language to a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
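The idea behind the st-* attributes can be sketched with a stdlib-only Python stand-in for the development-time checker. The attribute names follow the abstract, but the parsing and reporting logic here is a hypothetical illustration, not the authors' actual utility:

```python
import json
from html.parser import HTMLParser

# Maps the custom STML attributes to Python-side JSON types.
STML_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

class STMLChecker(HTMLParser):
    """Collect (element id -> expected type) declarations from markup."""
    def __init__(self):
        super().__init__()
        self.expectations = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for stml_attr, py_type in STML_TYPES.items():
            if stml_attr in attrs and "id" in attrs:
                self.expectations[attrs["id"]] = py_type

def type_check(markup, json_response):
    """Return a dict of element id -> True/False for each declaration,
    assuming (for this sketch) that JSON keys match element ids."""
    checker = STMLChecker()
    checker.feed(markup)
    data = json.loads(json_response)
    return {elem_id: isinstance(data.get(elem_id), expected)
            for elem_id, expected in checker.expectations.items()}

# A component template declaring the types it expects from the service
markup = '<span id="name" st-string></span><span id="age" st-number></span>'
result = type_check(markup, '{"name": "Ada", "age": "not-a-number"}')
# result == {"name": True, "age": False} -> the "age" field fails the check
```

In the real utilities described above, a failure like the one on `age` would be reported in the browser console during development.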
Procedia PDF Downloads 257
1228 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material – that is to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a standard deviation technique by classes, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the characteristics of the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity of segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater effort in data handling. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
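After segmentation, the crystal-measurement step amounts to labelling connected regions of the binary mask and collecting per-crystal statistics. A minimal stdlib sketch of that object-delimitation step is shown below (the actual pipeline works on U-net output and also extracts position, perimeter, and lateral measures):

```python
from collections import deque

def label_crystals(mask):
    """4-connected component labelling of a binary segmentation mask,
    returning the pixel area of each detected crystal. A stdlib
    stand-in for the object-delimitation step; real pipelines would
    typically use a library such as scikit-image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                area, queue = 0, deque([(r, c)])   # BFS flood fill
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas

# Toy mask with two separated "crystals" of 4 and 2 pixels
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
]
areas = sorted(label_crystals(mask))
```

From such per-crystal areas, the frequency distributions by size that the methodology produces follow directly; the overlapping-crystal limitation noted above corresponds to two touching regions being labelled as one.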
Procedia PDF Downloads 162
1227 Evaluating the Effect of 'Terroir' on Volatile Composition of Red Wines
Authors: María Luisa Gonzalez-SanJose, Mihaela Mihnea, Vicente Gomez-Miguel
Abstract:
The zoning methodology currently recommended by the OIV as the official methodology for carrying out viticulture zoning studies and for defining and delimiting 'terroirs' has been applied in this study. This methodology has been successfully applied to the most significant and important Spanish oenological D.O. regions, such as Ribera del Duero, Rioja, Rueda, and Toro, but it has also been applied around the world, in Portugal, different countries of South America, and elsewhere. It is a complex methodology that uses edaphoclimatic data but also data corresponding to vineyards and other soil uses. The methodology is useful for determining Homogeneous Soil Units (HSU) at different scales depending on the interest of each study, and it has been applied from whole viticulture regions down to particular vineyards. It appears to be an appropriate method to delimit the medium correctly in order to enhance its uses and to obtain the best viticultural and oenological products. The present work focuses on comparing the volatile composition of wines made from grapes grown in different HSU that coexist in a particular viticulture region of Castile and Leon, near Burgos. Three different HSU were selected for this study, representing around 50% of the global vineyard area of the studied region. Five vineyards in each HSU under study were chosen. To reduce variability, other criteria were also controlled, such as grape variety, clone, rootstock, vineyard age, training system, and cultural practices. The study was carried out during three consecutive years, so wines from three different vintages were made and analysed. Different red wines were made from grapes harvested in the different vineyards under study. Grapes were harvested at 'technological maturity', which is correlated with adequate levels of sugar, acidity, and phenolic content (nowadays termed phenolic maturity), a good sanitary state, and adequate levels of aroma precursors.
Results of the volatile profiles of the wines produced from the grapes of each HSU showed significant differences among them, pointing to a direct effect of the edaphoclimatic characteristics of each HSU on the composition of the grapes and hence on the volatile composition of the wines. The variability induced by the HSU coexisted with the well-known inter-annual variability correlated mainly with the specific climatic conditions of each vintage; however, the HSU effect was stronger, so the wines of each HSU were clearly differentiated. A discriminant analysis identified the volatiles with discriminant capacity: 21 of the 74 volatiles analysed. The detected discriminant volatiles were chemically diverse, although most of them were esters, followed by higher alcohols and short-chain fatty acids. Only one lactone and two aldehydes were selected as discriminant variables, and no varietal aroma compounds were selected, which agrees with the fact that all the wines were made from the same grape variety.
Keywords: viticulture zoning, terroir, wine, volatile profile
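The idea of selecting discriminant volatiles can be illustrated with a simple between-class to within-class variance ratio (a Fisher score) per variable. This is a hedged sketch, not the study's actual discriminant analysis, and the data below are synthetic stand-ins for the volatile-concentration matrix.

```python
import numpy as np

def fisher_ratio(X, y):
    """Between-class to within-class variance ratio for each variable.
    X: (samples, variables) matrix of volatile concentrations,
    y: class label (HSU) per sample. Higher ratio = more discriminant."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

rng = np.random.default_rng(0)
# synthetic example: 3 HSUs x 5 wines, 4 volatiles;
# volatile 0 is constructed to shift strongly with HSU
y = np.repeat([0, 1, 2], 5)
X = rng.normal(size=(15, 4))
X[:, 0] += y * 3.0
ratios = fisher_ratio(X, y)
```

Ranking the 74 measured volatiles by such a score, or by the standardized coefficients of a discriminant analysis, is what yields a short list like the 21 discriminant volatiles reported.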
Procedia PDF Downloads 223
1226 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed through experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant across the speeds evaluated; however, a decrease between fluids exists due to viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
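The Grid Convergence Index quoted above (2.5%) is conventionally computed with Roache's formula from solutions on successively refined meshes. A minimal sketch follows; the pressure-rise values, refinement ratio, and observed order used here are hypothetical, since the abstract does not report the underlying mesh data.

```python
def grid_convergence_index(f_fine, f_coarse, r, p, Fs=1.25):
    """Roache's Grid Convergence Index (fine-grid value), in percent.
    f_fine, f_coarse: solution values on the fine and coarse meshes,
    r: grid refinement ratio, p: observed order of accuracy,
    Fs: safety factor (1.25 is customary for three-grid studies)."""
    e = abs((f_coarse - f_fine) / f_fine)   # relative difference
    return 100.0 * Fs * e / (r ** p - 1.0)

# hypothetical pressure-rise values (bar) on two meshes
gci = grid_convergence_index(f_fine=4.30, f_coarse=4.45, r=2.0, p=2.0)
```

A GCI of a few percent, as reported, indicates that the solution change under further refinement is small relative to the discretization-error band.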
Procedia PDF Downloads 129
1225 Living with Functional Movement Disorder: An Exploratory Study of the Lived Experience of Five Individuals with Functional Movement Disorder
Authors: Stephanie Zuba-Bates
Abstract:
Purpose: This qualitative research study explored the lived experience of people with functional movement disorder (FMD), including how it impacts their quality of life and participation in life activities. It aims to educate health care professionals about FMD from the perspective of those living with the disorder. Background: Functional movement disorder is characterized by abnormal motor movements, including tremors, abnormal gait, paresis, and dystonia, with no known underlying pathophysiological cause. Current research estimates that FMD may account for 2-20% of clients seen by neurologists. Getting a diagnosis of FMD is typically a long and difficult process. In addition, many healthcare professionals are unfamiliar with the disorder, which may delay treatment. People living with FMD face great disruption in major areas of life, including activities of daily living (ADLs), work, leisure, and community participation. OT practitioners have expertise in working with people with both physical disabilities and mental illness, and this expertise has the potential to guide treatment and become part of the standard of care. In order for occupational therapists to provide these services, they must be aware of the disorder and must advocate for clients to be referred to OT services. In addition, referring physicians and other health professionals need to understand how FMD impacts the daily functioning of people living with the disorder and how OT services can intervene to improve their quality of life. This study aimed to answer the following research questions: 1) What is the lived experience of individuals with FMD?; 2) How has FMD impacted their participation in major areas of life?; and 3) What treatment have they found to be effective in improving their quality of life? Method: A naturalistic approach was used to collect qualitative data through semi-structured telephone interviews of five individuals living with FMD.
Subjects were recruited from social media websites and resources for people with FMD. Data were analyzed for common themes among participants. Results: Common themes included the variability of the disorder's symptoms; challenges to receiving a diagnosis; frustrations with and distrust of health care professionals; the impact of FMD on the participants' ability to perform daily activities; and strategies for living with the symptoms of FMD. Conclusion: All of the participants in the study had to modify their daily activities, roles, and routines as a result of the disorder. This is an area where occupational therapists may intervene to improve the quality of life of these individuals. Additionally, participants reported frustration with the medical community regarding awareness of the disorder and how they were treated by medical professionals. Much more research on and awareness of the disorder is needed.
Keywords: functional movement disorder, occupational therapy, participation, quality of life
Procedia PDF Downloads 170
1224 The Effectiveness of Psychosocial Interventions for Survivors of Natural Disasters: A Systematic Review
Authors: Santhani M. Selveindran
Abstract:
Background: Natural disasters are traumatic global events that are becoming increasingly common, with significant psychosocial impact on survivors. This impact results not only in psychosocial distress but, for many, can lead to psychosocial disorders and chronic psychopathology. While interventions are currently available that seek to prevent and treat these psychosocial sequelae, their effectiveness is uncertain. The evidence base is emerging, with more primary studies evaluating the effectiveness of various psychosocial interventions for survivors of natural disasters, and it remains to be synthesized. Aim of Review: To identify, critically appraise, and synthesize the current evidence base on the effectiveness of psychosocial interventions in preventing or treating Post-Traumatic Stress Disorder (PTSD), Major Depressive Disorder (MDD), and/or Generalized Anxiety Disorder (GAD) in adults and children who are survivors of natural disasters. Methods: A protocol was developed to guide this review. A systematic search was conducted in eight international electronic databases, three grey literature databases, one dissertation and thesis repository, and the websites of six humanitarian and non-governmental organizations renowned for their work on natural disasters, supplemented by bibliographic and citation searching for eligible articles. Papers meeting the specific inclusion criteria underwent quality assessment using the Downs and Black checklist. Data were extracted from the included papers and analysed by way of narrative synthesis. Results: Database and website searching returned 3777 papers, of which 31 met the criteria for inclusion. An additional two papers were obtained through bibliographic and citation searching. The methodological quality of most papers was fair. Twenty-five studies evaluated psychological interventions and five evaluated social interventions, whereas three studies evaluated 'mixed' psychological and social interventions.
All studies, irrespective of methodological quality, reported post-intervention reductions in symptom scores for PTSD, depression, and/or anxiety and, where assessed, reduced diagnosis of PTSD and MDD, and produced improvements in self-efficacy and quality of life. Statistically significant results were seen in 27 studies. However, three studies demonstrated that the evaluated interventions may not have been very beneficial. Conclusions: The overall positive results suggest that psychosocial interventions are favourable and should be delivered to all natural disaster survivors, irrespective of age, country, and phase of disaster. Yet, the heterogeneity and methodological shortcomings of the current evidence base make it difficult to draw the definite conclusions needed to formulate categorical guidance or frameworks. Further, rigorously conducted research is needed in this area, although the feasibility of such research, given the context and nature of the problem, is also recognized.
Keywords: psychosocial interventions, natural disasters, survivors, effectiveness
Procedia PDF Downloads 156
1223 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where gathering patients' emotional behavior matters. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe this work is a first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
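The projection of learnt Emo-CNN representations onto a 3D space via three-component PCA can be sketched in plain NumPy. This is an illustrative sketch only: the embeddings below are random stand-ins for the network's features, and the embedding dimension (128) is an assumption.

```python
import numpy as np

def pca_3d(embeddings):
    """Project high-dimensional embeddings onto their first three
    principal components (here standing in for the three axes of
    Lovheim's cube)."""
    X = embeddings - embeddings.mean(axis=0)      # center the data
    # SVD of the centered matrix: rows of Vt are principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T

rng = np.random.default_rng(1)
emb = rng.normal(size=(50, 128))   # 50 utterances, 128-dim features (assumed)
coords = pca_3d(emb)               # (50, 3) positions in the cube's space
```

Once utterances are placed in this 3D space, proximity to the corner associated with stress-related emotions can be used as a continuous stress measure without stress-specific labels.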
Procedia PDF Downloads 157
1222 Inter-Personal and Inter-Organizational Relationships in Supply Chain Integration: A Resource Orchestration Perspective
Authors: Bill Wang, Paul Childerhouse, Yuanfei Kang
Abstract:
Purpose: This research extends resource orchestration theory (ROT) into the supply chain management (SCM) area to investigate dyadic relationships at both the individual and organizational levels in supply chain integration (SCI). We also explore the interaction mechanism between inter-personal relationships (IPRs) and inter-organizational relationships (IORs) during the whole SCI process. Methodology/approach: The research employed an exploratory multiple case study approach covering four New Zealand companies. Data were collected via semi-structured interviews with top, middle, and lower level managers and operators from different departments of both suppliers and customers, triangulated with company archival data. Findings: The research highlights the important role of both IPRs and IORs in the whole SCI process. Both IPRs and IORs are valuable, inimitable resources, but IORs are formal and exterior while IPRs are informal and subordinated. In the initial stage of the SCI process, IPRs are seen as key resource antecedents to IOR building, while the three IPR dimensions work differently: personal credibility acts as an icebreaker to strengthen the confidence forming IORs, personal affection acts as a gatekeeper, and personal communication expedites the IOR process. In the maintenance and development stage, IORs and IPRs interact with each other continuously: good interaction between IPRs and IORs can facilitate the SCI process, while bad interaction can damage it. On the other hand, during the life cycle of the SCI process, IPRs can facilitate the formation and development of IORs, while IOR development can cultivate the ties of IPRs. Of the three dimensions of IPRs, personal communication plays a more important role in developing IORs than personal credibility and personal affection. Originality/value: This research contributes to ROT in the supply chain management literature by highlighting the interaction of IPRs and IORs in SCI.
The intangible resources and capabilities of the three dimensions of IPRs need to be orchestrated and nurtured to achieve efficient and effective IORs in SCI. Also, IPRs and IORs need to be orchestrated in terms of the breadth, depth, and life cycle of the whole SCI process. Our study provides further insight into the rarely explored inter-personal level of SCI. Managerial implications: Our research provides top management with further evidence of the significant roles of IPRs at different levels when working with trading partners. This highlights the need to actively manage and develop these soft IPR skills as an intangible competitive resource. Further, the research identifies when staff with specific skills and connections should be utilized during the different stages of building and maintaining inter-organizational ties. More importantly, top management needs to orchestrate and balance the resources of IPRs and IORs.
Keywords: case study, inter-organizational relationships, inter-personal relationships, resource orchestration, supply chain integration
Procedia PDF Downloads 235
1221 The Gypsy Community Facing the Sexual Orientation: An Empirical Approach to the Attitudes of the Gypsy Population of Granada Towards Homosexual Sex-Affective Relationships
Authors: Elena Arquer Cuenca
Abstract:
The gypsy community has been a mistreated and rejected group since its arrival in the Iberian Peninsula in the 15th century. At present, despite being the largest ethnic minority group in Spain as well as in Europe, and despite the different legal and social initiatives in favour of equality, the community continues to suffer discrimination from general society. This has fostered a strengthening of the endogroup, accompanied by cultural conservatism as a form of self-protection. Despite the current trend towards normalization of sexual diversity in modern societies, LGB people continue to suffer discrimination, especially in more traditional environments or communities. This rejection on grounds of sexual orientation within the family or community can hinder the free development of the person and compromise peaceful coexistence. The present work is intended as an approach to the attitudes of the gypsy population towards non-heterosexual sexual orientation. The objective is 'to know the appreciation that the gypsy population has of homosexual sex-affective relationships, in order to assess whether this has any impact on family and community coexistence'. The following specific objectives are derived from this general objective: 'to find out whether there is a relationship between the dichotomous Roma gender system and the acceptance/rejection of homosexuality'; 'to analyse whether sexual orientation has an impact on the coexistence of the Roma family and community'; 'to analyse whether the historical discrimination suffered by the Roma population favours the maintenance of the patriarchal heterosexual reproductive family'; and lastly, 'to explore whether ICTs have promoted the process of normalisation and/or acceptance of homosexuality within the Roma community'.
In order to achieve these objectives, a bibliographical and documentary review was carried out, together with the semi-structured interview technique, in which four gypsy people participated (two women and two men of different ages). One of the main findings was the inappropriateness of using the homogenising category 'Gypsy People' at present, given the great diversity among Roma communities. Moreover, the difficulty in accepting homosexuality seems to be related to the fact that the heterosexual reproductive family has been the main survival mechanism of Roma communities over centuries. However, it is concluded that attitudes towards homosexuality vary depending on the socio-economic and cultural context and on factors such as age or professed religion. Three main contributions of this research are: firstly, the inclusion of sexual orientation as a variable to be considered when analysing peaceful coexistence; secondly, the consideration of socio-historical dynamics and structures of inequality when analysing Roma attitudes towards homosexuality; and finally, the consideration of the processual nature of socio-cultural changes.
Keywords: gender, homosexuality, ICTs, peaceful coexistence, Roma community, sexual orientation
Procedia PDF Downloads 88
1220 Properties of the CsPbBr₃ Quantum Dots Treated by O₃ Plasma for Integration in the Perovskite Solar Cell
Authors: Sh. Sousani, Z. Shadrokh, M. Hofbauerová, J. Kollár, M. Jergel, P. Nádaždy, M. Omastová, E. Majková
Abstract:
Perovskite quantum dots (PQDs) have the potential to increase the performance of perovskite solar cells (PSCs). The integration of PQDs into PSCs can extend the absorption range and enhance photon harvesting and device efficiency. In addition, PQDs can stabilize the device structure by passivating surface defects and traps in the perovskite layer, enhancing its stability. The integration of PQDs into PSCs is strongly affected by the type of ligands on the surface of the PQDs. The ligands affect the charge transport properties of the PQDs, as well as the formation of well-defined interfaces and the stability of PSCs. In this work, CsPbBr₃ QDs were synthesized by the conventional hot-injection method using cesium oleate, PbBr₂, and two different ligand systems, namely oleic acid (OA)/oleylamine (OAm) and didodecyldimethylammonium bromide (DDAB). STEM confirmed the regular shape and relatively monodisperse cubic structure, with an average size of about 10-14 nm, of the prepared CsPbBr₃ QDs. Further, the photoluminescence (PL) properties of the PQDs/perovskite bilayer with the OA/OAm and DDAB ligands were studied. For this purpose, ITO/PQDs as well as ITO/PQDs/MAPI perovskite structures were prepared by spin coating, and the effect of the ligand and oxygen plasma treatment was analyzed. Plasma treatment of the PQDs layer could be beneficial for the deposition of the MAPI perovskite layer and the formation of a well-defined PQDs/MAPI interface. The absorption edge in the UV-Vis absorption spectra of the OA/OAm CsPbBr₃ QDs is placed around 513 nm (band gap 2.38 eV); for the DDAB CsPbBr₃ QDs, it is located at 490 nm (band gap 2.33 eV). The PL spectra of the CsPbBr₃ QDs show two peaks, located around 514 nm (503 nm) and 718 nm (708 nm) for OA/OAm (DDAB). The peak around 500 nm corresponds to the PL of the PQDs, and the peak close to 710 nm belongs to the surface states of the PQDs for both types of ligands. These surface states are strongly affected by the O₃ plasma treatment.
For PQDs with the DDAB ligand, O₃ exposure (5, 10, 15 s) results in a blue shift of the PQDs peak and a non-monotonous change of the amplitude of the surface-state peak. For the OA/OAm ligand, O₃ exposure did not cause any shift of the PQDs peak, and the intensity of the PL peak related to the surface states is lower by one order of magnitude in comparison with DDAB, while still being affected by the O₃ plasma treatment. The PL results indicate the possibility of tuning the position of the PL maximum by the choice of PQD ligand. Similar behavior of the PQDs layer was observed for the ITO/QDs/MAPI samples, where an additional strong PL peak at 770 nm coming from the perovskite layer was observed; for the sample with DDAB-ligand PQDs, a small blue shift of the perovskite PL maximum was observed independently of the plasma treatment. These results suggest the possibility of affecting the PL maximum position and the surface states of the PQDs by combining a suitable ligand with O₃ plasma treatment.
Keywords: perovskite quantum dots, photoluminescence, O₃ plasma, perovskite solar cells
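As a quick numerical companion to the peak positions quoted above, wavelengths and photon energies are related by the standard conversion E = hc/λ ≈ 1239.84 eV·nm / λ. The sketch below applies it to the reported PL maxima of the two ligand systems; it illustrates the conversion only and does not reproduce the abstract's band-gap figures.

```python
PLANCK_EV_NM = 1239.84  # h*c in eV·nm

def wavelength_to_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm (E = hc/lambda)."""
    return PLANCK_EV_NM / wavelength_nm

# PL peak positions reported for the PQDs (nm)
e_oa = wavelength_to_ev(514)    # OA/OAm-capped PQDs
e_ddab = wavelength_to_ev(503)  # DDAB-capped PQDs
```

The shorter-wavelength DDAB emission corresponds, as expected, to a higher photon energy, consistent with the blue shift discussed below.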
Procedia PDF Downloads 67