Search results for: regression models drone
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9392


392 Modeling Sorption and Permeation in the Separation of Benzene/Cyclohexane Mixtures through Styrene-Butadiene Rubber Crosslinked Membranes

Authors: Hassiba Benguergoura, Kamal Chanane, Sâad Moulay

Abstract:

Pervaporation (PV), a membrane-based separation technology, has gained much attention because of its energy-saving capability and low cost, especially for the separation of azeotropic or close-boiling liquid mixtures. There are two crucial issues for the industrial application of the pervaporation process. The first is developing membrane materials and tailoring membrane structure to obtain high pervaporation performance. The second is modeling pervaporation transport to gain a better understanding of the above-mentioned structure–pervaporation relationship. Many models have been proposed to predict the mass transfer process; among them, the solution-diffusion model is the most widely used in describing pervaporation transport, including the preferential sorption, diffusion and evaporation steps. For modeling pervaporation transport, the permeation flux, which depends on the solubility and diffusivity of the components in the membrane, should be obtained first. Traditionally, the solubility was calculated according to the Flory–Huggins theory. Separation of the benzene (Bz)/cyclohexane (Cx) mixture is industrially significant, and numerous papers have focused on the Bz/Cx system to assess the PV properties of membrane materials. Membranes with both high permeability and high selectivity are desirable for practical application, and several new polymers have been prepared to achieve both. Dense styrene-butadiene rubber (SBR) membranes cross-linked by chloromethylation were used in the separation of benzene/cyclohexane mixtures. The impact of the chloromethylation reaction, as a new method of cross-linking SBR, on the pervaporation performance has been reported. In contrast to vulcanization with sulfur, the cross-linking takes place on the styrene units of the polymeric chains via a methylene bridge. The partial pervaporative fluxes of benzene/cyclohexane mixtures in SBR were predicted using Fick's first law.
The predicted partial fluxes and the PV separation factor, obtained by integrating Fick's law over the benzene concentration, agreed well with the experimental data. The effects of feed concentration and operating temperature on the permeation flux predicted by the proposed model are investigated. The predicted permeation fluxes are in good agreement with the experimental data at lower benzene concentrations in the feed, but at higher benzene concentrations, the model overestimates the permeation flux. Both the predicted and experimental permeation fluxes increase with increasing operating temperature. Solvent sorption levels for benzene/cyclohexane mixtures in an SBR membrane were determined experimentally. The results showed that the sorption levels were strongly affected by the feed composition, and the Flory–Huggins equation yielded the higher R-squared coefficient for the sorption selectivity.
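The flux prediction described above, integrating Fick's first law over the membrane-phase benzene concentration, can be sketched as follows. This is a minimal illustration assuming an exponential concentration dependence of the diffusivity (a common plasticization form); `D0`, `alpha` and the feed concentration are illustrative placeholders, not values from the study.

```python
import numpy as np

def permeation_flux(c_feed, D0, alpha, n=200):
    """Partial permeation flux per unit membrane thickness, from
    J = integral_0^c_feed D(c) dc, with an assumed concentration-
    dependent diffusivity D(c) = D0 * exp(alpha * c).
    Trapezoidal integration over the sorbed concentration range."""
    c = np.linspace(0.0, c_feed, n)
    D = D0 * np.exp(alpha * c)
    return float(np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(c)))
```

With `alpha = 0` this reduces to the constant-diffusivity flux `D0 * c_feed`; fitting `alpha` to sorption data is what gives the concentration dependence the abstract discusses.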

Keywords: benzene, cyclohexane, pervaporation, permeation, sorption modeling, SBR

Procedia PDF Downloads 327
391 Boussinesq Model for Dam-Break Flow Analysis

Authors: Najibullah M, Soumendra Nath Kuiry

Abstract:

Dams and reservoirs are valued for their considerable contributions to irrigation, water supply, flood control, electricity generation, etc., which improve the prosperity and wealth of societies across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams are fortunately very rare events. Nevertheless, a number of occurrences have been recorded worldwide, corresponding on average to one or two failures every year, and some of these accidents have had catastrophic consequences. It is therefore crucial to predict dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the Shallow Water Equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model for dam-break flow analysis that is fourth-order accurate in both space and time, using a combined finite-volume/finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations.
The flux term is solved using a finite-volume discretization, whereas the bed source and dispersion terms are discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams–Bashforth predictor stage and a fourth-order Adams–Moulton corrector stage. The 1D Boussinesq model is implemented in Python 2.7.5. The performance of the developed model is evaluated by comparison with the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 in Pune, India. Moreover, this model can predict wave overtopping more accurately than shallow water models, making it useful for designing coastal protection structures.

Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model

Procedia PDF Downloads 232
390 Social and Educational AI for Diversity: Research on Democratic Values to Develop Artificial Intelligence Tools to Guarantee Access for all to Educational Tools and Public Services

Authors: Roberto Feltrero, Sara Osuna-Acedo

Abstract:

Responsible Research and Innovation has to accomplish one fundamental aim: everybody has to share in the benefits of innovation, but innovation also has to be democratic; that is to say, everybody must have the possibility to participate in the decisions of the innovation process. In particular, a democratic and inclusive model of social participation and innovation includes persons with disabilities and people at risk of discrimination. Innovations in Artificial Intelligence for social development have to accomplish the same dual goal: improving equality of access to fields of public interest like education, training and public services, as well as improving civic and democratic participation in the process of developing such innovations for all. This research aims to develop innovations, policies and policy recommendations to apply and disseminate such an artificial intelligence and social model for making educational and administrative processes more accessible. The first step is designing a citizen participation process to engage citizens in the design and use of artificial intelligence tools for public services. This will improve trust in democratic institutions, contributing to the transparency, effectiveness, accountability and legitimacy of public policy-making and allowing people to participate in the development of ethical standards for the use of such technologies. The second step is improving educational tools for lifelong learning with AI models that improve accountability and educational data management. Dissemination, education and social participation will be integrated, measured and evaluated in innovative educational processes to make accessible all the educational technologies and content developed on AI about responsible and social innovation. A particular case will be presented regarding access for all to educational tools and public services.
This accessibility requires cognitive adaptability because legal or administrative language is often very complex, not only for people with cognitive disabilities but also for elderly people or citizens at risk of educational or social discrimination. Artificial intelligence natural language processing technologies can provide tools to translate legal, administrative or educational texts into simpler language that is accessible to everybody. Despite technological advances in language processing and machine learning, this becomes a huge project if we really want to respect the ethical and legal consequences, because such consequences can only be addressed with civic and democratic engagement in two realms: 1) democratically selecting the texts that need and can be translated, and 2) involving citizens, experts and non-experts, to produce and validate real examples of legal texts with cognitive adaptations, which feed the artificial intelligence algorithms that learn how to translate those texts into simpler, more accessible language adapted to any kind of population.
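The first realm, selecting texts that need simplification, could in principle be supported by a simple readability screen before any human or machine translation step. As a hedged sketch only (the LIX index and the crude period-based sentence splitting are illustrative choices, not part of the project described):

```python
def lix(text):
    """LIX readability index: average words per sentence plus the
    percentage of long words (more than 6 characters).
    Higher scores indicate harder text."""
    words = text.replace('.', ' . ').split()
    sentences = max(1, words.count('.'))
    tokens = [w for w in words if w != '.']
    long_words = sum(1 for w in tokens if len(w) > 6)
    return len(tokens) / sentences + 100.0 * long_words / len(tokens)
```

Texts scoring above a chosen threshold would be queued for the participatory simplification and validation process; the threshold itself would be a policy decision, not a technical one.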

Keywords: responsible research and innovation, AI social innovations, cognitive accessibility, public participation

Procedia PDF Downloads 90
389 An Integrated Framework for Wind-Wave Study in Lakes

Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung

Abstract:

Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology integrates wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g.,
bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
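The random-wave representation mentioned in point ii) is typically specified through a variance density spectrum rather than a single monochromatic wave. As an illustrative sketch (the spectral form is not named in the abstract; the widely used JONSWAP parameterization is assumed here):

```python
import numpy as np

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """JONSWAP variance density spectrum S(f) [m^2/Hz] for a
    fetch-limited wind sea with peak frequency fp [Hz].
    alpha and gamma are the standard default parameters."""
    sigma = np.where(f <= fp, 0.07, 0.09)          # peak-width parameter
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm = (alpha * g ** 2 * (2.0 * np.pi) ** -4 * f ** -5.0
          * np.exp(-1.25 * (fp / f) ** 4))         # Pierson-Moskowitz base
    return pm * gamma ** r                          # peak enhancement
```

Integrating S(f) over frequency gives the zeroth moment m0, from which the significant wave height follows as Hs = 4 * sqrt(m0).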

Keywords: wave modelling, wind-wave, extreme value analysis, marina

Procedia PDF Downloads 84
388 Re-Orienting Fashion: Fashionable Modern Muslim Women beyond Western Modernity

Authors: Amany Abdelrazek

Abstract:

Fashion is considered a main feature of modern and postmodern capitalist, consumerist society. Consumer historians maintain that fashion, namely, a sector of people embracing a prevailing clothing style for a short period, started during the Middle Ages but gained popularity later. It symbolised the transition from a medieval society with its solid, fixed religious values into a modern society with its secular, dynamic consumer culture. Renaissance society was a modern secular society in its preoccupation with daily life and changing circumstances. Yet it was the late 18th-century industrial revolution that revolutionised thought and ideology in Europe, reinforcing the Western belief in rationality and strengthening the position of science. In such a rational Western society, modernity, with its new ideas, came to challenge the whole idea of old, fixed norms, reflecting the modern secular, rational culture and renouncing the medieval pious consumer. In modern society, supported by the industrial revolution and mass production, fashion encouraged broader sectors of society to take part in a fashion previously reserved for the aristocracy and royal courts. Moreover, the fashion project emphasizes the human body and its beauty, contradicting a Judeo-Christian culture that tends to abhor and criticize interest in sensuality and hedonism. In mainstream Western discourse, fashionable dress differentiates the emancipated, stylish, consumerist, secular modern female from the assumedly oppressed, traditional, modest, religious female. Opposing this discourse, I look at the controversy over what has been called "Islamic fashion", which started during the 1980s and has continued to gain popularity in contemporary Egyptian society.
I discuss the challenges of being a fashionable and practicing Muslim female in light of two prominent models of female "Islamic fashion" in postcolonial Egypt: Jasmin Mohshen, the first hijabi model in Egypt, and Manal Rostom, the first Muslim woman to represent the Nike campaign in the Middle East. The research employs fashion and postcolonial theories to rethink Muslim women's current position on women's emancipation, Western modernity and practising faith in postcolonial Egypt. The paper argues that Muslim women's current innovative and fashionable dress can work as a counter-discourse to the Orientalist and exclusionary representation of non-Western Muslim culture as an inherently inert, timeless culture. Furthermore, "Islamic" fashionable dress, as an aesthetic medium for expressing ideas and convictions in contemporary Egypt, interrogates the claim of a universal secular modernity and Western fashion theorists' reluctance to consider Islamic fashion as fashion.

Keywords: fashion, Muslim women, modernity, secularism

Procedia PDF Downloads 129
387 Approaching a Tat-Rev Independent HIV-1 Clone towards a Model for Research

Authors: Walter Vera-Ortega, Idoia Busnadiego, Sam J. Wilson

Abstract:

Introduction: Human Immunodeficiency Virus type 1 (HIV-1) is responsible for acquired immunodeficiency syndrome (AIDS), a leading cause of death worldwide, infecting millions of people each year. Despite intensive research in vaccine development, therapies against HIV-1 infection are not curative, and the huge genetic variability of HIV-1 poses challenges to drug development. Current animal models for HIV-1 research present important limitations, impairing the progress of in vivo approaches. Macaques require CD8+ depletion to progress to AIDS, and their maintenance cost is high. Mice are a cheaper alternative but need to be 'humanized,' and breeding is not possible. The development of an HIV-1 clone able to replicate in mice is a challenging proposal. The lack of human co-factors in mice impedes the function of the HIV-1 accessory proteins Tat and Rev, hampering HIV-1 replication. However, Tat and Rev function can be replaced by constitutive/chimeric promoters, codon-optimized proteins and the constitutive transport element (CTE), generating a novel HIV-1 clone able to replicate in mice without disrupting the amino acid sequence of the virus. By minimally manipulating the genomic 'identity' of the virus, we propose the generation of an HIV-1 clone able to replicate in mice to assist in antiviral drug development. Methods: i) Plasmid construction: the chimeric promoters and CTE copies were cloned by PCR using lentiviral vectors as templates (pCGSW and pSIV-MPCG). Tat mutants were generated from replication-competent HIV-1 plasmids (NHG and NL4-3). ii) Infectivity assays: retroviral vectors were generated by transfection of human 293T cells and murine NIH 3T3 cells. Virus titre was determined by flow cytometry measuring GFP expression. Human B-cells (AA-2) and HeLa cells (TZMbl) were used for infectivity assays. iii) Protein analysis: Tat protein expression was determined by TZMbl assay and HIV-1 capsid by western blot.
Results: We have determined that NIH 3T3 cells are able to generate HIV-1 particles. However, these particles are not infectious, and further analysis needs to be performed. Codon-optimized HIV-1 constructs are efficiently made in 293T cells in a Tat- and Rev-independent manner and are capable of packaging a competent genome in trans. CSGW is capable of generating infectious particles in the absence of Tat and Rev in human cells when 4 copies of the CTE are placed preceding the 3'LTR. HIV-1 Tat mutant clones encoding different promoters are functional during the first cycle of replication when Tat is added in trans. Conclusion: Our findings suggest that the development of an HIV-1 Tat-Rev independent clone is a challenging but achievable aim. However, further investigations need to be conducted prior to presenting our HIV-1 clone as a candidate research model.

Keywords: codon-optimized, constitutive transport element, HIV-1, long terminal repeats, research model

Procedia PDF Downloads 308
386 Blue Hydrogen Production Via Catalytic Aquathermolysis Coupled with Direct Carbon Dioxide Capture Via Adsorption

Authors: Sherif Fakher

Abstract:

Hydrogen has been gaining global attention as a rising contributor in the energy sector. Labeled an energy carrier, hydrogen is used in many industries and can be used to generate electricity via fuel cells. Blue hydrogen involves the production of hydrogen from hydrocarbons using processes that emit CO₂; however, the CO₂ is captured and stored, so very little environmental damage occurs during the hydrogen production process. This research investigates the ability to use different catalysts for the production of hydrogen from different hydrocarbon sources, including coal, oil, and gas, using a two-step aquathermolysis reaction. The research presents the results of experiments conducted to evaluate different catalysts and also highlights the main advantages of this process over other blue hydrogen production methods, including methane steam reforming, autothermal reforming, and oxidation. Two methods of hydrogen generation were investigated: partial oxidation and aquathermolysis. For these two reactions, the reaction kinetics, thermodynamics, and medium were all investigated. Following this, experiments were conducted to test the hydrogen generation potential of both methods. The porous media tested were sandstone, ash, and pozzolanic material. The spent oils used were spent motor oil and spent vegetable oil from cooking. Experiments were conducted at temperatures up to 250 °C and pressures up to 3000 psi. Based on the experimental results, mathematical models were developed to predict the hydrogen generation potential at higher thermodynamic conditions. Since both partial oxidation and aquathermolysis require relatively high temperatures to proceed, it was important to devise a method by which these high temperatures can be generated at a low cost. This was done by investigating two factors: the porous media used and the reliance on the spent oil.
Of all the porous media used, ash had the highest thermal conductivity. The second step was the partial combustion of part of the spent oil to generate the heat needed to reach the high temperatures, which reduced the cost of heat generation significantly. For the partial oxidation reaction, the spent oil was burned in the presence of a limited oxygen concentration to generate carbon monoxide. The main drawback of this process was the need for burning, which resulted in the generation of other harmful and environmentally damaging gases. Aquathermolysis does not rely on burning, which makes it the cleaner alternative; however, it needs much higher temperatures to run the reaction. When the hydrogen generation potential of both methods was compared using gas chromatography, aquathermolysis generated 23% more hydrogen than partial oxidation for the same volume of spent oil. This research introduces the concept of using spent oil for hydrogen production, which can be a very promising way to produce a clean source of energy from a waste product. It can also help reduce the reliance on freshwater for hydrogen generation, freeing freshwater for other, more important applications.
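The extrapolation of hydrogen generation to higher thermodynamic conditions mentioned above can be sketched with first-order Arrhenius kinetics; the activation energy and rate constants below are placeholders for illustration, not the study's fitted values.

```python
import math

def rate_at(T2_K, k1, T1_K, Ea_J_per_mol):
    """Extrapolate a reaction rate constant from temperature T1 to T2
    using the Arrhenius relation:
        k2 = k1 * exp(-(Ea/R) * (1/T2 - 1/T1))
    Temperatures in kelvin, Ea in J/mol."""
    R = 8.314  # universal gas constant, J/(mol K)
    return k1 * math.exp(-Ea_J_per_mol / R * (1.0 / T2_K - 1.0 / T1_K))
```

Fitting `k1` and `Ea` to the rates measured up to 250 °C would allow the kind of higher-temperature prediction the abstract describes, within the usual caveat that the mechanism must not change over the extrapolated range.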

Keywords: blue hydrogen production, catalytic aquathermolysis, direct carbon dioxide capture, CCUS

Procedia PDF Downloads 31
385 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River

Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang

Abstract:

The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models derived from the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and the extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north; the river is 1550 km long, with an average slope of 0.0024 and an average width of 275. The river displays high water level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river bed samples were retrieved, as well as soil perforations over the banks. Hence, based on the DEM digital ground mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict the overall flow processes in the region, its spatial characteristics and the limitations related to the hydrostatic pressure assumption did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings; in particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross-section, a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
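Calibration against a historical gauge record, of the kind described above, is commonly scored with the Nash–Sutcliffe efficiency. A minimal sketch follows; the choice of metric is illustrative, since the abstract does not name the one used.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    model predicts no better than the mean of the observations;
    negative values mean worse than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)
```

In practice the observed series would be the gauged stage or discharge record and the simulated series the corresponding Iber output at the gauge location.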

Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander

Procedia PDF Downloads 319
384 The Role of Emotions in Addressing Social and Environmental Issues in Ethical Decision Making

Authors: Kirsi Snellman, Johannes Gartner, Katja Upadaya

Abstract:

A transition towards a future where the economy serves society, so that it evolves within the safe operating space of the planet, calls for fundamental changes in the way managers think, feel and act, and make decisions that relate to social and environmental issues. Sustainable decision-making in organizations is often a challenging task characterized by trade-offs between environmental, social and financial aspects, thus often bringing forth ethical concerns. Although there have been significant developments in incorporating uncertainty into environmental decision-making and in measuring the constructs and dimensions of ethical behavior in organizations, the majority of sustainable decision-making models are rationalist-based. Moreover, research in psychology indicates that one's readiness to make a decision depends on the individual's state of mind, the feasibility of the implied change, and the compatibility of the strategies and tactics of implementation. Although very informative, most of this extant research is limited in the sense that it directs attention towards the rational rather than the emotional. Hence, little is known about the role of emotions in sustainable decision making, especially in situations where decision-makers evaluate a variety of options and use their feelings as a source of information in tackling the uncertainty. To fill this lacuna, and to embrace the uncertainty and perceived risk involved in decisions that touch upon social and environmental aspects, it is important to add emotion to the evaluation when aiming to reach a right and good ethical decision outcome. This analysis builds on recent findings in moral psychology that associate feelings and intuitions with ethical decisions, and suggests that emotions can sensitize the manager to evaluate the rightness or wrongness of alternatives when ethical concerns are present in sustainable decision making.
Capturing such sensitive evaluation as triggered by intuitions, we suggest that rational justification can be complemented by using emotions as a tool to tune in to what feels right in making sustainable decisions. This analysis integrates ethical decision-making theories with recent advances in emotion theories. It determines the conditions under which emotions play a role in sustainability decisions by contributing to a personal equilibrium in which intuition and rationality are both activated and in accord. It complements the rationalist view of ethics, according to which nothing fogs the mind in decision making so thoroughly as emotion, and the concept of the cheater's high, which links unethical behavior with positive affect. This analysis contributes to theory with a novel theoretical model that specifies when and why managers who are more emotional are, in fact, more likely to make ethical decisions than managers who are more rational. It also offers practical advice on how emotions can convert a manager's preferences into choices that benefit both the common good and one's own good throughout the transition towards a more sustainable future.

Keywords: emotion, ethical decision making, intuition, sustainability

Procedia PDF Downloads 132
383 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 45
382 The Impact of Emotional Intelligence on Organizational Performance

Authors: El Ghazi Safae, Cherkaoui Mounia

Abstract:

Within companies, emotions have long been neglected as key elements of successful management systems, seen instead as factors that disturb judgment, provoke reckless acts or negatively affect decision-making. This is because management systems were influenced by the Taylorist image of the worker, which made work regular and plain and considered employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements for attaining high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance and team commitment. Emotions have become fundamental and indispensable to individual performance and hence to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by the IQ, as the main factor of social, professional and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it would be interesting to question the importance of emotional intelligence. Does it impact organizational performance?
What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three prominent models: the ability model, the mixed model, and the trait model. The ability model treats emotional intelligence as a cognitive skill, the mixed model combines emotional skills with personality-related aspects, and the trait model is intertwined with personality traits. Despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance: even though the concept of performance is at the heart of all processes for evaluating companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research on emotional intelligence, focusing particularly on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.

Keywords: emotions, performance, intelligence, firms

Procedia PDF Downloads 108
381 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach

Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra

Abstract:

Contamination of water resources has been a major concern, drawing attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective and uneconomical for remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium, and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective owing to their high surface-area-to-volume ratio and their higher selectivity, sensitivity, and affinity. In recent years, scientific advances have been made in applying photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials to remediate contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis of ZnO, Fe3O4, and Fe3O4-coated ZnO nanoparticulate systems and on monitoring their remediation efficiency for the simultaneous removal of heavy metals and dyes. The multitude of ZnO nanostructures (spheres, rods, and flowers) obtained via multiple routes (microwave and hydrothermal approaches) offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area, and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET, and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can assist light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity. 
Owing to their high surface area, high porosity, affinity towards metal ions, and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. We therefore believe that a ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best features that the two individual metal oxides provide for nanoremediation.

Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis

Procedia PDF Downloads 238
380 Baricitinib Lipid-based Nanosystems as a Topical Alternative for Atopic Dermatitis Treatment

Authors: N. Garrós, P. Bustos, N. Beirampour, R. Mohammadi, M. Mallandrich, A.C. Calpena, H. Colom

Abstract:

Atopic dermatitis (AD) is a persistent skin condition characterized by chronic inflammation caused by an autoimmune response. It is a prevalent clinical issue that requires continual treatment to enhance the patient's quality of life. Systemic therapy often involves the use of glucocorticoids or immunosuppressants to manage symptoms. Our objective was to create and assess topical liposomal formulations containing baricitinib (BNB), a reversible inhibitor of Janus-associated kinase (JAK), which is involved in various immune responses. These formulations were intended to address flare-ups and improve treatment outcomes for AD. We created three distinct liposomal formulations by combining different amounts of 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC), cholesterol (CHOL), and ceramide (CER): (i) pure POPC, (ii) POPC mixed with CHOL (at a ratio of 8:2, mol/mol), and (iii) POPC mixed with CHOL and CER (at a ratio of 3.6:2.4:4.0, mol/mol/mol). We conducted various tests to determine the formulations' skin tolerance, their irritancy, and their capacity to cause erythema and edema on altered skin. We also assessed transepidermal water loss (TEWL) and skin hydration in rabbits to evaluate the efficacy of the formulations. Histological analysis, the HET-CAM test, and the modified Draize test were all used in the evaluation process. The histological analysis revealed that the POPC and POPC:CHOL liposomes avoided any damage to the tissue structures. The HET-CAM test showed no irritation caused by any of the three liposomes, and the modified Draize test yielded good Draize scores for erythema and edema. The POPC liposome effectively counteracted the impact of xylol on the skin, and no erythema or edema was observed during the study. TEWL values were constant for all the liposomes, with values similar to the negative control (within the range 8-15 g/h·m², a healthy value for rabbits), whereas the positive control showed a significant increase. 
The skin hydration values were constant and followed the trend of the negative control, while the positive control showed a steady increase during the tolerance study. In conclusion, the developed formulations containing BNB exhibited no harmful or irritating effects: they showed no irritant potential in the HET-CAM test, and the POPC and POPC:CHOL liposomes caused no structural alteration according to the histological analysis. These positive findings suggest that additional research is warranted to evaluate the efficacy of these liposomal formulations in animal models of the disease, including mutant animals. Furthermore, before proceeding to clinical trials, biochemical investigations should be conducted to better understand the mechanisms of action involved in these formulations.

Keywords: baricitinib, HET-CAM test, histological study, JAK inhibitor, liposomes, modified Draize test

Procedia PDF Downloads 92
379 Additive Manufacturing – Application to Next Generation Structured Packing (SpiroPak)

Authors: Biao Sun, Tejas Bhatelia, Vishnu Pareek, Ranjeet Utikar, Moses Tadé

Abstract:

Additive manufacturing (AM), commonly known as 3D printing, together with continuing advances in parallel processing and computational modeling, has created a paradigm shift in the design and operation of chemical processing plants, especially LNG plants. With rising energy demands, environmental pressures, and economic challenges, there is a continuing industrial need for disruptive technologies such as AM, whose capabilities can drastically reduce the future cost of manufacturing and operating chemical processing plants. However, a continuing challenge for 3D printing is its limited adaptability in redesigning process plant equipment, coupled with the absence of theory or models that could assist in selecting the optimal candidates from the countless fabrications that AM makes possible. One of the most common packings used in the LNG process is the structured packing in the packed column. In this work, we present an example of an optimal strategy for applying AM to this important unit operation. Packed columns use a packing material through which the gas phase passes and comes into contact with the liquid phase flowing over the packing, performing the mass transfer necessary to enrich the products. Structured packing consists of stacks of corrugated sheets, typically inclined at 40-70° from the plane. Computational fluid dynamics (CFD) was used to test and model various geometries and to study the governing hydrodynamic characteristics. The results demonstrate that the costly iterative experimental process can be minimized; they also improve the understanding of the fundamental physics of the system at the multiscale level. SpiroPak, patented by Curtin University, represents an innovative structured packing solution currently at a technology readiness level (TRL) of 5-6. 
This packing exhibits remarkable characteristics, offering a substantial increase in surface area while significantly enhancing hydrodynamic and mass transfer performance. Recent studies have revealed that SpiroPak can reduce pressure drop by 50-70% compared with commonly used commercial packings and can achieve 20-50% greater mass transfer efficiency (particularly in CO2 absorption applications). Implementing SpiroPak has the potential to reduce the overall size of columns and to decrease power consumption, yielding savings in both capital expenditure (CAPEX) and operational expenditure (OPEX) when retrofitting existing systems or designing new processes. Furthermore, pilot- to large-scale tests are currently underway to further advance and refine this technology.

Keywords: Additive Manufacturing (AM), 3D printing, Computational Fluid Dynamics (CFD), structured packing (SpiroPak)

Procedia PDF Downloads 88
378 Targeted Delivery of Docetaxel Drug Using Cetuximab Conjugated Vitamin E TPGS Micelles Increases the Anti-Tumor Efficacy and Inhibit Migration of MDA-MB-231 Triple Negative Breast Cancer

Authors: V. K. Rajaletchumy, S. L. Chia, M. I. Setyawati, M. S. Muthu, S. S. Feng, D. T. Leong

Abstract:

Triple negative breast cancers (TNBC) are among the most aggressive breast cancers, with a high rate of local recurrence and systemic metastasis. TNBCs are insensitive to existing hormonal therapy or targeted therapies such as monoclonal antibodies, owing to the lack of the oestrogen receptor (ER) and progesterone receptor (PR) and the absence of overexpression of human epidermal growth factor receptor 2 (HER2) compared with other types of breast cancer. The absence of targeted therapies for the selective delivery of therapeutic agents into tumours led to the search for druggable targets in TNBC. In this study, we developed a targeted micellar system of cetuximab-conjugated micelles of D-α-tocopheryl polyethylene glycol succinate (vitamin E TPGS) for the targeted delivery of docetaxel as a model anticancer drug for the treatment of TNBCs. We examined the efficacy of our micellar system in xenograft models of triple negative breast cancer and explored the effect of the micelles on post-treatment tumours in order to elucidate the mechanism underlying nanomedicine treatment in oncology. The targeting micelles preferentially accumulated in tumours, rather than normal tissue, immediately after administration. The fluorescence signal at the tumour site gradually increased for up to 12 h and was sustained for up to 24 h, reflecting the increase of targeted (TPFC) micelles in MDA-MB-231/Luc cells. In comparison, for the non-targeting (TPF) micelles, the fluorescence signal was evenly distributed over the body of the mice; only a slight increase in fluorescence in the chest area was observed 24 h post-injection, reflecting moderate uptake of the micelles by the tumour. The successful delivery of docetaxel into the tumour by the targeted micelles (TPDC) produced a greater degree of tumour growth inhibition than Taxotere® after 15 days of treatment. 
The ex vivo study demonstrated that tumours treated with the targeting micelles exhibit enhanced cell cycle arrest and attenuated proliferation compared with the control and with those treated with the non-targeting micelles. Furthermore, the ex vivo investigation revealed that both the targeting and non-targeting micellar formulations significantly inhibit cell migration, with migration indices reduced by 0.098- and 0.28-fold, respectively, relative to the control. Overall, both the in vivo and ex vivo data increase our confidence that these micellar formulations effectively target and inhibit EGF-overexpressing MDA-MB-231 tumours.

Keywords: biodegradable polymers, cancer nanotechnology, drug targeting, molecular biomaterials, nanomedicine

Procedia PDF Downloads 281
377 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Obtaining unbiased data on the status of pipeline systems for oil and oil product transportation becomes especially important when pipelines are laid and operated under severe natural and climatic conditions. Particular attention is paid here to studying exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, together with timely planning and implementation of compensating measures, is only possible if the operating conditions of pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main causes of emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. Analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient modelling of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. 
The specifics of such systems include multi-dimensional modelling of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, in-line inspection, and instrument measurements. The resulting 3D model forms the basis of an information system that provides the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of pipeline operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 189
376 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained with masked language modeling (MLM) and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. In-domain distances are naturally small, while between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a great deal of attention because it is applicable when labeled data are scarce. 
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, and also with optimized outcomes obtained from different optimization techniques.
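As a small illustration of the domain-similarity step described above, the cosine distance and a linear-kernel Maximum Mean Discrepancy (MMD) between two sets of post embeddings can be sketched as follows; the embeddings here are synthetic stand-ins for Twitter (source) and Gab (target) post representations, not the study's data:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two mean domain embeddings."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def mmd_linear(X, Y):
    """Maximum Mean Discrepancy with a linear kernel: squared Euclidean
    distance between the mean embeddings of the two samples."""
    delta = X.mean(axis=0) - Y.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 16))       # stand-in: source-domain posts
target_near = rng.normal(0.1, 1.0, size=(500, 16))  # a similar domain
target_far = rng.normal(2.0, 1.0, size=(500, 16))   # a dissimilar domain

# In-domain-like distances should be small; cross-domain distances larger.
assert mmd_linear(source, target_near) < mmd_linear(source, target_far)
```

The same comparison drives the expectation quoted above that in-domain distances are small while between-domain distances are large.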

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 106
375 Educational Institutional Approach for Livelihood Improvement and Sustainable Development

Authors: William Kerua

Abstract:

The PNG University of Technology (Unitech) has a mandate for teaching, research, and extension education. Given this function, the Agriculture Department established the South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD) in 2004. SPISARD serves as a vehicle for improving the farming systems practiced in selected villages by undertaking a pluralistic extension method through an 'educational institutional approach'. Unlike other models, SPISARD's educational institutional approach stresses improving whole farming systems in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of their farming systems in a sustainable way so as to increase income, improve nutrition and food security, and provide livelihood enhancement training. The second is to enrich the Department's curriculum through teaching, research, and extension, and through input from the farming community. SPISARD has established a number of model villages in various provinces of Papua New Guinea (PNG), with many positive outcomes and success stories. The educational institutional approach thus binds research, extension, and training into one package, using students and academic staff to deliver development and extension to communities through the establishment of model villages. The centre (SPISARD) coordinates the activities of the model village programs and their linkages. The key to developing the farming systems is establishing and coordinating linkages, collaboration, and partnerships both within the institution and with external institutions, organizations, and agencies. SPISARD has a six-step strategy for the development of sustainable agriculture and rural development. 
These steps are: (i) establish contact and identify model villages; (ii) develop model village resource centres for research and training; (iii) conduct baseline surveys to identify the problems and needs of the model villages; (iv) develop solution strategies; (v) implement them; and (vi) evaluate the impact of the solution programs. SPISARD envisages that the farming systems practiced will improve if the villages are made the centre of SPISARD activities. SPISARD has therefore developed a model village approach to channel rural development. Once established, the model villages become the conduits where teaching, training, research, and technology transfer take place. This approach differs from existing ones in that the development process takes place in the farmers' own environment, with immediate real-time feedback based on the farmers' perspective and satisfaction. So far, we have developed 14 model villages and conducted 75 trainings on 21 different topics in 8 provinces, reaching a total of 2,832 participants of both sexes. The aim of these trainings is to participate directly with farmers in improving their farming systems to increase productivity and income and to secure food security and nutrition, thereby improving their livelihoods.

Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture

Procedia PDF Downloads 154
374 The Effect of the Construction Contract System by Simulating the Comparative Costs of Capital to the Financial Feasibility of the Construction of Toll Bali Mandara

Authors: Mas Pertiwi I. G. AG Istri, Sri Kristinayanti Wayan, Oka Aryawan I. Gede Made

Abstract:

The government's ability to meet infrastructure investment needs is constrained by the size of its budget commitments to other sectors. Another barrier is the complexity of the land acquisition process. Public-private partnership can help bridge the investment gap by bringing in funding from the private sector and shifting to it the responsibility for financing and constructing the asset, as well as for operation, post-project design, and maintenance. In principle, the implementation of a construction project always requires an investor to provide resources in the form of funding, which must be set out in an agreement in the form of a contract. In general, construction contracts in Indonesia comprise national contracts and international contracts. One source of funding for construction projects is collaboration between the government and the private sector, for example under the schemes BLT (Build Lease Transfer), BOT (Build Operate Transfer), BTO (Build Transfer Operate), and BOO (Build Operate Own). Payment under a construction contract can likewise take several forms: monthly payments, payments based on progress, and payment after project completion (turnkey). One of the tools used to analyze the feasibility of an investment is a financial model. The financial model describes the relationships between the different variables and the assumptions used. From a financial model, the cash flow structure of the project is known, including revenues, expenses, liabilities to creditors, and tax payments to the government. The net cash flow generated by the project is used as the basis for analyzing investment feasibility. Public-private partnership project financing can come from equity or debt. 
The proportion of funding by source is the ratio of the investment funds originating from each source of financing to the total investment cost during the construction period. The selected contract system and the alternative financing ratios each generate a different cash flow structure. The resulting cash flow structures are analyzed with a paired t-test to compare the contract systems under the various financing alternatives, and thus to determine the effect of the contract system and the financing ratio on the feasibility of the toll road construction project over an economic life of 20 (twenty) years. The Bali Mandara toll road construction project is used as a case study, and the analysis covers only two contract systems, namely Build Operate Transfer and Turn Key. Investment feasibility is assessed through the variables NPV, BCR, and IRR, compared between the Build Operate Transfer and Turn Key contract systems at interest rates of 9%, 12%, and 15%.
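The three feasibility indicators named above can be sketched in a few lines of Python; the cash flows below are purely hypothetical placeholders, not the Bali Mandara project's modelled figures:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] falls at the end of year t (t = 0 is today)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def bcr(rate, cost, benefits):
    """Benefit-cost ratio: present value of benefits over the initial cost."""
    return npv(rate, [0.0] + benefits) / cost

def irr(cashflows, lo=0.0, hi=1.0):
    """Internal rate of return by bisection (assumes a single sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2.0

# Hypothetical 20-year concession: 1000 invested up front, 120 net revenue per year.
flows = [-1000.0] + [120.0] * 20
feasible = npv(0.09, flows) > 0 and bcr(0.09, 1000.0, [120.0] * 20) > 1.0
```

A project is accepted when NPV > 0, BCR > 1, and IRR exceeds the chosen discount rate; the paired t-test then compares these indicators across contract systems.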

Keywords: contract system, financing, internal rate of return, net present value

Procedia PDF Downloads 227
373 Stochastic Nuisance Flood Risk for Coastal Areas

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm, combined with high tide and sea level rise, temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour, or three inches in a single day. Data from the Florida Climate Center for 1970 to 2020 show 371 events with more than 3 inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day, 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related using the U.S. Census Bureau's annual population estimates. 
Data collected nationally by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. The results of this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion raises the awareness of local municipalities about their flood-risk assessments and gives insight into flood management actions and watershed development.
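A minimal sketch of the FMEA-style scoring described above, with CoNF and PoNF treated as ordinal scores multiplied into a risk number; the 1-10 scale and the example scores are illustrative assumptions, not the study's calibrated values:

```python
def nuisance_flood_risk(conf_score, ponf_score):
    """FMEA-style risk: Consequence of Nuisance Flooding (CoNF) times
    Probability of Nuisance Flooding (PoNF), each on an ordinal 1-10 scale."""
    for s in (conf_score, ponf_score):
        if not 1 <= s <= 10:
            raise ValueError("FMEA scores are ordinal values from 1 to 10")
    return conf_score * ponf_score

# PoNF can be anchored to observed event frequency: 371 torrential-rain days
# recorded over 612 months (1970-2020).
events, months = 371, 612
monthly_rate = events / months  # roughly 0.61 qualifying events per month

# A neighborhood with high consequence (8) and high probability (7) ranks
# above one with moderate scores (4, 5).
assert nuisance_flood_risk(8, 7) > nuisance_flood_risk(4, 5)
```

Ranking neighborhoods by this product lets mitigation effort (which lowers CoNF) be traded off explicitly against exposure (PoNF).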

Keywords: flood risk, nuisance flooding, urban flooding, FMEA

Procedia PDF Downloads 100
372 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), which provides the interconnection between components, is key to reliability. During the last decades, PCB technologies have evolved to sustain and fulfill increased original equipment manufacturer requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods for assessing system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists work together closely to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials (laminates, electrolytic copper, …) precisely, in order to understand failure mechanisms and to simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing with digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly because of the thickness of the laminate (a few hundred microns). It should be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. 
The methodology has been applied to a laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. The results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: the support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01 and the support of CNES, Thales Alenia Space, and Cimulec are acknowledged.
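As a much-simplified illustration of the inverse step described above, one can bisect on the unknown resin modulus until a rule-of-mixtures (Voigt) estimate of the in-plane modulus matches the measured value. The fibre modulus, fibre fraction, and measured modulus below are hypothetical, and the actual method couples an analytical model with 3D finite element homogenization rather than a simple rule of mixtures:

```python
def voigt_modulus(E_fibre, E_resin, v_fibre):
    """Rule-of-mixtures (Voigt) estimate of the in-plane modulus, in GPa."""
    return v_fibre * E_fibre + (1.0 - v_fibre) * E_resin

def fit_resin_modulus(E_measured, E_fibre, v_fibre, lo=0.1, hi=50.0):
    """Inverse step: bisect on E_resin until the homogenized modulus
    matches the experimentally measured in-plane modulus."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if voigt_modulus(E_fibre, mid, v_fibre) < E_measured:
            lo = mid  # homogenized modulus too low -> stiffer resin needed
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical glass/epoxy laminate: 72 GPa fibres, 45% fibre fraction,
# measured in-plane modulus of 35 GPa.
E_resin = fit_resin_modulus(35.0, 72.0, 0.45)
```

Once the resin modulus is identified, the full orthotropic stiffness tensor (including the out-of-plane terms) follows from the numerical homogenization of the 3D weave geometry.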

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 204
371 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by remote sensing datasets of high temporal resolution. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth because of their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and seed composition was measured from the samples as ground-truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed. 
GRU worked well for oil (R² of 0.53) and protein (R² of 0.36), whereas SVM and PLSR showed the best results for sucrose (R² of 0.74) and ash (R² of 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important variables than texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed-pixel issues.
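The two-branch recurrent architecture described above (separate encoders for the vegetative and texture feature sequences, concatenated before a prediction head) can be sketched as follows. This is a minimal NumPy illustration with random, untrained weights and synthetic inputs; the study presumably used a trained deep-learning framework, and all dimensions and names below are invented.

```python
import numpy as np

# Minimal sketch of a two-branch GRU regressor: each branch encodes one
# feature stream over time, the final hidden states are concatenated, and a
# linear head predicts a single seed-composition trait. Untrained toy weights.
rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def make_gru(n_in, n_hidden):
    """Randomly initialised GRU parameters (no biases, toy scale)."""
    return {k: rng.normal(0.0, 0.1, size=((n_in if k.startswith("W") else n_hidden), n_hidden))
            for k in ("Wz", "Wr", "Wh", "Uz", "Ur", "Uh")}

def gru_encode(seq, p, n_hidden):
    """Run a GRU over a (time, features) sequence; return the final hidden state."""
    h = np.zeros(n_hidden)
    for x in seq:
        z = sigmoid(x @ p["Wz"] + h @ p["Uz"])               # update gate
        r = sigmoid(x @ p["Wr"] + h @ p["Ur"])               # reset gate
        h_cand = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])    # candidate state
        h = (1 - z) * h + z * h_cand
    return h

# One branch per feature stream: vegetative indices and texture features,
# observed at T satellite revisits; dimensions are invented for illustration.
T, n_veg, n_tex, H = 12, 8, 6, 16
veg_seq = rng.normal(size=(T, n_veg))
tex_seq = rng.normal(size=(T, n_tex))
h = np.concatenate([gru_encode(veg_seq, make_gru(n_veg, H), H),
                    gru_encode(tex_seq, make_gru(n_tex, H), H)])
w_head = rng.normal(size=2 * H)
print(h.shape, float(h @ w_head))  # concatenated state (32,) and one scalar prediction
```

In a real pipeline the GRU weights and linear head would of course be fitted to the ground-truth seed-composition measurements rather than drawn at random.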

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 138
370 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries

Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri

Abstract:

Since 2007, the Chinese food industry has undergone serious food contamination incidents in the baby dairy industry, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Owing to growing consumer concerns about food safety and protection, and high pressure from central government, companies must take radical action to ensure food quality through the use of an appropriate quality management system. Although researchers have previously investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked, and issues associated with baby dairy products and their quality have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations and specific issues (such as melamine and aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach is employed: preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. The analysis of the literature review and the data findings reveals that many theories, models, conceptualisations and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but their implementation still needs improvement. 
For instance, the companies have to take measures to align their processes and procedures with the relevant standards. The government needs to make more interventions and take a greater supervisory role in the production process. In general, this research presents implications for the regulatory bodies, the Chinese government and dairy food companies. Food safety laws exist in China, but they have not been widely practised by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations. The Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from the regulatory bodies and the government; they also need to ensure that production, storage, distribution and other processes follow the relevant rules and standards.

Keywords: baby dairy product, food quality, milk powder contamination, quality management system

Procedia PDF Downloads 473
369 Hybrid Solutions in Physicochemical Processes for the Removal of Turbidity in Andean Reservoirs

Authors: María Cárdenas Gaudry, Gonzalo Ramces Fano Miranda

Abstract:

Sediment removal is very important in the purification of water, not only for reasons of visual perception but also because of its association with odor and taste problems. The Cuchoquesera reservoir, located in the Andean region of Ayacucho (Peru) at an altitude of 3,740 meters above sea level, visually presents suspended particles and organic impurities, indicating water of dubious quality that cannot be assumed suitable for direct human consumption. In order to quantify the degree of impurities, water quality monitoring was carried out from February to August 2018, with four sampling stations established in the reservoir. The measured parameters were electrical conductivity, total dissolved solids, pH, color, turbidity, and sludge volume. All studied parameters exceeded the permissible limits except electrical conductivity (190 μS/cm) and total dissolved solids (255 mg/L). In this investigation, the best reagent combination and the optimal doses were determined to allow the removal of sediments from the waters of the Cuchoquesera reservoir through the physicochemical process of coagulation-flocculation. In order to improve this process during the rainy season, six combinations of reagents were evaluated, made up of three coagulants (ferric chloride, ferrous sulfate, and aluminum sulfate) and two natural flocculants: prickly pear powder (Opuntia ficus-indica) and tara gum (Caesalpinia spinoza). For each combination of reagents, jar tests were conducted following a central composite experimental design (CCED), in which the design factors were the coagulant dose, the flocculant dose, and the initial turbidity. 
The results of the jar tests were fitted to mathematical models, showing that, to treat water from the Cuchoquesera reservoir with a turbidity of 150 NTU and a color of 137 U Pt-Co, 27.9 mg/L of the coagulant aluminum sulfate with 3 mg/L of the natural tara gum flocculant is required to produce purified water with a turbidity of 1.7 NTU and an apparent color of 3.2 U Pt-Co. The estimated cost of this dose of coagulant and flocculant was 0.22 USD/m³. This shows how "grey-green" technologies can be combined in nature-based solutions for water treatment, in this case to achieve potability, making treatment more sustainable, especially economically, when green technology is available at the application site of the nature-based hybrid solution. This research demonstrates the compatibility of natural coagulants/flocculants with other treatment technologies in integrated/hybrid treatment processes, such as the possibility of hybridizing natural coagulants with other types of coagulants.
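The modelling step behind the jar tests can be illustrated with a small sketch: a central composite design is typically analysed by fitting a second-order response surface to the dose-response data. The snippet below fits such a quadratic model by ordinary least squares to synthetic jar-test data; the doses, turbidity values and optimum location are invented, not the paper's results.

```python
import numpy as np

# Illustrative response-surface fit for jar-test data (synthetic values only):
# residual turbidity as a quadratic function of coagulant and flocculant dose.
rng = np.random.default_rng(1)
coag = rng.uniform(10, 40, 20)   # hypothetical aluminum sulfate dose, mg/L
floc = rng.uniform(1, 5, 20)     # hypothetical tara gum dose, mg/L
# Synthetic "true" surface with a minimum near (28 mg/L, 3 mg/L) plus noise:
turb = 2 + 0.02 * (coag - 28) ** 2 + 0.5 * (floc - 3) ** 2 + rng.normal(0, 0.1, 20)

# Design matrix of the full second-order model used with central composite designs:
X = np.column_stack([np.ones_like(coag), coag, floc,
                     coag ** 2, floc ** 2, coag * floc])
beta, *_ = np.linalg.lstsq(X, turb, rcond=None)

# Predicted residual turbidity near the synthetic optimum dose pair:
x0 = np.array([1.0, 28.0, 3.0, 28.0 ** 2, 3.0 ** 2, 28.0 * 3.0])
print(round(float(x0 @ beta), 2))  # close to the ~2 NTU floor of the synthetic surface
```

On real CCED data the same fitted surface would be minimised (analytically or numerically) to read off the optimal coagulant/flocculant doses.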

Keywords: prickly pear powder, tara gum, nature-based solutions, aluminum sulfate, jar test, turbidity, coagulation, flocculation

Procedia PDF Downloads 109
368 Early Childhood Education for Bilingual Children: A Cross-Cultural Examination

Authors: Dina C. Castro, Rossana Boyd, Eugenia Papadaki

Abstract:

Immigration within and across continents is currently a global reality. The number of people leaving their communities in search of a better life for themselves and their families has increased dramatically during the last twenty years. Young children of the 21st century around the world are therefore growing up in diverse communities, exposed to many languages and cultures. One consequence of these migration movements is increased linguistic diversity in school settings. Depending on the linguistic history and the status of languages in the communities (i.e., minority-majority; majority-majority), the instructional approaches will differ. This session will discuss how bilingualism is addressed in early education programs in both minority-majority and majority-majority language communities, analyzing experiences in three countries with very distinct societal and demographic characteristics: Peru (South America), the United States (North America), and Italy (European Union). The ultimate goal is to identify commonalities and differences across the three experiences that could lead to a discussion of bilingualism in early education from a global perspective. From Peru, we will discuss current national language and educational policies that have led to the design and implementation of bilingual and intercultural education for children in indigenous communities. We will also discuss how those practices are being implemented in preschool programs, the progress made, and the challenges encountered. From the United States, we will discuss the early education of Spanish-English bilingual preschoolers, including the national policy environment, as well as variations in the language-of-instruction approaches currently being used with these children. From Italy, we will describe early education practices in the Bilingual School of Monza, in northern Italy, a school that has spent 20 years promoting bilingualism and multilingualism in education. 
While the presentations from Peru and the United States will discuss bilingualism in majority-minority language environments, this presentation will lead to a discussion of the opportunities and challenges of promoting bilingualism in a majority-majority language environment. It is evident that innovative models and policies are necessary to prevent inequality of opportunity for bilingual children beginning in their earliest years. The cross-cultural examination of bilingual education experiences for young children in three parts of the world will allow us to learn from our successes and challenges. The session will end with a discussion of the following question: To what extent are early care and education programs effective in promoting positive development and learning among all children, including those from diverse language, ethnic, and cultural backgrounds? We expect to identify, with the participants in our session, a set of recommendations for policy and program development that could ensure access to high-quality early education for all bilingual children.

Keywords: early education for bilingual children, global perspectives in early education, cross-cultural, language policies

Procedia PDF Downloads 298
367 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based SHM) is accompanied by widespread concern about quantifying the uncertainties that prevail in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. 
A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than that of Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
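The core of the modular approach described above, a Gaussian-process estimate of the discrepancy between a numerical model and experiment, can be sketched on a synthetic one-dimensional example. Everything below (the stand-in "FEM" response, the linear discrepancy, the kernel hyperparameters) is invented for illustration and is not the bridge case study.

```python
import numpy as np

# Minimal sketch of estimating a systematic-discrepancy function with a
# Gaussian process: the "experiment" equals a stand-in numerical model plus a
# smooth discrepancy plus observation error; GP regression on the residual
# (experiment minus model) recovers the discrepancy. Synthetic data only.

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential covariance between 1D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 25)
model = np.sin(x)                        # stands in for the FEM prediction
truth = np.sin(x) + 0.2 * x              # experiment = model + systematic discrepancy
y = truth + rng.normal(0, 0.02, x.size)  # add observation error

# GP posterior mean of the discrepancy at a new point x*:
K = rbf(x, x) + 0.02 ** 2 * np.eye(x.size)   # kernel + noise variance
alpha = np.linalg.solve(K, y - model)
x_star = np.array([2.5])
delta_hat = (rbf(x_star, x) @ alpha)[0]      # posterior-mean discrepancy at 2.5
print(round(float(delta_hat), 2))            # ≈ 0.5 (the true 0.2 * 2.5)
```

In the modular Bayesian formulation the kernel hyperparameters would themselves be estimated in earlier stages, and a second GP would act as the surrogate of the numerical model; here both are fixed by hand to keep the sketch short.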

Keywords: Bayesian inference, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 326
366 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games

Authors: Micael Sousa

Abstract:

Board games (BGs) are thriving as new designs emerge from the hobby community and reach wider audiences all around the world. Although digital games gather most of the attention in game studies and serious games research, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences remain unique and irreplaceable to users, allowing innovation in new hybrid environments. The new BG designs are part of these post-digital and hybrid movements because they result from the use of powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, defined as modern by many authors, provide innovative designs and unique game mechanics that are not yet fully explored by the main serious games (SG) approaches. Even the most established SG frameworks, which treat SGs as fun games implemented to achieve predefined goals, need further development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential. They are proving that BGs are easy to adapt and to grasp by non-expert players in experimental approaches, with the possibility of straightforward adaptation to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field, lacking theoretical development and the systematization of experimental practices. Using BGs as case studies promises to help develop these frameworks. 
Departing from the Design, Experience, and Play (DPE) framework and considering the Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for the adaptation and development of modern BG design for DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done through the systematization of the DPE and CDST approaches, applied in two case studies in which two different sequences of adapted BGs were employed to establish a collaborative DT process. The two sessions occurred with different participants and in different contexts, also using different sequences of games for the same DT approach. The first session took place at the Faculty of Economics of the University of Coimbra, in a training session on serious games for project development. The second session took place in the Casa do Impacto through The Great Village Design Jam light. Both sessions had the same duration and were designed to progressively achieve DT goals, using BGs as SGs in a collaborative process. The results from the sessions show that a sequence of BGs, when properly adapted to the DPET framework, can generate a viable and innovative process of collaborative DT that is productive, fun, and engaging. The proposed DPET framework intends to help establish how new SG solutions could be defined for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.

Keywords: board games, design thinking, methodology, serious games

Procedia PDF Downloads 112
365 An Integrated HCV Testing Model as a Method to Improve Identification and Linkage to Care in a Network of Community Health Centers in Philadelphia, PA

Authors: Catelyn Coyle, Helena Kwakwa

Abstract:

Objective: As novel and better-tolerated therapies become available, effective HCV testing and care models become increasingly necessary, not only to identify individuals with active infection but also to link them to HCV providers for medical evaluation and treatment. Our aim is to describe an effective HCV testing and linkage-to-care model piloted in a network of five community health centers located in Philadelphia, PA. Methods: In October 2012, the National Nursing Centers Consortium piloted a routine opt-out HCV testing model in a network of community health centers, one of which treats HCV, HIV, and co-infected patients. Key aspects of the model were medical-assistant-initiated testing, the use of laboratory-based reflex testing technology, and electronic medical record modifications to prompt, track, report, and facilitate payment of test costs. Universal testing of all adult patients was implemented at health centers serving patients at high risk for HCV. The other sites integrated risk-based testing, in which patients meeting one or more of the CDC testing-recommendation risk factors, or with a history of homelessness, were eligible for HCV testing. Mid-course adjustments included the integration of dual HIV testing, the development of a linkage-to-care coordinator position to facilitate the transition of HIV- and/or HCV-positive patients from primary to specialist care, and the transition to universal HCV testing across all testing sites. Results: From October 2012 to June 2015, the health centers performed 7,730 HCV tests and identified 886 (11.5%) patients with a positive HCV-antibody test. Of those with positive HCV-antibody tests, 838 (94.6%) had an HCV-RNA confirmatory test and 590 (70.4%) progressed to current HCV infection (overall prevalence = 7.6%); 524 (88.8%) received their RNA-positive test result; 429 (72.7%) were referred to an HCV care specialist, and 271 (45.9%) were seen by the HCV care specialist. 
The best linkage-to-care results were seen at the test-and-treat site, where, of the 333 patients with current HCV infection, 175 (52.6%) were seen by an HCV care specialist. Of the patients with active HCV infection, 349 (59.2%) were unaware of their HCV-positive status at the time of diagnosis. Since the integration of dual HCV/HIV testing in September 2013, 9,506 HIV tests were performed; 85 (0.9%) patients had positive HIV tests, 81 (95.3%) received their confirmed HIV test result, and 77 (90.6%) were linked to HIV care. Dual HCV/HIV testing increased the number of HCV tests performed by 362 between the 9 months preceding dual testing and the first 9 months after its integration, representing a 23.7% increase. Conclusion: Our HCV testing model shows that integrated routine testing and linkage to care are feasible and improved detection and linkage to care in a primary care setting. We found that the prevalence of current HCV infection was higher than that seen locally in Philadelphia and nationwide. Intensive linkage services can increase the number of patients who successfully navigate the HCV treatment cascade. The linkage-to-care coordinator is an important position, acting as a trusted intermediary for patients being linked to care.
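As a quick consistency check, the cascade percentages reported above follow directly from the stated counts; the short script below reproduces them (note that the received-result, referral and specialist-visit percentages are taken against the 590 patients with current infection).

```python
# Reproduce the HCV care-cascade percentages from the counts in the abstract.
cascade = [
    ("HCV-antibody positive", 886, 7730),   # of all HCV tests performed
    ("HCV-RNA confirmatory test", 838, 886),
    ("current HCV infection", 590, 838),
    ("received RNA-positive result", 524, 590),
    ("referred to HCV specialist", 429, 590),
    ("seen by HCV specialist", 271, 590),
]
for step, n, denom in cascade:
    print(f"{step}: {n}/{denom} = {100 * n / denom:.1f}%")
print(f"overall prevalence: 590/7730 = {100 * 590 / 7730:.1f}%")
```

Each printed percentage matches the figure quoted in the abstract (11.5%, 94.6%, 70.4%, 88.8%, 72.7%, 45.9%, and the 7.6% overall prevalence).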

Keywords: HCV, routine testing, linkage to care, community health centers

Procedia PDF Downloads 357
364 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons, such as antimicrobial peptides, to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) alongside isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. Dormant M. tb models were prepared following two approaches and treated with different concentrations of HBD-1 and Pep B. First, 20-22-day-old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Second, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was performed to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. 
The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red-positive bacterial population, a high MPN/low CFU count, and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the establishment of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 263
363 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their position. On the other hand, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers may use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods, from 2005 to 2015, is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals fell in the bottom quartile (downwards) or top quartile (upwards) of the distribution of values for the sample. 
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that neither absolute nor relative measures of board independence and experience have a significant impact on the likelihood of earnings management. It is the level of free cash flow that is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry- and situation-specific factors.
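The estimation strategy can be sketched as follows. The snippet below is purely illustrative: it generates a synthetic firm-year sample in which free cash flow drives the probability of earnings management while board tenure does not (mirroring the reported finding), and fits a logistic regression by gradient descent; the variable names and coefficients are invented, not the paper's data.

```python
import numpy as np

# Illustrative logistic regression on synthetic firm-year data: classify
# earnings-management firm-years from free cash flow (FCF) and independent
# director tenure, fitted by batch gradient descent on the log-loss.
rng = np.random.default_rng(3)
n = 400
fcf = rng.normal(0, 1, n)            # standardised free cash flow
tenure = rng.normal(0, 1, n)         # standardised independent-director tenure
# Labels generated so that FCF matters and tenure does not:
logit = -0.5 + 1.5 * fcf + 0.0 * tenure
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), fcf, tenure])  # intercept + two covariates
w = np.zeros(3)
for _ in range(5000):                            # gradient descent on log-loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n
print(np.round(w, 2))  # FCF coefficient clearly positive; tenure coefficient near zero
```

In the actual study the marginal effects would come from the fitted coefficients on the real panel (with the quartile-based earnings-management indicator as the outcome), but the mechanics of the classification are the same.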

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 233