Search results for: product-service systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9401

6521 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as the process of finding patterns and relationships among data in a database in order to build predictive models. Applications of data mining have extended into many sectors, including healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases, applying various techniques and algorithms that differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining involves several steps and user decisions. The process starts by developing an understanding of the scope of the application and of prior knowledge in the area, and by identifying the knowledge discovery (KD) process from the stakeholders' point of view; it finishes by acting on the discovered knowledge: deploying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, neural network (NN), classification tree, and CN2 algorithms, as well as related similar studies, were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed with approximately ninety percent accuracy based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity than expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
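The evaluation the abstract describes, computing sensitivity, specificity, and accuracy for each classifier, can be sketched in a few lines. The toy k-NN classifier and the symptom feature vectors below are illustrative assumptions, not the study's actual dataset or pipeline:

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = asthma)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Hypothetical feature vectors: (wheeze score, cough score, family history)
train_X = [(2, 3, 1), (3, 2, 1), (0, 1, 0), (1, 0, 0), (3, 3, 1), (0, 0, 0)]
train_y = [1, 1, 0, 0, 1, 0]
test_X = [(2, 2, 1), (0, 1, 0)]
test_y = [1, 0]

preds = [knn_predict(train_X, train_y, x) for x in test_X]
sens, spec, acc = diagnostic_metrics(test_y, preds)
```

In the study's setting, the same metrics computation would be applied to the predictions of each algorithm (KNN, SVM, Naïve Bayes, NN, classification tree, CN2) to populate the comparison and the ROC curves.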

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 450
6520 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage

Authors: M. Vezir Kahraman, Emre Basturk

Abstract:

Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential to achieve energy savings, which in turn decrease the environmental impact of energy usage. For this purpose, phase change materials (PCMs), which work as 'latent heat storage units' that can store or release large amounts of energy, are preferred. PCMs absorb, collect and discharge thermal energy during cycles of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found different application areas such as solar energy storage and transfer, HVAC (heating, ventilating and air conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distribution, industrial waste heat recovery, underfloor heating systems and modified fabrics in textiles. Ultraviolet (UV)-curing technology has many advantages that make it applicable in many different fields: low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and environmental friendliness. One of the many advantages of UV-cured PCMs is that they prevent the interior PCM from leaking. A shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, the leakage problem is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system; leakage is minimized because the photo-cross-linked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM.
Photo-crosslinked thiol-ene based polymers containing fatty alcohols were prepared and characterized for use as phase change materials. Different types of fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR techniques. The phase transition behavior and thermal stability of the prepared photo-crosslinked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Projects.

Keywords: differential scanning calorimetry (DSC), Polymeric phase change material, thermal energy storage, UV-curing

Procedia PDF Downloads 230
6519 Fundamentals of Mobile Application Architecture

Authors: Mounir Filali

Abstract:

Companies use many innovative ways to reach their customers and stay ahead of the competition. Along with the growing demand for innovative business solutions comes a demand for new technology, and the most noticeable area of demand for business innovation is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services; they have realized that developing mobile apps gives them a competitive edge, and many have begun to rapidly develop mobile apps to stay ahead of the competition. Mobile application development helps companies meet the needs of their customers. Mobile apps also help businesses take advantage of every potential opportunity to generate leads that convert into sales. With the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions on offer. Today, companies can take the traditional route of a software development team to build their own mobile applications; however, there are also many platform-ready "low-code and no-code" mobile app tools to choose from. These options streamline business processes and help companies be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the structural systems and design elements that make up a mobile application, as well as the technologies, processes, and components used during application development. All elements of the mobile application architecture form the underlying foundation of an application; developing a good mobile app architecture requires proper planning and strategic design.
The technology framework or platform on both the back end and the user-facing side of a mobile application is part of its mobile architecture. Software programmers loosely refer to this set of mobile architecture systems and processes as the "technology stack."

Keywords: mobile applications, development, architecture, technology

Procedia PDF Downloads 106
6518 A Comparative Legal Enquiry on the Concept of Invention

Authors: Giovanna Carugno

Abstract:

The concept of invention is rarely scrutinized by legal scholars, since it is a slippery one, full of nuances and difficult to define. When does an idea become relevant for patent law? When is it even possible to say what an invention is? This is the first question to be answered in order to obtain a patent, but it is sometimes neglected by treatises or reduced to very simple, automatically recited definitions, perhaps also because it is more a transnational and cultural concept than a mere institution of law. Tautology is used to avoid the challenge (in United States patent regulation, the inventor is the one who contributed to a patentable invention); in other cases, a clear definition is surprisingly not even provided (see, e.g., the European Patent Convention). In Europe, the issue is still more complicated, because there are several different solutions, elaborated inorganically by national court systems, varying from one to the other only with the aim of solving different IP cases. Even a neighboring domain such as copyright law does not assist the research, since an author in that field is entitled to be the 'inventor' or the 'author' and to protection insofar as he produces something new. Novelty is not enough in patent law. A simple distinction between a mere improvement that can be achieved by a person skilled in the art (a sort of reasonable man of other sectors) and a non-obvious change rising to the dignity of protection does not go far enough: it still does not define the concept, and it is rigid and not fruitful. So, setting aside for the moment the issue of defining the invention/inventor, our proposal is to scrutinize the possible self-sufficiency of a system in which the inventor or the improver is awarded royalties or similar compensation according to the economic improvement he was able to bring.
The law, in this case, is in the penumbra of misleading concepts, divided between facts that are obscure and technical and that do not necessarily involve legal issues. The aim of this paper is to find a single definition (or, at least, the minimum elements common to the different legal systems) of what an invention legally is, and to identify hints for practically recognizing an authentic invention. In conclusion, it will propose an alternative system in which the invention is no longer considered and the only things that matter are the revenues generated by the technological improvement caused by the worker's activity.

Keywords: comparative law, intellectual property, invention, patents

Procedia PDF Downloads 186
6517 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”

Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin

Abstract:

Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulating I&C systems and the thermal-hydraulic, neutron-physical and electrical processes in the elements and systems of an NPP unit (initially for WWER pressurized water reactors). In 2012 it was named the complex simulation suite "Virtual Unit of NPP" (CSS "VEB" for short). Proper application of this complex tool results in a coupled mathematical computational model; for a specific NPP design, this model is called the Virtual Power Unit (VPU for short). A VPU can be used for comprehensive modelling of power unit operation, checking operator functions in a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS "VEB" contains a combination of thermal-hydraulic codes: the best-estimate (two-fluid) calculation codes KORSAR and CORTES and the homogeneous calculation code TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models at different levels of detail, up to a nodalization scheme with real geometry. The result is in some respects similar to the notion of an "engineering/testing simulator" described by the European Utility Requirements (EUR) for LWR nuclear power plants. The paper is dedicated to a description of the tools mentioned above and an example of the application of the engineering thermal-hydraulic simulator to the analysis of the boric acid concentration in the primary coolant (as changed by the make-up and boron control system).
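As a rough illustration of the kind of analysis the abstract closes with, a well-mixed feed-and-bleed model gives the boric acid concentration under make-up dilution. The circuit volume, make-up rate and initial concentration below are hypothetical, and a full engineering simulator would resolve the nodalization instead of this lumped model:

```python
import math

def boron_concentration(c0, c_feed, q, v, t):
    """Analytic solution of the well-mixed dilution model
    dC/dt = (q / v) * (c_feed - C): makeup water at concentration
    c_feed displaces coolant at rate q from a circuit of volume v."""
    return c_feed + (c0 - c_feed) * math.exp(-q * t / v)

# Hypothetical numbers: 300 m3 circuit, 10 m3/h of pure-water makeup,
# diluting from an initial 10 g/kg boric acid concentration over 10 h.
c_after_10h = boron_concentration(c0=10.0, c_feed=0.0, q=10.0, v=300.0, t=10.0)
```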

Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit

Procedia PDF Downloads 382
6516 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management

Authors: M. Shahab Uddin, Pennung Warnitchai

Abstract:

Currently, more than half of the world's population lives in cities, and the number and size of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, the complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to domino effects. Unplanned rapid growth, increased connectivity and interdependency among infrastructures, resource scarcity, and many other socio-political factors affect the typical state of an urban system and make it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems consistently face external threats from natural and man-made hazards. Cities are not just complex, interdependent systems; they are also hubs of the economy, politics, culture, education, and more. For survival and sustainability, complex urban systems need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects when some components function improperly; conversely, ineffective management during the overall disorder caused by hazard devastation may make the system more fragile and push it toward ultimate collapse. Accordingly, the current research hypothesizes that a hazardous event starts its journey as an emergency, and that the system's internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform vulnerabilities into a dynamic damaging force.
An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation toward a catastrophe. Situation awareness and factual decision-making are the keys to winning this battle. The current research proposes a contextual decision support system for an urban critical infrastructure system that integrates three models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, a dynamic restoration process for individual infrastructures based on facility damage state and the overall disruption in the surrounding support environment; and 3) an optimization model, which ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, the integrated model will help disaster managers and decision-makers check all alternative decisions before implementation and support producing the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of damage cascades but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways. System understanding, situation awareness, and factual decisions may significantly help an urban system survive and sustain itself.
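A minimal sketch of the first of the three models, damage propagation over an infrastructure dependency graph, might look as follows. The facilities, dependency edges, and the simple "any failed dependency fails the node" rule are illustrative assumptions, not the paper's actual cascade model:

```python
from collections import deque

# Directed dependency map: DEPENDS_ON[b] lists the facilities b depends on,
# so damage at any of them can propagate to b.
DEPENDS_ON = {
    "substation": [],
    "water_pump": ["substation"],
    "hospital": ["substation", "water_pump"],
    "telecom": ["substation"],
    "traffic": ["telecom"],
}

def cascade(initial_failures, depends_on):
    """Breadth-first propagation: a facility fails once any dependency fails."""
    failed = set(initial_failures)
    queue = deque(initial_failures)
    while queue:
        down = queue.popleft()
        for node, deps in depends_on.items():
            if node not in failed and down in deps:
                failed.add(node)
                queue.append(node)
    return failed

failed = cascade({"substation"}, DEPENDS_ON)
```

A real damage cascade model would replace the binary rule with facility damage states and fractional dependency thresholds, but the graph traversal skeleton stays the same.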

Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system

Procedia PDF Downloads 229
6515 Modeling Aggregation of Insoluble Phase in Reactors

Authors: A. Brener, B. Ismailov, G. Berdalieva

Abstract:

In this paper we present a modification of the kinetic Smoluchowski equation for binary aggregation, applied to systems with first- and second-order chemical reactions whose main product is insoluble. The goal of this work is to create the theoretical foundation and engineering procedures for designing chemical apparatuses under the joint action of chemical reactions and the aggregation of the insoluble dispersed phases formed in the working zones of the reactor.
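A numerical sketch of such a model is the standard discrete Smoluchowski system with a monomer source term representing the chemical reaction producing the insoluble phase. The constant kernel, unit source rate, and truncation at ten cluster sizes are illustrative assumptions, not the paper's modification:

```python
def smoluchowski_step(n, kernel, source, dt):
    """One explicit Euler step of the discrete Smoluchowski equation.
    n[k] is the number density of (k+1)-mers; the source term injects
    monomers, modelling the reaction that forms the insoluble product."""
    kmax = len(n)
    dn = [0.0] * kmax
    for i in range(kmax):
        for j in range(kmax):
            rate = kernel * n[i] * n[j]
            dn[i] -= rate                    # i-mer consumed in collision
            if i + j + 1 < kmax:
                dn[i + j + 1] += 0.5 * rate  # merged cluster formed (1/2 avoids double count)
    dn[0] += source                          # monomers produced by the reaction
    return [max(0.0, ni + dt * d) for ni, d in zip(n, dn)]

n = [0.0] * 10
for _ in range(100):                         # integrate to t = 1.0
    n = smoluchowski_step(n, kernel=1.0, source=1.0, dt=0.01)

total_mass = sum((k + 1) * nk for k, nk in enumerate(n))
```

The aggregation terms only redistribute mass among cluster sizes, so the total mass tracks the amount injected by the source (apart from truncation at the largest size).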

Keywords: binary aggregation, clusters, chemical reactions, insoluble phases

Procedia PDF Downloads 309
6514 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF) energy, among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a mobile telephony frequency band, also called second-generation (2G) mobile networking; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band, which is internationally reserved for industrial, scientific and medical development with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4 - 2.4835 GHz). The rectenna presented in this work was designed to achieve efficiency above 50% for an input power of -15 dBm. It is known that in wireless energy capture systems the signal power is very low and varies greatly, which is why this ultra-low input power was chosen. The rectenna was built on the low-cost FR4 (flame-retardant) substrate, and the selected antenna is a microstrip meandered dipole, optimized using the CST Studio software.
This antenna has high efficiency, high gain and high directivity. Gain describes how efficiently an antenna captures the signals transmitted by another antenna and/or station; directivity describes how well an antenna captures energy in a given direction. The rectifier circuit uses a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the Schottky diode SMS7630, which presents a low barrier voltage (135-240 mV) and a wider band compared to other diode types; these attributes make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor as part of its input and output filters. The inductor reduces the effect of dispersion on the efficiency of the rectifier circuit, while the capacitor removes the AC component of the rectified signal, smoothing the output.
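The -15 dBm design point and the 50% efficiency target can be checked with a short conversion. The 18 μW DC output used below is a hypothetical figure for illustration, not a measured result from this work:

```python
def dbm_to_mw(p_dbm):
    """Convert an RF power level from dBm to milliwatts: P(mW) = 10^(dBm/10)."""
    return 10 ** (p_dbm / 10.0)

def rectenna_efficiency(p_dc_mw, p_in_dbm):
    """RF-to-DC conversion efficiency = DC output power / RF input power."""
    return p_dc_mw / dbm_to_mw(p_in_dbm)

p_in_mw = dbm_to_mw(-15.0)               # about 0.0316 mW, i.e. 31.6 microwatts
eta = rectenna_efficiency(0.018, -15.0)  # hypothetical 18 uW DC output -> ~57%
```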

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 125
6513 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation

Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick

Abstract:

Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent a pasture-based beef cattle finishing system in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle and 10% of rearers and finishers in NZ. The cattle agents consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truckload) had an average live weight of 100 kg before selling them on to finishers. Finishers mainly obtained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before their second winter, and the remainder before their third winter. The model finished a higher number of bulls than heifers and steers, although this was 4% lower than the industry-reported value. Holstein-Friesian and Holstein-Friesian–Jersey crossbred cattle dominated the dairy-origin beef finishing system; Jersey cattle accounted for less than 5% of total processed beef cattle. Further studies including retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
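The rearer-finisher interaction described above can be sketched as a minimal ABM. The weekly time step, the stochastic calf supply, and the fixed finisher demand below are simplifying assumptions rather than the study's parameterization:

```python
import random

class Rearer:
    """Rears calves and sells them in full truckloads of 60."""
    def __init__(self):
        self.calves = 0

    def rear(self, intake):
        self.calves += intake

    def sell_batch(self):
        # A batch ships only when a full truckload of 60 is ready.
        if self.calves >= 60:
            self.calves -= 60
            return 60
        return 0

class Finisher:
    """Buys weaners from the rearer first, then directly from dairy farms."""
    def __init__(self, weekly_demand):
        self.weekly_demand = weekly_demand
        self.from_rearers = 0
        self.from_dairy = 0

    def buy(self, offered):
        taken = min(offered, self.weekly_demand)
        self.from_rearers += taken
        self.from_dairy += self.weekly_demand - taken  # shortfall bought direct

random.seed(1)
rearer, finisher = Rearer(), Finisher(weekly_demand=60)
for week in range(20):
    rearer.rear(random.randint(40, 80))   # stochastic weekly calf supply
    finisher.buy(rearer.sell_batch())

total = finisher.from_rearers + finisher.from_dairy
```

Even this toy version reproduces the paper's qualitative mechanism: when rearer supply lags, the finisher's direct-from-dairy purchases rise to cover demand.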

Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers

Procedia PDF Downloads 101
6512 The Current State Of Human Gait Simulator Development

Authors: Stepanov Ivan, Musalimov Viktor, Monahov Uriy

Abstract:

This report examines the current state of human gait simulator development based on a model of the human hip joint. The unit will create a database of human gait types, useful for setting up and calibrating mechanotherapy devices, as well as for creating new rehabilitation systems, exoskeletons and walking robots. The system allows ample configuration of dimensions and stiffness while maintaining relative simplicity.

Keywords: hip joint, human gait, physiotherapy, simulation

Procedia PDF Downloads 407
6511 Development of a Sustainable Municipal Solid Waste Management for an Urban Area: Case Study from a Developing Country

Authors: Anil Kumar Gupta, Dronadula Venkata Sai Praneeth, Brajesh Dubey, Arundhuti Devi, Suravi Kalita, Khanindra Sharma

Abstract:

Increases in urbanization and industrialization have improved the standard of living. At the same time, however, the challenges caused by improper solid waste management are also increasing. Municipal solid waste management is considered a vital step in the development of urban infrastructure. The present study focuses on developing a solid waste management plan for an urban area in a developing country. The current state of solid waste management practices in various urban bodies in India is summarized. Guwahati, a city in the northeastern part of the country and one of the targeted smart cities (under the government's Smart Cities program), was chosen as a case study to develop and implement the solid waste management plan. The city was divided into divisions, waste samples were collected for each division according to ASTM D5231-92 (2016), and a composite sample was prepared to represent the waste of the entire city. The solid waste was characterized physically and chemically, mainly through proximate and ultimate analyses. Existing primary and secondary collection systems were studied, and possibilities for enhancing them were discussed. The composition of solid waste for the overall city was found to be: organic matter 38%, plastic 27%, paper and cardboard 15%, textiles 9%, inert material 7%, and others 4%. During the conference presentation, further characterization results in terms of thermogravimetric analysis (TGA), pH, and water-holding capacity will be discussed. Waste management options that optimize activities such as recycling, recovery, reuse, and reduction will be presented and discussed.

Keywords: proximate, recycling, thermal gravimetric analysis (TGA), solid waste management

Procedia PDF Downloads 193
6510 Characterization of Optical Communication Channels as Non-Deterministic Model

Authors: Valentina Alessandra Carvalho do Vale, Elmo Thiago Lins Cöuras Ford

Abstract:

Telecommunications sectors are increasingly adopting optical technologies because of their ability to transmit large amounts of data over long distances. However, as in all data transmission systems, optical communication channels suffer from undesirable, non-deterministic effects, and it is essential to understand them. This research therefore assesses and characterizes these effects and explores their beneficial uses.

Keywords: optical communication, optical fiber, non-deterministic effects, telecommunication

Procedia PDF Downloads 788
6509 Consequence of Multi-Templating of Closely Related Structural Analogues on a Chitosan-Methacrylic Acid Molecularly Imprinted Polymer Matrix: Thermal and Chromatographic Traits

Authors: O. Ofoegbu, S. Roongnapa, A. N. Eboatu

Abstract:

Most polluted environments, and most challengingly aerosol environments, contain a cocktail of different toxicants. Multi-templating of matrices has recently been targeted by researchers in a bid to solve complex mixed-toxicant challenges using single or common remediation systems. This investigation examines the effect of such a multi-templated system through the synthesis, by non-covalent interaction, of a molecularly imprinted polymer architecture using nicotine and its structural analogue phenylalanine amide, individually and in a 50:50 blend, as template materials in a chitosan-methacrylic acid functional monomer matrix. The polymerization temperature was 60 °C, and the polymerization time was 12 h (water-bath heating) or 4 min (microwave heating). The characteristic thermal properties of the molecularly imprinted materials were investigated using simultaneous thermal analysis (STA) profiling, while the absorption and separation efficiencies, based on the relative retention times and peak areas of the templates, were studied among other properties. Transmission electron microscopy (TEM) results show the creation of heterogeneous nanocavities; nevertheless, the introduction of caffeine, a close structural analogue, presented near-zero perfusion. This confirms the selectivity and specificity of the templated polymers despite their dual-templated nature. The STA results showed that the materials have decomposition temperatures above 250 °C and a relative mass loss of less than 19% within 50 min of heating. Consequently, multi-templated systems can be fabricated to sequester specifically and selectively targeted toxicants in a mixed-toxicant system effectively.

Keywords: chitosan, dual-templated, methacrylic acid, mixed-toxicants, molecularly-imprinted-polymer

Procedia PDF Downloads 118
6508 Enhanced Exchange Bias in Poly-crystalline Compounds through Oxygen Vacancy and B-site Disorder

Authors: Koustav Pal, Indranil Das

Abstract:

In recent times, perovskite and double perovskite (DP) systems have attracted much interest because they provide a rich material platform for studying emergent functionalities such as near-room-temperature ferromagnetic (FM) insulators, exchange bias (EB), magnetocaloric effects, colossal magnetoresistance, and anisotropy. These interesting phenomena emerge from complex couplings between the spin, charge, orbital, and lattice degrees of freedom in these systems. Various magnetic phenomena, such as exchange bias, spin glass behavior, memory effects, and colossal magnetoresistance, can be modified and controlled through antisite (B-site) disorder or by controlling the oxygen concentration of the material. By controlling the oxygen concentration in SrFe0.5Co0.5O3-δ (SFCO) (δ ∼ 0.3), we achieve an intrinsic exchange bias effect with a large exchange bias field (∼1.482 T) and a giant coercive field (∼1.454 T). We then modified the B-site by introducing 10% iridium into the system; this modification raises the exchange bias field to as high as 1.865 T and the coercive field to 1.863 T. Our work aims to investigate the effects of oxygen deficiency and B-site disorder on exchange bias in oxide materials for potential technological applications. Structural characterization techniques, including X-ray diffraction, scanning tunneling microscopy, and transmission electron microscopy, were used to determine the crystal structure and particle size, and X-ray photoelectron spectroscopy was used to identify the valence states of the ions. Magnetic analysis revealed that oxygen deficiency resulted in a large exchange bias due to a significant degree of ionic mixing. Iridium doping was found to break interaction paths, resulting in various antiferromagnetic and ferromagnetic surfaces that enhance the exchange bias.

Keywords: coercive field, disorder, exchange bias, spin glass

Procedia PDF Downloads 81
6507 Practical Experiences in the Development of a Lab-Scale Process for the Production and Recovery of Fucoxanthin

Authors: Alma Gómez-Loredo, José González-Valdez, Jorge Benavides, Marco Rito-Palomares

Abstract:

Fucoxanthin is a carotenoid that exerts multiple beneficial effects on human health, including antioxidant, anti-cancer, anti-diabetic, and anti-obesity activity, making the development of a whole process for its production and recovery an important contribution. In this work, the lab-scale production and purification of fucoxanthin from Isochrysis galbana have been studied. In batch cultures, low light intensity (13.5 μmol/m²s) and bubble agitation were the best conditions for production of the carotenoid, with product yields of up to 0.143 mg/g. After ethanolic extraction of fucoxanthin from the biomass and hexane partition, further recovery and purification of the carotenoid were accomplished by means of alcohol-salt aqueous two-phase system (ATPS) extraction followed by an ultrafiltration (UF) step. Among the studied systems, an ATPS composed of ethanol and potassium phosphate (volume ratio (VR) = 3; tie-line length (TLL) 60% w/w) presented a fucoxanthin recovery yield of 76.24 ± 1.60% and removed 64.89 ± 2.64% of the carotenoid and chlorophyll pollutants. For UF, the addition of ethanol to the recovered ATPS stream to a final concentration of 74.15% (w/w) reduced the protein content by approximately 16%, increasing product purity with a recovery yield of about 63% of the compound in the permeate stream. Considering the production, extraction, and primary recovery (ATPS and UF) steps, a global fucoxanthin recovery of around 45% should be expected. Although other purification technologies, such as centrifugal partition chromatography, can achieve fucoxanthin recoveries of up to 83%, the process developed in the present work does not require large volumes of solvents or expensive equipment. Moreover, it has the potential to be scaled up to commercial scale and represents a cost-effective alternative to traditional separation techniques such as chromatography.
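Chaining the stage yields reproduces the quoted global figure. Only the ATPS and UF yields are reported in the abstract, so the extraction/partition yield below is a back-calculated assumption chosen to match the ~45% global recovery:

```python
# Stage recovery yields; extraction_yield is assumed, not reported.
atps_yield = 0.7624        # ATPS recovery (reported: 76.24%)
uf_yield = 0.63            # ultrafiltration recovery (reported: ~63%)
extraction_yield = 0.937   # assumed so the chain matches the ~45% global figure

# Overall recovery is the product of the sequential stage yields.
global_recovery = extraction_yield * atps_yield * uf_yield
```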

Keywords: aqueous two-phase systems, fucoxanthin, Isochrysis galbana, microalgae, ultrafiltration

Procedia PDF Downloads 425
6505 Violence Against Nurses – Healthcare Workers with Great Sacrifice – During the COVID-19 Pandemic: A Discussion Article

Authors: Sarieh Poortaghi, Zakiyeh Jafaryparvar, Marzieh Hasanpour, Reza Negarandeh

Abstract:

Aim: This article aims to discuss how violence against healthcare workers, especially nurses, affects healthcare systems and the quality of patient care. The causes of violence and strategies to reduce it are also discussed. Methods: A discursive review of the literature on violence against nurses during the COVID-19 pandemic, its causes, and its outcomes. Results: The COVID-19 pandemic has led to a significant increase in violence against healthcare providers. Perpetrators of violence against nurses may include patients, companions, visitors, colleagues such as doctors and other nurses, supervisors, and managers. Many individuals who experience violence in healthcare environments refrain from reporting it. The causes of violence against nurses include long periods spent with patients, the perception of nursing as a low-status profession, nurses' gender, direct and frequent contact with patients and their companions, inadequate facilities and high workload, weak healthcare delivery systems in public hospitals and inequality in health, the nature of the department and personnel shift type, work shifts and staff shortages, nurses being forced to work in non-standard conditions during the COVID-19 pandemic, the prohibition of patient visits during the pandemic, patient death and nurses' sense of incompetence, and the expression of stress through aggression. Workplace violence reduces job satisfaction and increases continuous psychological stress, which negatively affects the personal and professional lives of nurses. Potential strategies for reducing workplace violence include protecting healthcare workers through legislation, improving communication with patients and their families, critically analyzing information in social media, facilitating patient access through telemedicine strategies, and improving access to primary healthcare services.

Keywords: nurses, healthcare workers, COVID-19, nursing

Procedia PDF Downloads 16
6505 Exploring Partnership Brokering Science in Social Entrepreneurship: A Literature Review

Authors: Lani Fraizer

Abstract:

Increasingly, individuals from diverse professional and academic backgrounds are making a conscious choice to pursue careers related to social change; a sophisticated understanding of social entrepreneurship education is becoming ever more important. Social entrepreneurs are impassioned change makers who characteristically combine leadership and entrepreneurial spirit to solve social ills affecting our planet. Generating partnership opportunities and nurturing them is an important part of their change-making work. Faced with the complexities of these partnerships, social entrepreneurs and those who work with them need to be well prepared to tackle new and unforeseen challenges. As partnerships become even more critical to advancing initiatives at scale, understanding the partnership brokering role is especially important for educators who prepare these leaders to establish and sustain multi-stakeholder partnerships. This paper aims to provide practitioners in social entrepreneurship with enhanced knowledge of partnership brokering and to identify directions for future research. A literature review covering January 1977 to May 2015 was conducted using the combined keywords ‘partnership brokering’ and ‘social entrepreneurship’ via WorldCat, one of the largest database catalogs in the world, with collections from more than 10,000 libraries worldwide. The query focused on literature written in English and analyzed solely the role of partnership brokering in social entrepreneurship. The synthesis of the literature review found three main themes: the need for more professional awareness of partnership brokering and its added value in systems change-making work, the need for more knowledge on developing partnership brokering competencies, and the need for more applied research on partnership brokering as it is practiced by practitioners in social entrepreneurship.
The results of the review emphasize and reiterate the importance of partnership brokers in social entrepreneurship work, and act as a reminder of the need for further scholarly research in this area to bridge the gap between practice and research.

Keywords: partnership brokering, leadership, social entrepreneurship, systems changemaking

Procedia PDF Downloads 343
6504 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, the approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘subset accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
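The prediction and evaluation idea can be illustrated with a minimal sketch (the step texts, component names, and the nearest-neighbour stand-in for the trained model are all invented for illustration):

```python
# A minimal sketch (not the authors' implementation) of the multi-label idea:
# map natural-language test-step specifications to sets of automation
# components learned from historical projects, and score with subset accuracy.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Token-overlap similarity between two specifications
    return len(a & b) / len(a | b) if a | b else 0.0

# "Historical data": step specification -> set of automation components
history = {
    "open ignition and start engine": {"IgnitionCtrl", "EngineCtrl"},
    "set vehicle speed to target":    {"SpeedCtrl"},
    "check warning lamp is off":      {"LampCheck"},
}

def predict(step):
    # Nearest historical step by token overlap stands in for a trained model
    best = max(history, key=lambda h: jaccard(tokens(step), tokens(h)))
    return history[best]

def subset_accuracy(pairs):
    # Subset accuracy: a prediction counts only if the full label set matches
    hits = sum(predict(spec) == labels for spec, labels in pairs)
    return hits / len(pairs)

test_set = [
    ("open ignition and start the engine", {"IgnitionCtrl", "EngineCtrl"}),
    ("set the vehicle speed to target",    {"SpeedCtrl"}),
]
print(subset_accuracy(test_set))  # 1.0 on this toy set
```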

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 134
6503 Chronic Fatigue Syndrome/Myalgic Encephalomyelitis in Younger Children: A Qualitative Analysis of Families’ Experiences of the Condition and Perspective on Treatment

Authors: Amberly Brigden, Ali Heawood, Emma C. Anderson, Richard Morris, Esther Crawley

Abstract:

Background: Paediatric chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME) is characterised by persistent, disabling fatigue. Health services see patients below the age of 12. This age group experiences high levels of disability, with low school attendance and high levels of fatigue, anxiety, functional disability and pain. CFS/ME interventions have been developed for adolescents, but the developmental needs of younger children suggest treatment should be tailored to this age group. Little is known about how intervention should be delivered to these children, and further work is needed to explore this. Qualitative research aids the patient-centred design of health interventions. Methods: Five to 11-year-olds and their parents were recruited from a specialist CFS/ME service. Semi-structured interviews explored the families’ experience of the condition and perspectives on treatment. Interactive and arts-based methods were used. Interviews were audio-recorded, transcribed and analysed thematically. Results: 14 parents and 7 children were interviewed. Early analysis of the interviews revealed the importance of the social-ecological setting of the child, which led to themes being developed in the context of systems theory. Theme one relates to the level of the child, theme two to the family system, theme three to the organisational and societal systems, and theme four cuts across all levels. Theme 1: The child’s capacity to describe, understand and manage their condition. Younger children struggled to describe internal experiences such as physical symptoms. Parents felt younger children did not understand some concepts of CFS/ME and did not have the capability to monitor and self-regulate their behaviour, as required by treatment. A spectrum of abilities was described; older children (10-11-year-olds) were more involved in clinical sessions and took more responsibility for self-management.
Theme 2: Parents’ responsibility for managing their child’s condition. Parents took responsibility for regulating their child’s behaviour in accordance with the treatment programme. They structured their child’s environment, gave direct instructions to their child, and communicated the needs of their child to others involved in care. Parents wanted their child to experience a 'normal' childhood and took steps to shield their child from medicalisation, including diagnostic labels and clinical discussions. Theme 3: Parental isolation and the role of organisational and societal systems. Parents felt unsupported in their role of managing the condition and felt that negative responses from primary care health services and schools were underpinned by a lack of awareness and knowledge about CFS/ME in younger children. This sometimes led to a protracted time to diagnosis. Parents felt that schools could play an important role in managing the child’s condition. Theme 4: Complexity and uncertainty. Many parents valued specialist treatment (which included activity management, physiotherapy, sleep management, dietary advice, medical management and psychological support) but felt it needed to account for the complexity of the condition in younger children. Some parents expressed uncertainty about the diagnosis and the treatment programme. Conclusions: Interventions for younger children need to consider the 'systems' (family, organisational and societal) involved in the child’s care. Future research will include interviews with clinicians and schools supporting younger children with CFS/ME.

Keywords: chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME), pediatric, qualitative, treatment

Procedia PDF Downloads 142
6502 Solar Energy for Decontamination of Ricinus communis

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

Solar energy was used as a heat source for Ricinus communis press cake, with the objective of eliminating or minimizing the percentage of the toxin in it so that the cake can be used as animal feed. A cylindrical solar concentrator and a flat-plate collector were used as the heating systems. In the focal area of the solar concentrator, a trough support with a greenhouse (stove) effect was placed. Parameters that indicate the efficiency of the systems for the proposed objective were analyzed.
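As a hedged illustration of how such a system's thermal efficiency parameter can be computed (the standard useful-heat over incident-energy ratio; all numbers are invented for the example, not measurements from the study):

```python
# Sketch of a collector thermal-efficiency estimate: heat absorbed by the
# treated mass divided by solar energy incident on the aperture.
def collector_efficiency(mass_kg, cp_j_per_kg_k, delta_t_k,
                         irradiance_w_m2, area_m2, time_s):
    useful_heat = mass_kg * cp_j_per_kg_k * delta_t_k   # J absorbed
    incident = irradiance_w_m2 * area_m2 * time_s       # J received
    return useful_heat / incident

# Example: 5 kg of cake heated by 60 K over one hour on a 1 m^2 aperture
# under 800 W/m^2 irradiance (illustrative values only)
eta = collector_efficiency(5.0, 1500.0, 60.0, 800.0, 1.0, 3600.0)
print(round(eta, 3))
```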

Keywords: solar energy, concentrator, Ricinus communis, temperature

Procedia PDF Downloads 428
6501 Vibro-Tactile Equalizer for Musical Energy-Valence Categorization

Authors: Dhanya Nair, Nicholas Mirchandani

Abstract:

Musical haptic systems can enhance a listener’s musical experience while providing an alternative platform for the hearing impaired to experience music. Current music tactile technologies focus on representing tactile metronomes to synchronize performers or on encoding musical notes into distinguishable (albeit distracting) tactile patterns. There is growing interest in the development of musical haptic systems to augment the auditory experience, although the haptic-music relationship is still not well understood. This paper presents a tactile music interface that delivers vibrations to multiple fingertips in synchrony with auditory music. Like an audio equalizer, different frequency bands are filtered out, and the power in each band is computed and converted to a corresponding vibrational strength. These vibrations are felt on different fingertips, each corresponding to a different frequency band. Songs from different parts of the musical spectrum, as classified by their energy and valence, were used to test the effectiveness of the system and to understand the relationship between music and tactile sensations. Three participants were trained on one song categorized as sad (low energy and low valence score) and one song categorized as happy (high energy and high valence score). They were trained both with and without auditory feedback (first listening to the song while experiencing the tactile rendering on their fingertips, then experiencing the vibrations alone without the music). The participants were then tested on three songs from both categories, without any auditory feedback, and were asked to classify the tactile vibrations they felt into either category. The participants were blinded to the songs being tested and were given no feedback on the accuracy of their classification. They classified the music with 100% accuracy.
Although the songs tested were from two opposite ends of the spectrum (sad/happy), these preliminary results show the potential of a vibrotactile equalizer such as the one presented for augmenting the musical experience while furthering the current understanding of the music-tactile relationship.
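The band-splitting step described above can be sketched as follows (a naive DFT over a short frame; the band edges, frame length, and power-to-actuator mapping are illustrative choices, not the authors' design):

```python
# Sketch of the equalizer idea: split an audio frame into frequency bands,
# compute per-band power, and map each band's power to a vibration strength
# for one fingertip.
import math

def band_powers(signal, sample_rate, bands):
    n = len(signal)
    powers = []
    for lo, hi in bands:
        p = 0.0
        for k in range(n // 2):
            freq = k * sample_rate / n
            if lo <= freq < hi:
                # Naive DFT bin magnitude (fine for a short demo frame)
                re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                         for t in range(n))
                im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                          for t in range(n))
                p += re * re + im * im
        powers.append(p)
    return powers

def to_vibration(powers, max_level=255):
    # Scale band powers to actuator drive levels (0..max_level)
    peak = max(powers) or 1.0
    return [round(max_level * p / peak) for p in powers]

# 256-sample frame containing a 200 Hz tone sampled at 8 kHz
sr, n = 8000, 256
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(n)]
bands = [(0, 500), (500, 2000), (2000, 4000)]   # low / mid / high
levels = to_vibration(band_powers(frame, sr, bands))
print(levels)  # strongest vibration on the low-frequency fingertip
```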

Keywords: haptic music relationship, tactile equalizer, tactile music, vibrations and mood

Procedia PDF Downloads 182
6500 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions

Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita

Abstract:

Self-assembly is an attractive method for forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role during self-assembly in the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions; though these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for the preparation of various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, temperature and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization and polymeric drug delivery systems. The present investigation is focused on the evaluation of PPC formation between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). The PPCs obtained from PAS and BSA at different ratios were investigated by corroborating various characterization techniques: spectroscopy, microscopy, thermogravimetric analysis, DLS and zeta potential determination, with measurements performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes brought to the surface free energy of the prepared PPCs as a function of complex composition.
The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes in both partners, with a 1/1 protein-to-polyelectrolyte ratio proving beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures the PPC forms with a compact structure and small dimensions, with a hydrodynamic diameter close to that of BSA. The thermal behaviour of the prepared PPCs is in agreement with the composition of the complexes. The contact angle determinations show increased cohesion of the PPC films, higher than that of BSA films. The new PPC films are also more hydrophobic, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. The SEM investigation likewise evidenced the specific internal structure of the PPC, concretized in phases of different size and shape depending on the interpolymer mixture composition.

Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly

Procedia PDF Downloads 247
6499 Building and Development of the Stock Market Institutional Infrastructure in Russia

Authors: Irina Bondarenko, Olga Vandina

Abstract:

The theory of evolutionary economics underpins the preparation and application of methods forming the concept of stock market infrastructure development. The authors believe that the formation and development of the stock market infrastructure model in Russia should be based on the theory of large systems, which considers financial market infrastructure as a whole from a macroeconomic perspective, with subsequent definition of its aims and objectives. Evaluating the prospects for interaction among securities market institutions makes it possible to identify the problems associated with the development of this system. The interaction of stock market infrastructure elements reduces the cost and time of transactions, freeing up the resources of market participants for more efficient operation. The methodology of transaction analysis thus allows the financial infrastructure to be defined as a set of specialized institutions forming a modern quasi-stable system. The financial infrastructure, based on international standards, should include trading systems, regulatory and supervisory bodies, rating agencies, and settlement, clearing and depository organizations. Distributing financial assets, reducing the magnitude of transaction costs, and increasing market transparency are promising tasks for raising the level and quality of services provided by the institutions of the securities market's financial infrastructure. To improve the efficiency of the regulatory system, 'standards' must be provided for all market participants. The development of clear regulation of the barriers to stock market entry and exit, the provision of conditions for developing and implementing new laws regulating the activities of securities market participants, and the formulation of proposals aimed at minimizing risks and costs will enable positive results.
The latter will be manifested in a higher level of market participant security and, accordingly, in the greater attractiveness of this market for investors and issuers.

Keywords: institutional infrastructure, financial assets, regulatory system, stock market, transparency of the market

Procedia PDF Downloads 136
6498 Role of Energy Storage in Renewable Electricity Systems in the Grid of Ethiopia

Authors: Dawit Abay Tesfamariam

Abstract:

Ethiopia’s Climate-Resilient Green Economy (CRGE) strategy focuses mainly on the generation and proper utilization of renewable energy (RE). Nonetheless, the country's current electricity generation is dominated by hydropower. Data collected in 2016 by Ethiopian Electric Power (EEP) indicate that intermittent RE sources (solar and wind) contributed only 8%. On the other hand, the EEP generation plan for 2030 indicates that 36.1% of the energy generation share will be covered by solar and wind sources. Thus, a case study was initiated to model and compute the balance and consumption of electricity in three different scenarios: 2016, 2025, and 2030, using the EnergyPLAN model (EPM). Initially, the model was validated using the 2016 annual power generation data so that the EnergyPLAN (EP) analysis could be conducted for the two predictive scenarios. The EP simulation for 2016 showed no significant excess power generation. The EPM was then applied to analyze the role of energy storage for RE in the Ethiopian grid system. The results showed that there will be excess production of 402 MW on average and 7,963 MW at maximum in 2025. The excess power falls in the three rainy months of the year (June, July, and August). The model outcome also showed that in the dry seasons of the year there would be excess power production in the country. Consequently, based on the validated EP outcomes, there is good reason to consider alternatives for utilizing the excess energy and for RE storage. Thus, from the scenarios and model results obtained, it is realistic to infer that if the excess power is coupled with a storage system, it can stabilize the grid system and be exported to support the economy. Therefore, researchers must continue to upgrade current and upcoming storage systems to synchronize them with the potential that can be generated from renewable energy.

Keywords: renewable energy, power, storage, wind, energy plan

Procedia PDF Downloads 78
6497 Mobile App Architecture in 2023: Build Your Own Mobile App

Authors: Mounir Filali

Abstract:

Companies use many innovative ways to reach their customers and stay ahead of the competition. Along with the growing demand for innovative business solutions comes a demand for new technology, and the most noticeable area of demand for business innovation is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services; having realized that developing mobile apps gives them a competitive edge, many have begun to develop them rapidly. Mobile application development helps companies meet the needs of their customers, and mobile apps help businesses take advantage of every potential opportunity to generate leads that convert into sales. With the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions on offer. Today, companies can take the traditional route of a software development team to build their own mobile applications, but there are also many platform-ready 'low-code and no-code' mobile app options to choose from. These development options streamline business processes and help companies be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the structural systems and design elements that make up a mobile application, as well as the technologies, processes, and components used during application development. All elements of the mobile application architecture form the underlying foundation of an application, and developing a good mobile app architecture requires proper planning and strategic design.
The technology framework or platform on the back end and the user-facing side of a mobile application are both part of the application's mobile architecture. In application development, software programmers loosely refer to this set of mobile architecture systems and processes as the 'technology stack'.

Keywords: mobile applications, development, architecture, technology

Procedia PDF Downloads 104
6496 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling

Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo

Abstract:

Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to manage the available water resources effectively is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is largely a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique – genetic programming (GP). It examines the specific characteristics of the technique that make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed.
In all, a case is made for the full embrace of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
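The core GP loop reviewed above can be sketched as follows (a minimal mutation-only GP with elitist selection on a toy dataset; the operator set, parameters, and target function are illustrative, not from any cited study):

```python
# Minimal genetic-programming sketch: evolve expression trees over {x, +, -, *}
# to fit synthetic observations standing in for a hydrological record.
import random
random.seed(1)

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    # Grow a random expression tree of variables, constants, and operators
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, tuple):
        return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))
    return tree  # numeric constant leaf

def fitness(tree, data):
    # Mean squared error against observed (x, y) pairs; lower is better
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def mutate(tree):
    # With some probability replace this node with a fresh subtree,
    # otherwise recurse into one branch
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)
    if random.random() < 0.5:
        return (tree[0], mutate(tree[1]), tree[2])
    return (tree[0], tree[1], mutate(tree[2]))

# Toy "observations": y = x^2 + x
data = [(x / 2, (x / 2) ** 2 + x / 2) for x in range(-6, 7)]

pop = [random_tree() for _ in range(60)]
start = min(fitness(t, data) for t in pop)
for generation in range(40):
    pop.sort(key=lambda t: fitness(t, data))          # rank by error
    survivors = pop[:20]                              # elitist selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
end = min(fitness(t, data) for t in pop)
print(end <= start)  # elitism guarantees the best error never worsens
```

A production GP would add crossover and parsimony pressure; this sketch keeps only the selection-variation loop that defines the technique.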

Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling

Procedia PDF Downloads 300
6495 Linking Information Systems Capabilities for Service Quality: The Role of Customer Connection and Environmental Dynamism

Authors: Teng Teng, Christos Tsinopoulos

Abstract:

The purpose of this research is to explore the link between IS capabilities, customer connection, and quality performance in the service context, investigating the impact of stable and dynamic firm environments. The application of Information Systems (IS) has come to have a significant effect on contemporary service operations. Firms invest in IS on the presumption that IS will facilitate operations processes so that their performance improves. Yet, IS resources by themselves are not sufficiently 'unique'; it is therefore more useful, and theoretically more relevant, to focus on the processes they affect. One such organisational process, which has attracted considerable research attention from supply chain management scholars, is the integration of customer connection, where IS-enabled customer connection enhances communication and contact processes; with such integration of customer resources comes greater success for the firm in developing a good understanding of customer needs and setting accurate customer expectations. Nevertheless, prior studies on IS capabilities have focused either on one specific type of technology or have operationalised IS as a highly aggregated concept. Moreover, although conceptual frameworks have shown that customer integration is valuable in service provision, there is much to learn about the practice of integrating customer resources. In this research, IS capabilities are broken down into three dimensions based on the framework of Wade and Hulland – IT for supply chain activities (ITSCA), flexible IT infrastructure (ITINF), and IT operations shared knowledge (ITOSK) – and the focus is on their impact on the operational performance of service firms. With this background, this paper addresses the following questions: How do IS capabilities affect the integration of customer connection and service quality? What is the relationship between environmental dynamism and the relationship of customer connection and service quality?
A survey of 156 service establishments was conducted, and the data were analysed to determine the role of customer connection in mediating the effects of IS capabilities on firms’ service quality. Confirmatory factor analysis was used to check convergent validity, and the structural model showed a good fit. The moderating effect of environmental dynamism on the relationship between customer connection and service quality was also analysed. Results show that ITSCA, ITINF, and ITOSK have a positive influence on the degree of integration of customer connection. In addition, customer connection is positively related to service quality, and this relationship is further emphasised when firms operate in a dynamic environment. This research takes a step towards quelling concerns about the business value of IS, contributing to the development and validation of the measurement of IS capabilities in the service operations context. Additionally, it adds to the emerging body of literature linking customer connection to the operational performance of service firms. Managers of service firms should consider the strength of the mediating role of customer connection when investing in IT-related technologies and policies. In particular, service firms developing IS capabilities should simultaneously implement processes that encourage supply chain integration.
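The mediation logic tested above can be illustrated numerically (a simplified product-of-coefficients sketch on synthetic data; the variable names, effect sizes, and single-predictor simplification are assumptions, not the study's structural model):

```python
# Simplified mediation sketch: IS capability -> customer connection ->
# service quality, with the indirect effect estimated as a * b.
import random
random.seed(7)

def slope(xs, ys):
    # OLS slope for a single predictor
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

n = 156  # same sample size as the survey
is_cap  = [random.gauss(0, 1) for _ in range(n)]
connect = [0.8 * x + random.gauss(0, 0.3) for x in is_cap]    # path a
quality = [0.5 * m + random.gauss(0, 0.3) for m in connect]   # path b

a = slope(is_cap, connect)    # IS capability -> customer connection
b = slope(connect, quality)   # customer connection -> service quality
print(round(a * b, 2))  # indirect effect, close to 0.8 * 0.5 = 0.4
```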

Keywords: customer connection, environmental dynamism, information systems capabilities, service quality, service supply chain

Procedia PDF Downloads 141
6494 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so called ‘identity crisis’ in cybersecurity caused by the substantial security, privacy and usability shortcomings encountered in existing systems for identity management. Federated Identity Management (FIM) could be solution for this crisis, as it is a method that facilitates management of identity processes and policies among collaborating entities without enforcing a global consistency, that is difficult to achieve when there are ID legacy systems. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of the Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at European level to allow that every citizen recognized by a member state is to be recognized within the trust network at European level, enabling the consumption of services in other member states that, until now were not allowed, or whose concession was tedious. This is a very ambitious approach, since it tends to enable cross-border authentication of Member States citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states and it is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European Project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU Member States to act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme both for natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, requesting for authentication, that is subscribed to an eIDAS Node Proxy Service, issuing authentication assertions. 
To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids propagating identity data out of the native domain of the user or entity being identified, which prevents well-known cybersecurity problems such as network interception and man-in-the-middle attacks. Last, but not least, this system allows any country or collectivity to connect easily, enabling incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).
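The IdP → Node Connector → Node Proxy Service flow described above can be sketched in a few lines of Python. This is a minimal illustration only, not the eID@Cloud or eIDAS API: every class, attribute name, and the in-memory citizen registry is a hypothetical stand-in, and the point of the sketch is simply that the proxy in the citizen's home state authenticates the user and releases only the requested attributes, so identity data never leaves the native domain.

```python
# Hypothetical sketch of a cross-border federated authentication flow:
# a connector routes a request to the proxy service of the citizen's
# home member state, which issues an assertion with only the
# requested attributes. Names are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class AuthRequest:
    citizen_id: str               # identifier in the user's home eID scheme
    home_state: str               # member state able to authenticate the user
    requested_attributes: tuple   # e.g. ("FamilyName",)

@dataclass
class Assertion:
    subject: str
    issuer: str
    attributes: dict

class NodeProxyService:
    """Authenticates citizens of one member state and issues assertions."""
    def __init__(self, state, registry):
        self.state = state
        self.registry = registry  # citizen_id -> attribute dict (stand-in for the eID scheme)

    def authenticate(self, request):
        attrs = self.registry.get(request.citizen_id)
        if attrs is None:
            raise PermissionError("unknown citizen")
        # Only the requested attributes leave the home domain.
        released = {k: attrs[k] for k in request.requested_attributes if k in attrs}
        return Assertion(subject=request.citizen_id,
                         issuer=f"eIDAS-Proxy-{self.state}",
                         attributes=released)

class NodeConnector:
    """Routes authentication requests to the right proxy service."""
    def __init__(self, proxies):
        self.proxies = proxies  # state code -> NodeProxyService

    def request_authentication(self, request):
        return self.proxies[request.home_state].authenticate(request)

# Cross-border login: a service in one state authenticates a German citizen.
proxy_de = NodeProxyService("DE", {"DE/123": {"FamilyName": "Muster", "Age": "34"}})
connector = NodeConnector({"DE": proxy_de})
assertion = connector.request_authentication(AuthRequest("DE/123", "DE", ("FamilyName",)))
print(assertion.issuer, assertion.attributes)
```

In the real deployment the connector and proxy are separate replicated network nodes exchanging signed assertions; the sketch collapses that to function calls to keep the routing logic visible.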

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 168
6493 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting the actions of authorized users on the DB and the objects inside it while prohibiting those of non-authorized users and intruders. Successfully run organizations demand the confidentiality of their DBs: they do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are therefore central security concerns. There are four types of controls for DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (Logistic, Hénon, etc.) systems. The defining characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The Pseudo Random Number Generators (PRNGs) derived from the different chaotic algorithms are implemented in Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the resulting ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Hénon maps, where the two chaotic maps run side by side, starting from random independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
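The hybrid construction can be illustrated with a short Python sketch, standing in for the authors' Matlab implementation. Two logistic maps x_{n+1} = r·x_n·(1 − x_n) run side by side from independent initial conditions (the keys), and their quantized outputs are combined here by XOR into a keystream; the combiner, the parameter r = 3.99, and the byte quantization are illustrative assumptions, and the whole thing is a teaching toy, not a vetted cipher.

```python
# Sketch of a hybrid chaotic PRNG: two logistic maps run side by side
# from independent initial conditions (the encryption keys), and their
# 8-bit quantized samples are XORed into a keystream. Illustrative only.

def logistic_stream(x0, r=3.99):
    """Infinite byte stream from one chaotic logistic map."""
    x = x0
    while True:
        x = r * x * (1.0 - x)        # chaotic iteration, sensitive to x0
        yield int(x * 256) & 0xFF    # quantize the state to one byte

def hybrid_keystream(key1, key2, n):
    """Combine two independent logistic maps into n keystream bytes."""
    a, b = logistic_stream(key1), logistic_stream(key2)
    return bytes(next(a) ^ next(b) for _ in range(n))

def xor_cipher(data, key1, key2):
    """Encrypt/decrypt (symmetric) by XORing data with the keystream."""
    ks = hybrid_keystream(key1, key2, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

plaintext = b"confidential DB record: id=42, name=Alice, plan=basic"
ct = xor_cipher(plaintext, 0.3141592, 0.2718281)   # keys = initial conditions
assert xor_cipher(ct, 0.3141592, 0.2718281) == plaintext  # round trip
# Extreme sensitivity to initial conditions: a 1e-7 change in one key
# produces a completely different keystream after a few iterations.
assert xor_cipher(plaintext, 0.3141593, 0.2718281) != ct
```

It is exactly this key sensitivity, plus the statistical quality of the combined stream, that the NIST test suite is used to confirm in the paper.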

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 265
6492 Enhancing Environmental Impact Assessment for Natural Gas Pipeline Systems: Lessons in Water and Wastewater Management

Authors: Kittipon Chittanukul, Chayut Bureethan, Chutimon Piromyaporn

Abstract:

In Thailand, a natural gas pipeline system requires the preparation of an Environmental Impact Assessment (EIA) report, approved by the relevant agency, the Office of Natural Resources and Environmental Policy and Planning (ONEP), in the pre-construction stage. As of December 2022, PTT operates an extensive gas pipeline network spanning the country. Our experience has shown that the EIA is a significant part of the project plan. In 2011, a catastrophic flood struck multiple areas of Thailand, destroying lives and property; the event remains vivid in Thai people’s minds. Furthermore, rainfall increased for three consecutive years (2020-2022). Moreover, many municipalities are situated in low-land river basins within the tropical rainfall zone, so many areas still suffer from flooding. In 2022 in particular, water demand increased by 60% compared to the previous year. Therefore, all project activities must take into account the quality of the receiving water. These conditions underline why water and wastewater management are significant components of an EIA report. PTT has accumulated a large number of lessons learned in water and wastewater management. Our pipeline system execution comprises the EIA stage, the construction stage, and the operation and maintenance stage. We provide practical information on water and wastewater management to enhance the EIA process for pipeline systems. Examples of lessons learned include techniques to address water and wastewater impacts throughout the pipeline system, mitigation measures, and the monitoring results of those measures. This practical information will address the concerns of the ONEP committee when approving the EIA report and will build trust among stakeholders in the vicinity of the gas pipeline system area.

Keywords: environmental impact assessment, gas pipeline system, low land basin, high risk flooding area, mitigation measure

Procedia PDF Downloads 67