Search results for: heterogeneous systems
6718 Model Predictive Control Applied to Thermal Regulation of Thermoforming Process Based on the ARMAX Linear Model and a Quadratic Criterion Formulation
Authors: Moaine Jebara, Lionel Boillereaux, Sofiane Belhabib, Michel Havet, Alain Sarda, Pierre Mousseau, Rémi Deterre
Abstract:
Energy consumption efficiency is a major concern for material processing industries such as thermoforming and molding. Indeed, these systems should deliver the right amount of energy at the right time to the processed material. Recent technical developments, as well as the particularities of heating system dynamics, have made Model Predictive Control (MPC) one of the best candidates for thermal control of several production processes such as molding and composite thermoforming, to name a few. The main principle of this technique is to use a dynamic model of the process inside the controller in real time in order to anticipate the future behavior of the process, which allows the current timeslot to be optimized while taking future timeslots into account. This study presents a procedure based on predictive control that strikes a balance between optimality, simplicity, and flexibility of implementation. The development of this approach is progressive, starting from the case of a single zone before its extension to the multizone and/or multisource case, thus taking into account the thermal couplings between adjacent zones. After a quadratic formulation of the MPC criterion to ensure the thermal control, the linear expression is retained in order to reduce calculation time thanks to the use of the ARMAX linear decomposition methods. The effectiveness of this approach is illustrated by experiment and simulation.
Keywords: energy efficiency, linear decomposition methods, model predictive control, mold heating systems
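To illustrate the receding-horizon principle described above, the following minimal sketch implements an unconstrained MPC step on a first-order ARX thermal model with a quadratic tracking criterion. The model coefficients, horizon, weights, and set-point are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch of unconstrained MPC on an ARX(1) thermal model:
#   y[k+1] = a*y[k] + b*u[k]   (coefficients are illustrative, not from the paper)
a, b = 0.92, 0.35          # assumed discrete-time thermal dynamics
Np = 10                    # prediction horizon
Q, R = 1.0, 0.05           # tracking and input weights of the quadratic criterion
y_ref = 80.0               # temperature set-point (arbitrary units)

def mpc_step(y0):
    # Build the prediction y = F*y0 + G*u over the horizon (linear model => matrices)
    F = np.array([a**(i + 1) for i in range(Np)])
    G = np.zeros((Np, Np))
    for i in range(Np):
        for j in range(i + 1):
            G[i, j] = a**(i - j) * b
    # Quadratic criterion J = Q*||y - y_ref||^2 + R*||u||^2 has a closed-form minimizer
    H = Q * G.T @ G + R * np.eye(Np)
    f = Q * G.T @ (F * y0 - y_ref)
    u = np.linalg.solve(H, -f)
    return u[0]            # apply only the first move (receding horizon)

y = 20.0
for k in range(40):
    u = mpc_step(y)
    y = a * y + b * u      # plant assumed equal to the model here
print(f"temperature after 40 steps: {y:.1f}")
```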
Procedia PDF Downloads 272
6717 Corporate Governance of Enterprise IT: Research Study on IT Governance Maturity
Authors: Mario Spremic
Abstract:
Despite the financial crisis and the ongoing need for cost cutting, companies all around the world invest heavily in information systems (IS) and the underlying information technology (IT). Information systems play a very important role in modern business organizations, supporting organizational efficiency or, under certain circumstances, fostering business model innovation and change. IS can influence organizational competitiveness in two ways: by supporting operational efficiency (IS as the main infrastructure for the current business) or by differentiating the business through business model innovation and business process change. Either way, IS becomes very important to the business and needs to be aligned with strategic objectives in order to justify massive investments. A number of studies have shown that investments in IS and the underlying IT result in added business value if they are truly connected with strategic business objectives. In that sense, the proliferation of governance of enterprise IT helps companies manage, or rather govern, IS as a primary business function, with executive management involved in decisions about IS and IT. The more executive management is engaged in making decisions about IS and IT, the higher the quality of IT governance. In this paper, the practice of governing enterprise IT is investigated on a sample of the 100 largest Croatian companies. The research questions posed here reveal whether there are formal IT governance mechanisms, whether there are differences in the perceived role of IS and IT between CIOs (Chief Information Officers) and CEOs (Chief Executive Officers) of the sampled companies, and what mechanisms are used to govern massive investments in enterprise IT.
Keywords: IT governance, governance of enterprise IT, information system auditing, operational efficiency
Procedia PDF Downloads 304
6716 Photocatalytic Properties of Pt/Er-KTaO3
Authors: Anna Krukowska, Tomasz Klimczuk, Adriana Zaleska-Medynska
Abstract:
Photoactive materials have attracted attention due to their potential application in the degradation of environmental pollutants to non-hazardous compounds via an eco-friendly route. Among semiconductor photocatalysts, tantalates such as potassium tantalate (KTaO3) are excellent functional photomaterials. However, tantalate-based materials are less active under visible-light irradiation; their photoactivity can be enhanced by modifying the opto-electronic properties of KTaO3 through doping with a rare earth metal (Er) and further photodeposition of noble metal nanoparticles (Pt). Inclusion of a rare earth element in the orthorhombic structure of the tantalate can generate one high-energy photon by absorbing two or more incident low-energy photons, converting visible and infrared light into ultraviolet light to satisfy the requirement of KTaO3 photocatalysts. On the other hand, noble metal nanoparticles deposited on the surface of the semiconductor strongly absorb visible light due to their surface plasmon resonance, in which their conduction electrons undergo a collective oscillation induced by the electric field of visible light. Furthermore, the high dispersion of Pt nanoparticles obtained by the photodeposition process is an additional important factor for improving photocatalytic activity. The present work aims to study the photocatalytic behavior of the prepared Er-doped KTaO3 and the effect of further incorporation of Pt nanoparticles by photodeposition. Moreover, the research also examines correlations between photocatalytic activity and the physico-chemical properties of the obtained Pt/Er-KTaO3 samples. The Er-doped KTaO3 microcomposites were synthesized by a hydrothermal method. The photodeposition method was then used for Pt loading over Er-KTaO3. The structural and optical properties of the Pt/Er-KTaO3 photocatalysts were characterized using scanning electron microscopy (SEM), X-ray diffraction (XRD), the volumetric adsorption method (BET), UV-Vis absorption measurements, Raman spectroscopy and luminescence spectroscopy. The photocatalytic properties of the Pt/Er-KTaO3 microcomposites were investigated by degradation of phenol in the aqueous phase as a model pollutant under visible and ultraviolet light irradiation. The results of this work show that all the prepared photocatalysts exhibit a low BET surface area, although doping of the bare KTaO3 with the rare earth element (Er) produces a slight increase in this value. The crystalline structure of the Pt/Er-KTaO3 powders exhibited nearly identical positions for the main peak at about 22.8°, and the XRD pattern could be assigned to an orthorhombically distorted perovskite structure. The Raman spectra of the obtained semiconductors confirmed the perovskite-like structure. The optical absorption spectra of the Pt nanoparticles exhibited plasmon absorption bands with main peaks at about 216 and 264 nm. The addition of Pt nanoparticles increased photoactivity compared to Er-KTaO3 and pure KTaO3. In summary, the optical properties of KTaO3 change upon Er doping and further photodeposition of Pt nanoparticles.
Keywords: heterogeneous photocatalysis, KTaO3 photocatalysts, Er3+ ion doping, Pt photodeposition
Procedia PDF Downloads 360
6715 Chaotic Electronic System with Lambda Diode
Authors: George Mahalu
Abstract:
The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers or devices with gas or semiconductors. When semiconductor devices are discussed, tunnel diodes (Esaki diodes) are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here uses in its modeling a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaotic behavior. The deterministic chaotic oscillator is of the autonomous type, a fact that places it in the class of Chua-type oscillators, the only significant and most important difference being the presence of a nonlinear device such as the structure mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation and the conclusions drawn are useful in further research on ways to implement such electronic solutions in theoretical and practical applications related to modern small-signal amplification structures, to systems for encoding and decoding messages through various modern means of communication, as well as to new structures that can be imagined both in modern neural networks and in the physical implementation of requirements imposed by current research, with the aim of obtaining practically usable solutions in quantum computing and quantum computers.
Keywords: Chua, diode, memristor, chaos
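As a point of reference for the Chua-type dynamics discussed above, the following sketch integrates the classic dimensionless Chua equations, with the textbook piecewise-linear nonlinearity standing in for the lambda-diode characteristic; the parameter values are the standard double-scroll set, not values taken from the MULTISIM model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic dimensionless Chua oscillator; the piecewise-linear f(x) plays the
# role of the nonlinear (lambda-diode-like) element. Parameters are the
# standard double-scroll values, assumed here for illustration only.
alpha, beta = 15.6, 28.0
m0, m1 = -1.143, -0.714

def f(x):
    # Piecewise-linear "Chua diode" characteristic
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def chua(t, s):
    x, y, z = s
    return [alpha * (y - x - f(x)), x - y + z, -beta * y]

sol = solve_ivp(chua, (0, 200), [0.1, 0.0, 0.0], max_step=0.01)

# Hypersensitivity check: a tiny perturbation of the initial state diverges over time
sol2 = solve_ivp(chua, (0, 200), [0.1 + 1e-6, 0.0, 0.0], max_step=0.01)
print("final-state difference:", abs(sol.y[0, -1] - sol2.y[0, -1]))
```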
Procedia PDF Downloads 88
6714 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine
Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski
Abstract:
Research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, most of them remain at a pre-war level of knowledge regarding the theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research on innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality compared to today's spark plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine's capability for efficient combustion of ecological fuels. New, redundant elements of the control system can improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) with respect to determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (every 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (with different discharge energies) were studied. The analysis aimed at the optimization of the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption alone; thus, including spark management in the optimization is mandatory to significantly decrease fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization
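The optimal ignition angle at a given operating point can be located by sweeping the spark advance and fitting the resulting torque curve; the sketch below does this on a synthetic quadratic torque model, since the actual engine maps from the study are not available here. The torque model, noise level and sweep range are assumptions.

```python
import numpy as np

# Synthetic torque-vs-spark-advance curve for one operating point (assumed shape):
# torque peaks at the (here, known) maximum-torque angle of 26 deg BTDC.
rng = np.random.default_rng(0)
true_mbt = 26.0
advance = np.arange(10.0, 40.0, 2.0)                       # swept ignition angles [deg BTDC]
torque = 900.0 - 0.8 * (advance - true_mbt) ** 2           # illustrative torque [N*m]
torque += rng.normal(scale=2.0, size=advance.size)         # measurement noise

# Fit a parabola and take its vertex as the estimated optimal ignition angle
a, b, c = np.polyfit(advance, torque, 2)
mbt_estimate = -b / (2.0 * a)
print(f"estimated maximum-torque ignition angle: {mbt_estimate:.1f} deg BTDC")
```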
Procedia PDF Downloads 386
6713 pH-Responsive Carrier Based on Polymer Particle
Authors: Florin G. Borcan, Ramona C. Albulescu, Adela Chirita-Emandi
Abstract:
pH-responsive drug delivery systems are gaining importance because they deliver the drug at a specific time according to pathophysiological necessity, resulting in improved therapeutic efficacy and patient compliance. Polyurethane materials are well known for industrial applications (elastomers and foams used in various insulation and automotive products), but they are also versatile biocompatible materials with many applications in medicine, such as artificial skin for premature neonates, membranes in the hybrid artificial pancreas, prosthetic heart valves, etc. This study aimed to obtain the physico-chemical characterization of a drug delivery system based on polyurethane microparticles. The synthesis is based on a polyaddition reaction between an aqueous phase (a mixture of polyethylene glycol M = 200, 1,4-butanediol and Tween® 20) and an organic phase (lysine diisocyanate in acetone), combined with simultaneous emulsification. Different active agents (omeprazole, amoxicillin, metoclopramide) were used to verify the release profile of the macromolecular particles in media of different pH. Zetasizer measurements were performed using an instrument based on two modules, a Vasco size analyzer and a Wallis zeta potential analyzer (Cordouan Technol., France), on samples that were kept in various solutions with different pH, and the maximum absorbances in the UV-Vis spectra were collected on a UVi Line 9400 spectrophotometer (SI Analytics, Germany). The results of this investigation reveal that these particles are suitable for prolonged release in gastric medium, where they can assure an almost constant concentration of the active agents for 1-2 weeks, while they disassemble faster in media with neutral pH, such as intestinal fluid.
Keywords: lysine diisocyanate, nanostructures, polyurethane, Zetasizer
Procedia PDF Downloads 184
6712 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction
Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl
Abstract:
Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques with the potential to be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate these with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a greater and deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ Prompt Gamma Activation Analysis (PGAA) experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For real in situ TEM, we used a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH). The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas Cl correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase in Cl uptake was found. After considering all previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalyzed reaction is proposed. The chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step of the catalyzed reaction. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.
Keywords: CeO2, Deacon process, in situ PGAA, in situ TEM, in situ FTIR
Procedia PDF Downloads 291
6711 Towards Renewable Energy: A Qualitative Study of Biofuel Development Policy in Indonesia
Authors: Arie Yanwar Kapriadi
Abstract:
This research aims to develop a deeper understanding of the scales of power that shape biofuel policy. The research is important for the following reasons. Firstly, it will enrich the body of literature within the fields of political ecology, scale and environmental governance. Secondly, by focusing on energy transition policies, it offers a critical perspective on how government policy aimed at delivering low-carbon sustainable energy systems is scaled and implemented through multiple stakeholders. Finally, the research could help the government of Indonesia as a policy evaluation of how low-carbon sustainable energy systems planned at the macro level may fail to be delivered at other scales and instead be perceived differently by different stakeholders. A qualitative method is applied, particularly in-depth interviews with government officials as well as policy stakeholders outside of government and people in positions of responsibility with regard to policy delivery. There are four field study locations where interviews took place, as well as site visits to some biofuel refining facilities. Several major companies involved in the production and distribution of biofuel, and their relation to the biofuel feedstock industry, served as sources of data. The research investigates how government biofuel policies correlate with other policy issues, such as land reclassification and carbon emission reduction, which also influence plantation expansion as well as its impact on local people. The preliminary results show that tensions of power between governing authorities caused Indonesian biofuel policy to become unfocused, which led to failure to meet its mandatory blending target despite the abundance of feedstock.
Keywords: biofuel, energy transition, renewable energy, political ecology
Procedia PDF Downloads 197
6710 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon whose meanings and definitions have evolved over the decades. This became the trigger to start a project answering the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group is people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, an asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during the data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained from 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research is based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software, a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group, was used to collect and analyse the research material. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified:
- the course of interaction with significant relatives,
- the expectation of identification with disability (expressed by close relatives),
- economic profitability (pension, allowances),
- institutional advantages (e.g. a parking card),
- independence and autonomy (not equated with physical condition, but with access to adapted infrastructure and resources to support daily functioning),
- the way a person with MS construes the meaning of disability,
- physical and mental state,
- the medical diagnosis of the illness.
In addition, it has been shown that making an assumption about the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS construct a social identity as a person with a disability (e.g. giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not influenced by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which in light of the research is a set of different phenomena of heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis.
Keywords: disability, multiple sclerosis, grounded theory, Poland
Procedia PDF Downloads 106
6709 Collaborative Governance in Dutch Flood Risk Management: An Historical Analysis
Authors: Emma Avoyan
Abstract:
The safety standards for flood protection in the Netherlands have recently been revised. It is expected that all major flood-protection structures will have to be reinforced to meet the new standards. The Dutch Flood Protection Programme aims to accomplish this task through innovative integrated projects such as the construction of multi-functional flood defenses. In these projects, flood safety purposes will be combined with spatial planning, nature development, emergency management or other sectoral objectives. Therefore, implementation of dike reinforcement projects requires early involvement and collaboration between the public and private sectors and different governmental actors and agencies. The development and implementation of such integrated projects has long been an issue in Dutch flood risk management. Therefore, this article analyses how cross-sector collaboration within flood risk governance in the Netherlands has evolved over time, and how this development can be explained. The integrative framework for collaborative governance is applied as an analytical tool to map the external factors framing possibilities as well as constraints for cross-sector collaboration in the Dutch flood risk domain. Supported by an extensive document and literature analysis, the paper offers insights into how the system context and different drivers changing over time either promoted or hindered cross-sector collaboration between the flood protection sector, urban development, nature conservation or any other sector involved in flood risk governance. The system context refers to the multi-layered and interrelated suite of conditions that influence the formation and performance of complex governance systems, such as collaborative governance regimes, whereas the drivers initiate and enable the overall process of collaboration. In addition, by applying a process-tracing method, we identify a causal and chronological chain of events shaping cross-sectoral interaction in Dutch flood risk management. Our results indicate that in order to evaluate the performance of complex governance systems, it is important to first study the system context that shapes them. A clear understanding of the system conditions and drivers for collaboration gives insight into the possibilities of and constraints for effective performance of complex governance systems. The performance of the governance system is affected by the system conditions, while at the same time the governance system can also change the system conditions. Our results show that the sequence of changes within the system conditions and drivers over time affects how cross-sector interaction in the Dutch flood risk governance system happens now. Moreover, we have traced the potential of this governance system to shape and change the system context.
Keywords: collaborative governance, cross-sector interaction, flood risk management, the Netherlands
Procedia PDF Downloads 130
6708 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction systems. As applications using face recognition are of great importance and demand more accuracy, more robustness is expected from the face recognition system with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to minimize the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
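A minimal sketch of the pipeline shape described above is given below, using a hand-built Gabor filter bank, standard LDA as a stand-in for the 2D-LDA variants used in the paper, and a k-NN classifier. The dataset (scikit-learn's Olivetti faces, downloaded on first use), the filter parameters and the downsampling factor are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def gabor_kernel(sigma=2.0, theta=0.0, lam=6.0, gamma=0.5, size=15):
    # Real part of a Gabor filter: Gaussian envelope times a cosine carrier
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), lams=(4.0, 8.0)):
    # Magnitude responses of a small filter bank, downsampled and concatenated
    feats = []
    for theta in thetas:
        for lam in lams:
            resp = fftconvolve(img, gabor_kernel(theta=theta, lam=lam), mode="same")
            feats.append(np.abs(resp[::4, ::4]).ravel())   # coarse downsampling
    return np.concatenate(feats)

faces = fetch_olivetti_faces()                              # 400 images, 40 subjects
X = np.array([gabor_features(img) for img in faces.images])
X_tr, X_te, y_tr, y_te = train_test_split(X, faces.target, test_size=0.25,
                                          stratify=faces.target, random_state=0)

lda = LinearDiscriminantAnalysis()                          # subspace projection
knn = KNeighborsClassifier(n_neighbors=1)                   # nearest-neighbour matching
knn.fit(lda.fit_transform(X_tr, y_tr), y_tr)
print("recognition rate:", knn.score(lda.transform(X_te), y_te))
```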
Procedia PDF Downloads 467
6707 Health Care using Queuing Theory
Authors: S. Vadivukkarasi, K. Karthi, M. Karthick, C. Dinesh, S. Santhosh, A. Yogaraj
Abstract:
The appointment system was designed to minimize patients' idle time while overlooking patients' waiting time in hospitals. This is no longer valid in today's consumer-oriented society. Long waiting times for treatment in the outpatient department followed by short consultations have long been a complaint. Nowadays, customers use waiting time as a decisive factor in choosing a service provider. Queuing theory constitutes a very powerful tool because queuing models require relatively little data and are simple and fast to use. Because of this simplicity and speed, modelers can quickly evaluate and compare various alternatives for providing service. The application of queuing models in the analysis of health care systems is increasingly accepted by health care decision makers. Timely access to care is a key component of high-quality health care. However, patient delays are prevalent throughout health care systems, resulting in dissatisfaction and adverse clinical consequences for patients as well as potentially higher costs and wasted capacity for providers. Arguably, the most critical delays in health care are the ones associated with health care emergencies. The allocation of resources can be divided into three general areas: bed management, staff management, and room facility management. Effective and efficient patient flow is indicated by high patient throughput, low patient waiting times, a short length of stay at the hospital, and low overtime, while simultaneously maintaining adequate staff utilization rates and low patient idle times.
Keywords: appointment system, patient scheduling, bed management, queueing calculation, system analysis
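As an example of the "little data" that queuing models need, the following sketch computes the expected waiting time in an M/M/c clinic model using the Erlang C formula; the arrival rate, service rate and number of servers are illustrative assumptions, not figures from the study.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Expected waiting time in queue (Wq) for an M/M/c system.

    lam: arrival rate (patients per hour)
    mu:  service rate per server (patients per hour)
    c:   number of servers (e.g. consultation rooms or doctors)
    """
    a = lam / mu                     # offered load
    rho = a / c                      # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("system is unstable (utilization >= 1)")
    # Erlang C: probability that an arriving patient has to wait
    p_wait = (a**c / (factorial(c) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho))
    )
    wq = p_wait / (c * mu - lam)     # mean waiting time in the queue
    return p_wait, wq

# Illustrative clinic: 18 patients/hour arriving, each doctor sees 5/hour, 4 doctors
p_wait, wq = erlang_c_wait(lam=18.0, mu=5.0, c=4)
print(f"P(wait) = {p_wait:.2f}, mean wait = {60 * wq:.1f} minutes")
```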
Procedia PDF Downloads 300
6706 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has placed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. Various parking scenarios were meticulously designed, involving manual measurements of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
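To make the classification step concrete, the sketch below trains an SVM to label parking spots as vacant or occupied from per-spot radar features (detection count, mean radar cross-section, mean range). The synthetic feature generator only mimics the kind of attributes listed above and is not the monoDrive dataset; the distributions are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def synth_spot(occupied, n=4000):
    # Per-spot features: [number of detections, mean RCS (dBsm), mean range (m)]
    # Occupied spots are assumed to return more detections and higher RCS.
    if occupied:
        return np.column_stack([rng.poisson(12, n), rng.normal(5, 3, n), rng.uniform(2, 30, n)])
    return np.column_stack([rng.poisson(2, n), rng.normal(-8, 3, n), rng.uniform(2, 30, n)])

X = np.vstack([synth_spot(True), synth_spot(False)])
y = np.concatenate([np.ones(4000), np.zeros(4000)])        # 1 = occupied, 0 = vacant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("occupancy detection accuracy:", model.score(X_te, y_te))
```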
Procedia PDF Downloads 81
6705 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphics Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPUs for High Performance Computing (HPC) can be categorized as hardware- and software-oriented in nature. Understanding how these factors affect performance is essential to develop efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as the computational core, the characteristics and results of which reflect many other HPC applications via the sparse matrix system used for the solution of the linear system of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications that include current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
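The computational core mentioned above reduces to solving a large sparse linear system. The sketch below assembles a simple 1D finite-difference stand-in for a finite element stiffness matrix and solves it with a conjugate-gradient iteration on the CPU; the same sparse matrix-vector pattern is what GPU libraries accelerate. The matrix, size and right-hand side are placeholders, not the resin-infusion model.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Assemble a sparse SPD system K x = f (1D Laplacian stand-in for an FE stiffness matrix)
n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
f = np.ones(n)

# Iterative solve: the dominant kernel is the sparse matrix-vector product,
# which is the operation typically offloaded to one or more GPUs in HPC codes.
x, info = cg(K, f, maxiter=10000)
print("converged" if info == 0 else f"not converged (info={info})",
      "| residual norm:", np.linalg.norm(K @ x - f))
```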
Procedia PDF Downloads 363
6704 A “Best Practice” Model for Physical Education in the BRICS Countries
Authors: Vasti Oelofse, Niekie van der Merwe, Dorita du Toit
Abstract:
This study addresses the need for a unified best practice model for Physical Education across BRICS nations, as current research primarily offers individual country recommendations. Drawing on relevant literature within the framework of Bronfenbrenner’s Ecological Systems Theory, as well as data from open-ended questionnaires completed by Physical Education experts from the BRICS countries, the study develops a best practice model based on identified challenges and effective practices in Physical Education. A model is proposed that incorporates flexible and resource-efficient strategies tailored to address PE challenges specific to these countries, enhancing outcomes for learners, empowering teachers, and fostering systemic collaboration among BRICS members. The proposed model comprises six key areas: “Curriculum and policy requirements”, “General approach”, “Theoretical basis”, “Strategies for presenting content”, “Teacher training”, and “Evaluation”. The “Strategies for presenting program content” area addresses both well-resourced and poorly resourced schools, adapting the curriculum, teaching strategies, materials, and learner activities for varied socio-economic contexts. The model emphasizes a holistic approach to learner development, engaging environments, and continuous teacher training. A collaborative approach among BRICS countries, focusing on shared best practices and continuous improvement, is vital for the model's successful implementation, enhancing Physical Education programs and outcomes across these nations.
Keywords: BRICS countries, physical education, best practice model, ecological systems theory
Procedia PDF Downloads 12
6703 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs
Authors: Johannes Mertl
Abstract:
Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms or plants, so-called bioaerosols, are also floating through the air. The group of bioaerosols consists of a broad spectrum of particles of different sizes, including fungi, bacteria, viruses, spores, and tree, flower and grass pollen that are of high relevance for allergy sufferers. Depending on the environmental climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled by humans via the respiratory tract, with a potential for inflammatory diseases of the airways, such as asthma or allergic rhinitis. As a consequence, air filter systems of ventilation and air-conditioning devices are required to meet very high standards to prevent, or at least reduce, the number of allergens and airborne germs entering the indoor air. Still, filter systems are merely classified by their separation rates using well-defined mineral test dust, while no appropriately standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency of particularly fine and respirable particles (< 10 microns) depends not only on their shape and particle diameter, but is also defined by their density and physicochemical properties. For this reason, the OFI developed a test method which directly enables the testing of filters and filter media for their separation rates of bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree or grass pollen, allergenic proteins bound to particulates, as well as allergenic fungal spores (e.g. Cladosporium cladosporioides) or bacteria, can be used to classify filters regarding their separation rates. Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow for the detection of even a single spore passing the filter. The test procedure, which is carried out at laboratory scale, was furthermore validated regarding its ability to cover real-life situations by upscaling using air-conditioning devices, showing great conformity in terms of separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air-conditioning filters from the car industry have been tested, showing significant differences in their separation rates.
Keywords: airborne germs, allergens, classification of filters, fine dust
Procedia PDF Downloads 253
6702 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but vulnerabilities and security threats have simultaneously increased significantly. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
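As a small illustration of the machine-learning-based intrusion detection component mentioned above, the following sketch trains a Random Forest on synthetic network-event features and reports the accuracy, sensitivity, and specificity indicators. The data generator and class balance are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled network events (1 = intrusion, 0 = benign)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # detection rate for intrusions
specificity = tn / (tn + fp)          # correct pass-through of benign traffic
print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```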
Procedia PDF Downloads 30
6701 A Temporary Shelter Proposal for Displaced People
Authors: İrem Yetkin, Feray Maden, Seda Tosun, Yenal Akgün, Özgür Kilit, Koray Korkmaz, Gökhan Kiper, Mustafa Gündüzalp
Abstract:
Forced migration, whether caused by conflicts or other factors, frequently places individuals in vulnerable situations, necessitating immediate access to shelter. To promptly address the immediate needs of affected individuals, temporary shelters are often established. These shelters are characterized by their adaptable and functional nature, encompassing lightweight and sustainable structural systems, rapid assembly capabilities, modularity, and transportability. Shelter design is contingent upon demand, resulting in distinct phases with different structural forms. A multi-phased shelter approach covers emergency response, temporary shelter, and permanent reconstruction. Emergency shelters play a critical role in providing immediate life-saving aid, while temporary and transitional shelters, also called “t-shelters,” offer longer-term living environments during the recovery and rebuilding phases. Among these, temporary shelters are more extensively covered in the literature due to their diverse inhabiting functions. The roles of emergency shelters and temporary shelters are inherently separate, addressing distinct aspects of the sheltering process. Given their prolonged usage, temporary shelters are built for greater durability than emergency shelters. Nonetheless, inadequacies in temporary shelters can lead to challenges in ensuring habitability. Issues such as non-expandable structures unsuitable for accommodating large families, the use of short-term shelters that worsen conditions, non-waterproof materials providing insufficient protection against bad weather, and complex installation systems contribute to these problems. Given the aforementioned problems, there arises a need to develop adaptive shelters that feature lightweight components for ease of transport, can be rapidly assembled, and use durable materials to withstand adverse weather conditions. In this study, first, the state of the art on temporary shelters is presented. Then, an adaptive temporary shelter composed of foldable plates is proposed, which can easily be assembled and transported. The proposed shelter is discussed with respect to its movement capacity, transportability, and flexibility. This study makes a valuable contribution to the literature since it not only offers a systematic analysis of temporary shelters utilizing kinetic systems but also presents a practical solution that meets the necessary design requirements.
Keywords: deployable structures, foldable plates, forced migration, temporary shelters
Procedia PDF Downloads 72
6700 Optimal Construction Using Multi-Criteria Decision-Making Methods
Authors: Masood Karamoozian, Zhang Hong
Abstract:
The necessity and complexity of the decision-making process, the interference of various factors in making decisions, and the need to consider all the relevant factors in a problem are very obvious nowadays. Hence, researchers show interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making methods have been used to solve the problem of selecting an optimal construction system. The systems evaluated in this problem include: Light Steel Frames (LSF), a case study of designs by the Zhang Hong studio at Southeast University, Nanjing; Insulating Concrete Form (ICF); the Ordinary Construction System (OCS); and the Prefabricated Concrete System (PRCS), another case study of designs by the Zhang Hong studio at Southeast University, Nanjing. Crowdsourcing was done using a questionnaire at the sample level (200 people). Questionnaires were distributed among experts, university centers, and conferences. According to the results of the research, the use of different decision-making methods led to relatively similar results. With all three multi-criteria decision-making methods mentioned above, the Prefabricated Concrete System (PRCS) ranked first, and the Light Steel Frame (LSF) system ranked second. Also, the Prefabricated Concrete System (PRCS) ranked first in terms of performance standards and economics, while the Light Steel Frame (LSF) system ranked first in terms of environmental standards.
Keywords: multi-criteria decision making, AHP, SAW, TOPSIS
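For readers unfamiliar with the ranking step, the sketch below implements a plain TOPSIS ranking over a small decision matrix; the alternative scores and criterion weights are made-up placeholders, not the questionnaire results of this study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  (alternatives x criteria) raw scores
    weights: criterion weights, summing to 1
    benefit: True for benefit criteria (higher is better), False for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    # 1. Vector-normalize each criterion column and apply the weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    # 2. Ideal and anti-ideal solutions per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Distances to both and the relative closeness coefficient
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Illustrative scores for [performance, economy, environment] per system
alternatives = ["LSF", "ICF", "OCS", "PRCS"]
scores = [[7, 6, 9], [6, 7, 6], [5, 8, 4], [9, 8, 7]]
closeness = topsis(scores, weights=[0.4, 0.35, 0.25], benefit=np.array([True, True, True]))
for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: closeness = {c:.3f}")
```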
Procedia PDF Downloads 110
6699 Nanocomposites Based Micro/Nano Electro-Mechanical Systems for Energy Harvesters and Photodetectors
Authors: Radhamanohar Aepuru, R. V. Mangalaraja
Abstract:
Flexible electronic devices have drawn considerable interest and provide significant new insights for developing energy conversion and storage devices such as photodetectors and nanogenerators. Recently, self-powered electronic systems have attracted huge attention for next-generation MEMS/NEMS devices that can operate independently by generating a built-in field without any need for an external bias voltage and that have a wide variety of applications in the telecommunication, imaging, environmental and defence sectors. The basic physical processes involved in these devices are charge generation, separation, and charge flow across the electrodes. Many inorganic nanostructures have been explored to fabricate various optoelectronic and electromechanical devices. However, the interaction of nanostructures and their excited charge carrier dynamics, photoinduced charge separation, and fast carrier mobility are yet to be studied. The proposed research addresses one such area with the goal of realizing self-powered electronic devices. In the present work, nanocomposites of inorganic nanostructures based on ZnO and metal halide perovskites, together with polyvinylidene fluoride (PVDF) based nanocomposites, are realized for photodetectors and nanogenerators. The characterization of the inorganic nanostructures is carried out through steady-state optical absorption and luminescence spectroscopies as well as X-ray diffraction and high-resolution transmission electron microscopy (TEM) studies. The detailed carrier dynamics are investigated using various spectroscopic techniques. The developed composite nanostructures exhibit significant optical and electrical properties, which have wide potential applications in various MEMS/NEMS devices such as photodetectors and nanogenerators.
Keywords: dielectrics, nanocomposites, nanogenerators, photodetectors
Procedia PDF Downloads 129
6698 Studying the Establishment of Knowledge Management Background Factors at Islamic Azad University, Behshahr Branch
Authors: Mohammad Reza Bagherzadeh, Mohammad Hossein Taheri
Abstract:
Knowledge management serves as one of the great breakthroughs of the information and knowledge era, and given its outstanding features, successful organizations tend to adopt it. Therefore, dealing with knowledge management establishment in universities is of special importance. In this regard, the present research aims to shed light on the background factors of knowledge management establishment at Islamic Azad University, Behshahr Branch (Northern Iran). Considering three factors (the information technology system, the knowledge process system, and organizational culture) as fundamentals of the knowledge management infrastructure, each factor was evaluated individually. The present research was conducted in a descriptive-survey manner, and participants included all staff and faculty members, with a sample size proportional to the population size determined according to the Krejcie and Morgan table. The measurement tool was a survey questionnaire whose reliability was calculated at 0.83 according to Cronbach's alpha. For data analysis, descriptive statistics such as frequency tables and percentages, column charts, means and standard deviations were used, and for inferential statistics the Kolmogorov-Smirnov test and the one-sample t-test were used. The findings show that despite the good corporate culture, one of the three factors underlying the establishment of knowledge management at Islamic Azad University, Behshahr Branch, the other two factors, the IT systems and the knowledge process systems, are in an adverse state. As a result, these factors have not provided the necessary conditions for the establishment of knowledge management at the university.
Keywords: knowledge management, information technology, knowledge processes, organizational culture, educational institutions
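The reliability and inferential statistics mentioned above are straightforward to reproduce; the sketch below computes Cronbach's alpha for a synthetic item matrix and runs a Kolmogorov-Smirnov normality check and a one-sample t-test against a scale midpoint. The synthetic responses and the midpoint of 3 are assumptions for illustration only, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic Likert-style responses: 120 respondents x 8 questionnaire items
items = np.clip(np.round(rng.normal(3.4, 0.9, size=(120, 8))), 1, 5)

def cronbach_alpha(item_scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

totals = items.mean(axis=1)
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
# Kolmogorov-Smirnov check against a fitted normal, then a one-sample t-test vs. midpoint 3
print("K-S p-value:", stats.kstest(totals, "norm", args=(totals.mean(), totals.std(ddof=1))).pvalue)
print("one-sample t-test vs 3:", stats.ttest_1samp(totals, 3.0))
```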
Procedia PDF Downloads 521
6697 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage
Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy
Abstract:
The availability of a wide variety of renewable resources in Canada, such as large reserves of hydro, biomass, solar and wind, provides significant potential to improve the sustainability of energy use. As buildings represent a considerable portion of energy use in Canada, the application of distributed solar energy systems for heating and cooling may increase the share of renewable energy. Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production at southern latitudes. Heat production by concentrating solar rays using parabolic troughs can overcome the poor efficiencies of flat panels and evacuated tubes in cold climates. A numerical dynamic model is developed to simulate an installed parabolic solar trough facility in Winnipeg. The results of the numerical model are validated using the experimental data obtained from this system. The model is developed in Simulink and will be utilized to simulate a tri-generation system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data for solar troughs in cold climates, as such data are lacking in the literature. In this paper, the validated Simulink model is applied to simulate a solar-assisted absorption cooling system along with electricity generation using an organic Rankine cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the above three systems, considering the temperature requirements. This modeling provides dynamic performance results using real-time, minutely meteorological data collected at the same location where the solar system is installed. This is a big step ahead of current models, as it accurately calculates the available solar energy at each time step, considering the solar radiation fluctuations due to passing clouds. The solar absorption cooling system is modeled to use the heat generated by the solar trough system and provide cooling in summer for a greenhouse located next to the solar field. A natural gas water heater provides the required excess heat for the absorption cooling during periods of low or no solar radiation. The results of the simulation are presented for a summer month in Winnipeg, including the amount of electric power generated by the ORC and the contribution of solar energy to the cooling load provision.
Keywords: absorption cooling, parabolic solar trough, remote community, validated model
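The control strategy described above can be sketched as a simple rule-based dispatch of collector heat among the absorption chiller, the ORC and thermal storage, with a gas backup when solar heat is insufficient; the thresholds, demand levels and the toy minute-level irradiance profile below are illustrative assumptions, not parameters of the validated Simulink model.

```python
import numpy as np

# Minute-level toy profile of collected solar heat for one day (kW)
minutes = np.arange(24 * 60)
q_solar = np.clip(60 * np.sin(np.pi * (minutes - 6 * 60) / (12 * 60)), 0, None)
q_solar *= np.random.default_rng(3).uniform(0.6, 1.0, minutes.size)   # passing clouds

cooling_demand_kw = 35.0      # absorption chiller heat demand (assumed constant)
orc_min_heat_kw = 15.0        # minimum surplus heat for the ORC to run (assumed)
storage_kwh, storage_cap = 0.0, 200.0
gas_backup_kwh = orc_kwh = 0.0

for q in q_solar:
    # Priority 1: absorption cooling, topped up from storage, then gas backup
    to_chiller = min(q, cooling_demand_kw)
    shortfall = cooling_demand_kw - to_chiller
    from_storage = min(shortfall, storage_kwh * 60)          # kW available this minute
    storage_kwh -= from_storage / 60
    gas_backup_kwh += (shortfall - from_storage) / 60
    # Priority 2: surplus to the ORC if above its minimum, otherwise to storage
    surplus = q - to_chiller
    if surplus >= orc_min_heat_kw:
        orc_kwh += surplus / 60
    else:
        storage_kwh = min(storage_cap, storage_kwh + surplus / 60)

print(f"heat to ORC: {orc_kwh:.0f} kWh, gas backup: {gas_backup_kwh:.0f} kWh, "
      f"storage at end of day: {storage_kwh:.0f} kWh")
```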
Procedia PDF Downloads 216
6696 Measurements and Predictions of Hydrates of CO₂-rich Gas Mixture in Equilibrium with Multicomponent Salt Solutions
Authors: Abdullahi Jibril, Rod Burgass, Antonin Chapoy
Abstract:
Carbon dioxide (CO₂) is widely used in reservoirs to enhance oil and gas production, mixing with natural gas and other impurities in the process. However, hydrate formation frequently hinders the efficiency of CO₂-based enhanced oil recovery, causing pipeline blockages and pressure build-ups. Current hydrate prediction methods are primarily designed for gas mixtures with low CO₂ content and struggle to accurately predict hydrate formation in CO₂-rich streams in equilibrium with salt solutions. Given that oil and gas reservoirs are saline, experimental data for CO₂-rich streams in equilibrium with salt solutions are essential to improve these predictive models. This study investigates the inhibition of hydrate formation in a CO₂-rich gas mixture (CO₂, CH₄, N₂, H₂ at 84.73/15/0.19/0.08 mol.%) using multicomponent salt solutions at concentrations of 2.4 wt.%, 13.65 wt.%, and 27.3 wt.%. The setup, test fluids, methodology, and results for hydrates formed in equilibrium with varying salt solution concentrations are presented. Measurements were conducted using an isochoric pressure-search method at pressures up to 45 MPa. Experimental data were compared with predictions from a thermodynamic model based on the Cubic-Plus-Association equation of state (EoS), while hydrate-forming conditions were modeled using the van der Waals and Platteeuw solid solution theory. Water activity was evaluated based on hydrate suppression temperature to assess consistency in the inhibited systems. Results indicate that hydrate stability is significantly influenced by inhibitor concentration, offering valuable guidelines for the design and operation of pipeline systems involved in offshore gas transport of CO₂-rich streams.
Keywords: CO₂-rich streams, hydrates, monoethylene glycol, phase equilibria
Procedia PDF Downloads 17
6695 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to emerging countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry adjust their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries blur as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. By including cyber-physical systems, communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. We analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream on Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices. Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
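The cluster analysis mentioned above can be illustrated with a short sketch: abstracts are vectorized with TF-IDF and grouped with k-means into clusters corresponding to the three research fields. The toy abstracts, the feature choice, and k = 3 are assumptions for illustration; the paper's actual corpus and procedure may differ.

```python
# Illustrative sketch of the kind of cluster analysis described above: grouping
# article abstracts into topical sub-clusters with TF-IDF features and k-means.
# The toy abstracts and the choice of k are assumptions for illustration only.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "mass customization of individualized products in smart factories",
    "customer-specific product configuration and individualized production",
    "end-to-end engineering along a virtual process chain with digital twins",
    "virtual commissioning and simulation of the engineering process chain",
    "collaborative production networks and inter-company data exchange",
    "supply chain integration in horizontal production networks",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Three clusters, matching the three research fields analysed in the paper.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, text in zip(km.labels_, abstracts):
    print(label, "-", text)
```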
Procedia PDF Downloads 277
6694 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific physical specimens, yet nomenclatural rules are used to provide labels describing these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the genus name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens of each are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see whether the use of these species' names, the linguistic representation of a physical entity, is conceptually plural or singular. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to establish whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names, and more general names as classes; this will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods from an alternative perspective. This research aims to provide evidence on the validity of current methods in knowledge representation for biological entities and to shed light on the way that scientific nomenclature is used within the literature. Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
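A minimal sketch of the collocation counting that underlies the corpus-based approach described above is given below: it counts the words co-occurring with a target species name within a fixed window. The example sentences, the window size, and the tokenizer are invented for illustration; a real study would run such counts over a large annotated biodiversity corpus.

```python
# Toy collocation counter: for a target species name, tally the words that
# appear within a fixed window around each occurrence across a small corpus.
# The sentences below are invented; window size and tokenizer are assumptions.

import re
from collections import Counter

def collocates(corpus, target, window=3):
    counts = Counter()
    for sentence in corpus:
        tokens = re.findall(r"[a-zA-Z]+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok == target.lower():
                left = tokens[max(0, i - window):i]
                right = tokens[i + 1:i + 1 + window]
                counts.update(left + right)
    return counts

corpus = [
    "The fern genus Melpomene grows as an epiphyte in montane forests.",
    "Melpomene is also the name of a genus of funnel-web spiders.",
    "Fronds of Melpomene flabelliformis were collected from the specimen.",
]

print(collocates(corpus, "Melpomene").most_common(5))
```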
Procedia PDF Downloads 137
6693 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico
Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba
Abstract:
In the last twenty years, academic and scientific literature has focused on understanding the processes and factors behind the vulnerability and resilience of coastal social-ecological systems. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework for the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals suffer less damage, while exposure sets the thresholds for the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of household adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of damage to capital, and households with absorptive and external response capacity elements absorbed the impact of floods better than households without these elements. In the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capital; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it can become a liability, increasing the probability that household finances take longer to recover. In the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of suffering less damage to physical capital; however, its effect is small. In conclusion, resilience is both an outcome and a core of attributes that interacts with vulnerability along the adaptive cycle and the phases of the disaster process. Absorptive capacity can diminish the damage caused by floods; however, when exposure exceeds certain thresholds, neither absorptive nor external response capacity is enough. In the same way, absorptive and external response capacity reduce the recovery time of capital, but the damage sets thresholds beyond which households are not capable of recovering their capital. Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems
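The ordinal logistic regression at the heart of the analysis can be sketched as follows, using simulated stand-in data: an ordered damage outcome is regressed on exposure and two capacity indicators. The variable names, the simulated data, and the use of statsmodels' OrderedModel are assumptions introduced here, not the authors' survey or estimation code.

```python
# Hedged sketch of an ordinal logistic regression of the kind used above: an
# ordered damage outcome regressed on exposure and capacity indicators. The data
# are simulated stand-ins, not the authors' household survey.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "exposure": rng.uniform(0, 1, n),            # e.g. flood depth index
    "absorptive_capacity": rng.uniform(0, 1, n), # household-level index
    "external_response": rng.integers(0, 2, n),  # received outside help (0/1)
})
# Latent damage rises with exposure and falls with the two capacities.
latent = (3 * df.exposure - 2 * df.absorptive_capacity
          - df.external_response + rng.logistic(size=n))
df["damage"] = pd.cut(latent, bins=[-np.inf, -0.5, 1.0, np.inf],
                      labels=["low", "medium", "high"], ordered=True)

model = OrderedModel(df["damage"],
                     df[["exposure", "absorptive_capacity", "external_response"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```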
Procedia PDF Downloads 133
6692 Servant Leadership for Elder Care in St. Camillus Health Systems, USA
Authors: Anthoni Jeorge
Abstract:
Throughout the history of the world, servant leadership has been researched, and favourable individual, team, and organizational outcomes have been linked to the construct. This research paper presents St. Camillus de Lellis, a practitioner of servant leadership and founder of the Ministers of the Sick, as a servant leader in his approach to care for the sick. Service is the visible face of his servant leadership. First, despite many challenges, St. Camillus de Lellis practiced leadership by the example of compassionate service to the sick. Second, he made service to the sick the highest priority of his life. Third, Camillus displayed servant leadership such that his manner of leadership gave birth to a New School of Service to the Sick. The paper identifies the distinctive dimensions and essential elements that characterized his service-centered leadership. Furthermore, it discusses the six major characteristics of a servant leader as set forth by St. Camillus's life example. The research illustrates the transformational power of servant leadership in the field of healthcare in general and, in doing so, provides servant leadership seekers with ways servant leadership can transform elder care in their own field (St. Camillus Health Systems). It thus ascertains that servant leadership is well suited to humanized elder care. Supported by the review of literature, the paper ascertains that Camillus, by identifying himself with the sick, gained deeper insights into the pain and suffering of this population. Uniquely drawn from his true grit, Camillus's service-centered leadership is value-based, people-oriented, and compassion-filled. His way of service to the sick is the prolongation of gestures of mercy and compassion. It is hoped that the results of this study will help healthcare workers and servant leadership practitioners to humanize elder care and cultivate a servant leadership attitude in their healthcare services to the sick. By incorporating such service-oriented elements into their leadership orientation, healthcare workers will become true servant leaders of the sick. Keywords: leadership, service, healthcare, compassion
Procedia PDF Downloads 164
6691 Improvement Plan for Integrity of Intensive Care Unit Patients Withdrawn from Life-Sustaining Medical Care
Authors: Shang-Sin Shiu, Shu-I Chin, Hsiu-Ju Chen, Ru-Yu Lien
Abstract:
The Hospice and Palliative Care Act has undergone three revisions, making it less challenging for terminal patients to have life support systems withdrawn. However, the adequacy of care before withdrawal is a crucial factor in end-of-life medical treatment. The authors observed that intensive care unit (ICU) nursing staff often rely on simple flowcharts or word of mouth, leading to inadequate preparation and failure to meet patient needs before withdrawal. This results in confusion or hesitation among those executing the process. Therefore, there is a motivation to improve withdrawal care processes, establish standardized procedures, ensure the accuracy of withdrawal execution, enhance nursing staff's self-efficacy in end-of-life care, and improve the overall quality of care. The investigation identified key issues: the lack of applicable guidelines for ICU care during withdrawal of life-sustaining treatment, insufficient education and training on withdrawal and end-of-life care, scattered locations of withdrawal-related tools, and inadequate self-efficacy in withdrawal care. Proposed solutions include revising withdrawal care processes and guidelines, integrating tools and their locations, conducting educational courses, and forming support groups. After the project implementation, the accuracy of withdrawal knowledge improved from 78% to 96.5%, self-efficacy in end-of-life care after withdrawal increased from 54.7% to 93.1%, and the correctness of care behavior progressed from 27.7% to 97.8%. It is recommended to regularly conduct courses on life support withdrawal care and grief consolation to enhance the quality of end-of-life care. Keywords: intensive care unit (ICU) patients, nursing staff, withdrawal of life support systems, self-efficacy
Procedia PDF Downloads 51
6690 A Multi-Cluster Enterprise Framework for Evolution of Knowledge System among Enterprises, Governments and Research Institutions
Authors: Sohail Ahmed, Ke Xing
Abstract:
This research theoretically explores the evolution mechanism of the enterprise technological innovation capability system (ETICS) from the perspective of complex adaptive systems (CAS). Starting from CAS theory, the study proposes an analytical framework for ETICS, its concepts and theory, by integrating CAS methodology into the management of enterprises' technological innovation capability, and discusses how to use the principles of complexity to analyze the composition, evolution, and realization of technological innovation capabilities in a complex, dynamic environment. The paper introduces the concept and interaction of multi-agents and the theoretical background of CAS, and summarizes the sources of technological innovation, the elements of each subject, and the main clusters of adaptive interactions and innovation activities. The concept of multi-agents is applied through the linkages of enterprises, research institutions, and government agencies with the leading enterprises in industrial settings. The study is exploratory and based on CAS theory. The theoretical model is built by considering the technological and innovation literature, from foundational works to state-of-the-art projects of technological enterprises. On this basis, the theoretical model is developed to measure the evolution mechanism of the enterprise technological innovation capability system. The paper concludes that the main characteristics of evolution in technological systems rest on the enterprise's research and development personnel; investments in technological processes and innovation resources are responsible for the evolution of enterprise technological innovation performance. The research specifically enriches the application of technological innovation in institutional networks related to enterprises. Keywords: complex adaptive system, echo model, enterprise knowledge system, research institutions, multi-agents
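As a purely illustrative toy, the multi-agent idea described above can be sketched as a small simulation in which enterprise, research-institution, and government agents exchange knowledge and enterprises accumulate capability through R&D. All interaction rules and numbers below are assumed; the paper itself develops a theoretical, not computational, model.

```python
# Toy illustration (not the paper's theoretical model) of the multi-agent idea:
# enterprise, research-institution, and government agents interact each period,
# and an enterprise's innovation capability grows with its R&D investment and
# with knowledge received from linked agents. All rules and numbers are assumed.

import random

class Agent:
    def __init__(self, name, kind, knowledge=1.0):
        self.name, self.kind, self.knowledge = name, kind, knowledge

def simulate(agents, links, periods=10, rd_rate=0.05, spillover=0.02, seed=1):
    random.seed(seed)
    for _ in range(periods):
        for a, b in links:                        # adaptive interactions
            transfer = spillover * min(a.knowledge, b.knowledge)
            a.knowledge += transfer
            b.knowledge += transfer
        for agent in agents:                      # internal R&D of enterprises
            if agent.kind == "enterprise":
                agent.knowledge *= 1 + rd_rate * random.uniform(0.5, 1.5)
    return {a.name: round(a.knowledge, 3) for a in agents}

firm = Agent("LeadFirm", "enterprise")
lab = Agent("ResearchInstitute", "research")
gov = Agent("Agency", "government")
links = [(firm, lab), (firm, gov), (lab, gov)]

print(simulate([firm, lab, gov], links))
```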
Procedia PDF Downloads 69
6689 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity
Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt
Abstract:
In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The use of farm-generated energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work within the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours on dairy farms. Based on heart activity, the aim is to determine whether stress on dairy cows increases under these circumstances. To simulate a power cut, 12 randomly selected cows from two herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured, and the collected data were evaluated with the HRV program Kubios version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF, and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, rumination activity, pedometer data, and measurements of cortisol metabolites. In conclusion, during the test only some animals showed minor stress symptoms when they tried to enter the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular 'time-dependent milking rejection' was just as high. The study therefore concludes that the low psychological stress level in the case of a 2-4 hour failure of an AMS does not have any impact on animal welfare and health. Keywords: dairy cow, heart activity, power cut, stable 4.0
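The time-domain and Poincaré HRV parameters listed above follow standard formulas, sketched below from a series of RR intervals. The study evaluated its data with the Kubios software; this snippet is only an illustration of the usual definitions, with invented RR values.

```python
# Sketch of the time-domain and Poincare HRV parameters named above (RMSSD,
# pNN50, SD1, SD2), computed from a series of RR intervals in milliseconds.
# The study used the Kubios software; this is only an illustration of the
# standard formulas, with invented RR data.

import numpy as np

def hrv_parameters(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = np.std(rr, ddof=1)                 # overall variability
    sdsd = np.std(diff, ddof=1)               # variability of successive diffs
    rmssd = np.sqrt(np.mean(diff ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)
    sd1 = np.sqrt(0.5) * sdsd                 # Poincare short-term axis
    sd2 = np.sqrt(max(2 * sdnn ** 2 - 0.5 * sdsd ** 2, 0.0))
    return {"HR": 60000.0 / rr.mean(), "RMSSD": rmssd,
            "pNN50": pnn50, "SD1": sd1, "SD2": sd2}

# Invented RR intervals (ms) for a resting cow, roughly 60-70 bpm.
rr_example = [900, 920, 880, 940, 910, 870, 930, 950, 890, 915]
print({k: round(v, 2) for k, v in hrv_parameters(rr_example).items()})
```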
Procedia PDF Downloads 311