Search results for: efficient service
866 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows
Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman
Abstract:
The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are among the easiest-to-use detection methods and techniques. A micro analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro analyzer based on the “Lab on a Chip” concept presents advantages over other, non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field that induces the fluid motion and either enhances or suppresses the mixing process. Electrokinetic flows are characterized by no fewer than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows at Rayleigh numbers higher than the critical value have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows.
Computational simulations were carried out using a finite element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were obtained. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer
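The stretching measure described above can be approximated outside a finite element code by advecting pairs of nearby massless tracers and recording the growth of their separation. The sketch below does this in a simple time-periodic model flow; the velocity field, time step, and seed point are illustrative stand-ins for the authors' simulated electrokinetic field, not their actual solution.

```python
import math

def velocity(x, y, t):
    # simple time-periodic model flow (a stand-in for the simulated
    # electrokinetic field; NOT the authors' FEM solution)
    return (math.sin(math.pi * y + 0.5 * math.sin(t)),
            0.3 * math.sin(math.pi * x))

def advect(p, t0, t1, dt=1e-3):
    # forward-Euler integration of a massless tracer from t0 to t1
    x, y = p
    t = t0
    while t < t1:
        u, v = velocity(x, y, t)
        x += u * dt
        y += v * dt
        t += dt
    return (x, y)

def stretching(p, t0, t1, d0=1e-6):
    # stretching = log of the growth of an infinitesimal material line,
    # estimated from a particle pair initially separated by d0
    a = advect(p, t0, t1)
    b = advect((p[0] + d0, p[1]), t0, t1)
    d1 = math.hypot(a[0] - b[0], a[1] - b[1])
    return math.log(d1 / d0)
```

Averaging this quantity over many seed points gives a scalar mixing measure that can be compared between subcritical and supercritical flows, in the same spirit as the particle-tracking post-processing described in the abstract.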
Procedia PDF Downloads 127
865 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance Based Design for near Field Earthquakes
Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi
Abstract:
Non-linear dynamic time-history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. Moreover, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models that can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation, and ‘pinching’ under cyclic load reversals in the inelastic range of behavior. In this scenario, “push-over” or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis, offering a practical and efficient alternative for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry-infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures implementing non-linear static (pushover) analysis in conjunction with the response spectrum concept.
An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance-based design of masonry-infilled R/C frames under near-field earthquake ground motions.
Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes
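At its core, the capacity spectrum procedure applied in the paper intersects a pushover-derived capacity curve with a damping-reduced demand spectrum, both expressed in acceleration-displacement (ADRS) format. The sketch below finds that performance point by bisection for a bilinear capacity curve; all numerical values (yield point, spectrum plateau, reduction factor) are illustrative assumptions, and the single constant reduction factor stands in for the iterative effective-damping procedure of methods such as ATC-40.

```python
def capacity(sd):
    # bilinear pushover capacity curve in ADRS format (illustrative values):
    # yield displacement (m), yield acceleration (g), post-yield slope ratio
    sd_y, sa_y, hardening = 0.05, 0.25, 0.5
    k = sa_y / sd_y
    return k * sd if sd <= sd_y else sa_y + hardening * k * (sd - sd_y)

def demand(sd):
    # elastic demand spectrum in ADRS format, scaled by a constant damping
    # reduction factor (a simplification of the iterative ATC-40 procedure)
    sa_plateau, sd_corner, reduction = 0.8, 0.08, 0.7
    sa = sa_plateau if sd <= sd_corner else sa_plateau * sd_corner / sd
    return reduction * sa

def performance_point(lo=1e-4, hi=0.5, tol=1e-6):
    # at small Sd demand exceeds capacity, at large Sd the reverse holds;
    # bisect to the single crossing (the performance point)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if demand(mid) > capacity(mid):
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return mid, capacity(mid)
```

With these made-up numbers the performance point lands at roughly Sd ≈ 0.11 m, Sa ≈ 0.40 g; in the study itself both curves would come from the pushover analysis and the near-field response spectra, respectively.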
Procedia PDF Downloads 405
864 Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac
Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim
Abstract:
An aphrodisiac is a substance contained in food or a drug that can arouse sexual instinct and increase pleasure; such substances are derived from plants, animals, and minerals. Consuming substances with aphrodisiac activity can improve sexual instinct and duration. Leonurine is a compound with aphrodisiac activity that can be isolated from plants of Leonurus sp., known among the Sundanese people as deundereman; this plant empirically has aphrodisiac activity, and isolation of its active compounds shows that it contains leonurine, which is therefore expected to account for that activity. Leonurine can be isolated from plants or synthesized chemically, starting from syringic acid. It can also be obtained commercially, and derivatives of the compound can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound by modifying the structure of the guanidino butyl ester group of leonurine with butylamine and bromoethanol. The ArgusLab program, version 4.0.1, was used to determine the binding energies, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo test of leonurine and its derivatives as aphrodisiac ingredients, together with testosterone hormone levels, used 27 male Wistar rats and 9 females of the same strain, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the type of compound and the dose given. Each treatment group was orally administered 2 ml per day for 5 days. On the sixth day, male rat sexual behavior was observed and blood was taken from the heart to measure testosterone levels using an ELISA technique.
Statistical analysis in this study consisted of ANOVA with the Least Significant Difference (LSD) test using the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated in silico and in vivo: in the in silico tests, the leonurine derivatives showed lower binding energies than leonurine, indicating better activity than the parent compound. In vivo testing with Wistar rats likewise showed that the leonurine derivatives performed better, so the in silico studies parallel the in vivo tests. Modification of the guanidino butyl ester group with butylamine and bromoethanol increased aphrodisiac activity compared with leonurine; testosterone levels with the leonurine derivatives rose significantly, especially for compound 1-RD at doses of 100 and 150 mg/kg body weight. The results show that leonurine and its derivatives possess aphrodisiac activity and increase the amount of testosterone in the blood. The test compounds used in this study act as steroid precursors, resulting in increased testosterone.
Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD
Procedia PDF Downloads 279
863 Body, Experience, Sense, and Place: Past and Present Sensory Mappings of Istiklal Street in Istanbul
Authors: Asiye Nisa Kartal
Abstract:
An attempt to recognize the undiscovered bonds between the sensory experiences (intangible qualities) and the physical setting (tangible qualities) of Istiklal Street in Istanbul could be taken as the first inspiration point for this study. The dramatic physical changes of Istiklal Street, and their current impacts on its sensory attributes, have directed this study to consider the role of the changing physical layout on sensory dimensions, which have a subtle but important role in the examination of urban places. Public places have always been subject to transformation, and in recent years the changing socio-cultural structure, economic and political movements, law and city regulations, and innovative transportation and communication activities have resulted in a controversial modification of Istanbul. As the culture, entertainment, tourism, and shopping focus of Istanbul, Istiklal Street has witnessed different stages of change within the last years. In this process, because of the projects being implemented, many buildings that have been significant elements of the qualitative value of this area, such as cinemas, theatres, and bookstores, have been restored, moved, converted, closed, or demolished. The multi-layered socio-cultural and architectural structure of Istiklal Street has thus been changing in a dramatic and controversial way. Importantly, while the physical setting of Istiklal Street has changed, the transformation has not been only spatial, socio-cultural, and economic; unavoidably, the sensory dimensions of Istiklal Street, which have great importance for the intangible qualities of this area, have begun to lose their distinctive features. This has created the challenge of this research.
As its main hypothesis, this study claims that the physical transformations have led to changes in the sensory characteristics of Istiklal Street; therefore, the sensescape of Istiklal Street deserves to be recorded, decoded, and promoted as expeditiously as possible to observe the sensory reflections of the physical transformations in this area. With the help of ‘sensewalking’, an efficient research tool for generating knowledge on the sensory dimensions of an urban settlement, this study suggests a way of ‘mapping’ to understand how changes of the physical setting play a role in the sensory qualities of Istiklal Street that have been changed or lost over time. Basically, this research focuses on the sensory mapping of Istiklal Street from the 1990s until today to picture, interpret, and criticize the ‘sensory mapping of Istiklal Street in the present’ and the ‘sensory mapping of Istiklal Street in the past’. Through this sensory mapping, the study intends to increase awareness of the distinctive sensory qualities of places. It is worthwhile for further studies that consider the sensory dimensions of places, especially in the field of architecture.
Keywords: Istiklal street, sense, sensewalking, sensory mapping
Procedia PDF Downloads 179
862 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework
Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari
Abstract:
The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs alongside integration strategies for BIM and IoT technologies, enabling real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are meticulously documented and shared within the healthcare and facility management communities. 
This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management, it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency
Procedia PDF Downloads 61
861 Prevalent Features of Human Infections with Highly Pathogenic Avian Influenza A(H7N9) Virus, China, 2017
Authors: Lei Zhou, Dan Li, Ruiqi Ren, Chao Li, Yali Wang, Daxin Ni, Zijian Feng, Timothy M. Uyeki, Qun Li
Abstract:
Since the first human infections with avian influenza A(H7N9) virus were identified in early 2013, 1,533 laboratory-confirmed cases of A(H7N9) virus infection had been reported as of September 13, 2017. The fifth epidemic was defined as starting on September 1, 2016, and the number of A(H7N9) cases surged from the end of December 2016. On February 18, 2017, the first A(H7N9) cases infected with highly pathogenic avian influenza (HPAI) virus were reported from southern China. After the HPAI A(H7N9) cases were identified, an investigation and analyses were conducted to assess whether disease severity in humans has changed with HPAI A(H7N9) compared with low pathogenic avian influenza (LPAI) A(H7N9) virus infection. Methods: All confirmed cases of A(H7N9) virus infection reported throughout mainland China from September 1, 2016, to September 13, 2017, were included. Case information was extracted from field investigation reports and the notifiable infectious disease surveillance system to describe demographic, clinical, and epidemiologic characteristics. Descriptive statistics were used to compare HPAI A(H7N9) cases with all LPAI A(H7N9) cases reported during the fifth epidemic. Results: A total of 27 cases of HPAI A(H7N9) virus infection were identified from five provinces: Guangxi (44%), Guangdong (33%), Hunan (15%), Hebei (4%), and Shaanxi (4%). The median age of HPAI A(H7N9) cases was 60 years (range, 15 to 80), and most were male (59%) and lived in rural areas (78%). All 27 cases had live-poultry-related exposures within 10 days before illness onset. In comparison with LPAI A(H7N9) case-patients, HPAI A(H7N9) case-patients were significantly more likely to live in rural areas (78% vs. 51%; p = 0.006), to have exposure to sick or dead poultry (56% vs. 19%; p < 0.001), and to be hospitalized earlier (median 3 vs. 4 days; p = 0.007).
No significant differences were observed in median age, sex, prevalence of underlying chronic medical conditions, or median time from illness onset to first seeking medical care, to starting antiviral treatment, or to diagnosis. Although the median time from illness onset to death (9 vs. 13 days) was shorter and the overall case-fatality proportion (48% vs. 38%) was higher for HPAI A(H7N9) case-patients than for LPAI A(H7N9) case-patients, these differences were not statistically significant. Conclusions: Our findings indicate that HPAI A(H7N9) virus infection was associated with exposure to sick and dead poultry in rural areas, whether at live poultry markets or in backyards. In the fifth epidemic in mainland China, HPAI A(H7N9) case-patients were hospitalized earlier than LPAI A(H7N9) case-patients. Although the difference was not statistically significant, the mortality of HPAI A(H7N9) case-patients was notably higher than that of LPAI A(H7N9) case-patients, indicating a potential change in the severity of HPAI A(H7N9) virus infection.
Keywords: Avian influenza A (H7N9) virus, highly pathogenic avian influenza (HPAI), case-patients, poultry
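Proportion comparisons of the kind reported above (e.g., 78% vs. 51% rural residence) can be reproduced with a standard two-proportion z-test. The sketch below uses the 21-of-27 rural count implied for the HPAI group; the LPAI group size is not given in the abstract, so the 306-of-600 figure is a purely hypothetical illustration chosen to match the reported 51%.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# HPAI: 21/27 rural (78%); LPAI: hypothetical 306/600 (51%, size assumed)
z, p = two_proportion_z_test(21, 27, 306, 600)
```

With these assumed group sizes the test returns a p-value in the same range as the abstract's reported p = 0.006; the actual value depends entirely on the true LPAI denominator.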
Procedia PDF Downloads 168
860 Effects of Lime and N100 on the Growth and Phytoextraction Capability of a Willow Variety (S. Viminalis × S. Schwerinii × S. Dasyclados) Grown in Contaminated Soils
Authors: Mir Md. Abdus Salam, Muhammad Mohsin, Pertti Pulkkinen, Paavo Pelkonen, Ari Pappinen
Abstract:
Soil and water pollution caused by extensive mining practices can adversely affect environmental components such as humans, animals, and plants. Despite a generally positive contribution to society, mining practices have become a serious threat to biological systems. As metals do not degrade completely, they require immobilization, toxicity reduction, or removal. A greenhouse experiment was conducted to evaluate the effects of lime and N100 (11-amino-1-hydroxyundecylidene) chelate amendment on the growth and phytoextraction potential of the willow variety Klara (S. viminalis × S. schwerinii × S. dasyclados) grown in soils heavily contaminated with copper (Cu). The plants were irrigated with tap or process water (mine wastewater). The sequential extraction technique and inductively coupled plasma mass spectrometry (ICP-MS) were used to determine the extractable metals and to evaluate the fraction of metals in the soil potentially available for plant uptake. The results suggest that the combined effects of the contaminated soil and process water inhibited growth. In contrast, the accumulation of Cu in the plant tissues increased compared to the control. When the soil was amended with lime and N100, growth parameters and resistance capacity were significantly higher than in the unamended soil treatments, especially for the contaminated soils. The combined lime- and N100-amended soil treatment produced higher biomass growth rates, resistance capacity, and phytoextraction efficiency than either the lime-amended or the N100-amended soil treatment alone. This study provides practical evidence of the efficient chelate-assisted phytoextraction capability of Klara and highlights its potential as a viable and inexpensive novel approach for in-situ remediation of Cu-contaminated soils and mine wastewaters.
Abandoned agricultural, industrial and mining sites can also be utilized by a Salix afforestation program without conflict with the production of food crops. This kind of program may create opportunities for bioenergy production and economic development, but contamination levels should be examined before bioenergy products are used.
Keywords: copper, Klara, lime, N100, phytoextraction
Procedia PDF Downloads 146
859 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger
Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans
Abstract:
Dehumidification of biogas at biomass plants is very important for the energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g., chiller/heat-exchanger-based cooling, the usage of different adsorption processes such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it, downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in both channels are conjugated in the case of low thermal resistance between layers. The MATLAB programming language is used for multiphysical model development, numerical calculations, and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional 2 layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for an estimation of the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model
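The amount of water that drops out as the gas shell cools can be estimated from saturation vapor pressures alone, independently of the full conjugate heat-transfer model. The sketch below uses the Magnus approximation for the saturation pressure and assumes water-vapor-saturated gas at the digester temperature; the temperatures and the ideal-gas treatment are illustrative assumptions, not values taken from the paper.

```python
import math

R_V = 461.5  # specific gas constant of water vapor, J/(kg K)

def saturation_pressure(t_c):
    # Magnus approximation for saturation vapor pressure, Pa
    # (valid roughly from -40 to +50 degrees Celsius)
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def vapor_density(t_c):
    # density of saturated water vapor via the ideal gas law, kg/m^3
    return saturation_pressure(t_c) / (R_V * (t_c + 273.15))

def condensate_per_m3(t_in_c, t_out_c):
    # water condensed when saturated gas cools from t_in to t_out,
    # neglecting the change of gas volume during cooling
    return max(0.0, vapor_density(t_in_c) - vapor_density(t_out_c))

summer = condensate_per_m3(38.0, 20.0)  # ~38 C digester gas cooled to 20 C
winter = condensate_per_m3(38.0, 0.0)   # same gas cooled to 0 C
```

This simple balance gives roughly 0.03 kg of condensate per cubic meter of gas in summer and somewhat more in winter, which indicates the order of magnitude of water the drainage point must handle.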
Procedia PDF Downloads 550
858 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany's total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, beyond the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for supporting decision and policy makers. LCA and LCC results are based on models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or only studied superficially. The effect of parameter uncertainties cannot be neglected: based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow for a first rough view of the results but do not take into account effects such as error propagation. Thereby, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and to provide better decision support.
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effects of different uncertainty analyses on the interpretation in terms of resilience and robustness are shown. Hereby, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
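The Monte Carlo approach described above amounts to replacing point-valued input parameters with distributions, re-evaluating the lifecycle model many times, and reading robustness off the resulting output distribution instead of off a best/worst pair. The sketch below does this for a deliberately toy lifecycle-cost model of an insulated wall; the cost model, the distributions, and all numerical values are invented for illustration and are not the study's actual LCA/LCC model.

```python
import random

random.seed(42)

def lifecycle_cost(conductivity, energy_price, lifetime):
    # toy LCC model for an insulated exterior wall (illustrative only):
    # heating demand scales with thermal transmittance, plus material cost
    thickness, area = 0.2, 100.0        # m, m^2
    u_value = conductivity / thickness  # W/(m^2 K), ignoring other layers
    annual_kwh = u_value * area * 70.0  # 70 = made-up degree-hour proxy
    material_cost = 40.0 * area
    return material_cost + annual_kwh * energy_price * lifetime

# uncertain inputs drawn from distributions instead of point values
samples = [
    lifecycle_cost(
        conductivity=random.gauss(0.035, 0.004),  # W/(m K)
        energy_price=random.uniform(0.08, 0.20),  # EUR/kWh
        lifetime=random.triangular(30, 60, 50),   # years
    )
    for _ in range(10_000)
]

samples.sort()
p5, p50, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.50, 0.95))
```

The 5th/50th/95th percentiles summarize the whole output distribution, so a decision maker sees how likely an unfavorable outcome actually is, rather than a single worst-case bound with unknown probability.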
Procedia PDF Downloads 286
857 Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia
Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić
Abstract:
Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. The trends among publicly managed campgrounds seem to be moving away from rustic campgrounds that provide only a tent pad and a fire ring toward more developed facilities that offer a range of different amenities, where campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly over the last years, coastal camping in particular became valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, a demand for quality servicing infrastructure emerged in order to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain an under-researched area of the tourism and consumer behavior literature. In this line, very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential re-visitation and recommendation intention. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Due to its well-preserved nature and indented coastline, camping tourism has a long tradition in Croatia and represents one of its most important and most developed tourism products. During the last decade, the number of tourist overnights in Croatian camps has increased by 26%, amounting to 16.5 million in 2014.
Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost twice as large as the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75.8 thousand accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay compared to the national average across all types of accommodation. In order to explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and an impact-asymmetry analysis (IAA). Using the PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner. Using the IAA, the extracted factors are investigated for their potential to create customer delight and/or frustration. The results provide valuable insight to both researchers and practitioners regarding the understanding of campsite satisfaction.
Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction
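The impact-asymmetry step can be sketched as a penalty-reward contrast analysis: each experience dimension is dummy-coded for low and high performance, overall satisfaction is regressed on those dummies, and the asymmetry between the reward and penalty coefficients indicates delight versus frustration potential. The sketch below runs this on synthetic ratings; the thresholds, scales, and data are illustrative assumptions, and the preceding PCA step is omitted for brevity.

```python
import numpy as np

def penalty_reward(attribute, overall):
    """Penalty-reward contrast for one experience dimension (7-point scale)."""
    low = (attribute <= 2).astype(float)   # penalty dummy: poor performance
    high = (attribute >= 6).astype(float)  # reward dummy: excellent performance
    X = np.column_stack([np.ones_like(attribute), low, high])
    beta, *_ = np.linalg.lstsq(X, overall, rcond=None)
    penalty, reward = beta[1], beta[2]
    # impact-asymmetry index: > 0.5 means delight potential dominates
    ia_index = reward / (reward + abs(penalty))
    return penalty, reward, ia_index

# synthetic campers: 3 attribute ratings driving overall satisfaction
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(300, 3)).astype(float)
overall = ratings @ np.array([0.5, 0.3, 0.2]) + rng.normal(0.0, 0.5, 300)

penalty, reward, ia = penalty_reward(ratings[:, 0], overall)
```

In the study itself the regressors would be the PCA-extracted experience factors rather than raw attribute ratings, and the asymmetry index would classify each factor as a delighter, a frustrator, or a hybrid.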
Procedia PDF Downloads 187
856 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology
Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam
Abstract:
The identification of victims of human trafficking, and the consequent service provision, is characterized by a significant disconnect between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources, and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person's life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant's profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts to qualify either as human trafficking cases or not. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts' diagnostic and the application of the IHTC, with an improvement of 40% in identification when compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerabilities as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking.
In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human services settings is discussed as a pivotal piece of helping victims restore their sense of dignity and advocate for legal, physical, and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for the enhancement of victim well-being, restoration engagement, and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.
Keywords: exploitation, human trafficking, measurement, vulnerability, screening
Procedia PDF Downloads 331
855 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only during plant turnarounds, it has come into widespread use in the gas industry. Inaccuracy of measurement in fiscal metering stations may well be the most significant factor behind mischarges in the natural gas industry in Libya. Even a trivial error in measurement can add up to a rapidly escalating financial burden in custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at several million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing the knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a rapid pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications. Getting deeper into the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are among the greatest advantages of CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The computed discharge coefficients were compared with discharge coefficients estimated by ISO 5167. 
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and the perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are considered the ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
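The ISO 5167 relationship referenced in this abstract ties the discharge coefficient to the measured flow. As a rough illustrative sketch (not taken from the paper), the standard orifice mass-flow equation can be coded as follows; the differential pressure, air density, and the discharge coefficient value of 0.605 are assumed for demonstration only, and the expansibility factor is set to 1 (incompressible approximation):

```python
import math

def orifice_mass_flow(C, beta, d, dp, rho, eps=1.0):
    """Mass flow rate through an orifice plate (ISO 5167 form).

    C    : discharge coefficient (dimensionless)
    beta : diameter ratio d/D
    d    : orifice bore diameter [m]
    dp   : differential pressure across the tappings [Pa]
    rho  : upstream fluid density [kg/m^3]
    eps  : expansibility factor (1.0 = incompressible approximation)
    """
    E = 1.0 / math.sqrt(1.0 - beta**4)   # velocity-of-approach factor
    area = math.pi / 4.0 * d**2          # bore cross-sectional area
    return C * E * eps * area * math.sqrt(2.0 * dp * rho)

# Illustrative values loosely based on the modelled case (2 in pipe, beta = 0.5);
# the pressure drop, density, and C are assumptions, not results from the paper.
D = 0.0508          # 2 in pipe diameter [m]
beta = 0.5
d = beta * D        # orifice bore [m]
qm = orifice_mass_flow(C=0.605, beta=beta, d=d, dp=25000.0, rho=1.2)
print(round(qm, 4))  # mass flow in kg/s
```

Varying the plate specifications in a CFD model shifts the effective C, which in turn shifts the flow predicted by this equation.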
Procedia PDF Downloads 119
854 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal
Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader
Abstract:
DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an e-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSNs). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both the single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. 
In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform
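The paper does not specify the frame payload format, but the sink node's role of collecting simultaneous ECG streams from several sensor nodes can be sketched as below, assuming a hypothetical CSV payload of the form `node_id,timestamp_ms,ecg_mV` for each received frame:

```python
from collections import defaultdict

def parse_ecg_frames(lines):
    """Group ECG samples received at the sink node by sensor node.

    Assumes each DigiMesh frame payload is a CSV record of the form
    'node_id,timestamp_ms,ecg_mV' (this format is illustrative; the
    paper does not specify the payload layout). Malformed records
    are skipped rather than crashing the monitor.
    """
    streams = defaultdict(list)
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 3:
            continue
        node_id, ts, mv = parts
        try:
            streams[node_id].append((int(ts), float(mv)))
        except ValueError:
            continue
    # Keep each patient's samples in time order for plotting/analysis.
    for samples in streams.values():
        samples.sort()
    return dict(streams)

# Two sensor nodes reporting simultaneously, as in the multi-hop experiment.
payloads = [
    "node1,0,0.12", "node2,0,0.08",
    "node1,4,0.35", "node2,4,-0.02",
    "garbled-frame",          # dropped
]
streams = parse_ecg_frames(payloads)
print(sorted(streams))        # ['node1', 'node2']
print(len(streams["node1"]))  # 2
```

On the real platform the `lines` iterable would be fed by the XBee serial interface on the Raspberry Pi rather than an in-memory list.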
Procedia PDF Downloads 81
853 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a file storage system), and security groups offers several key benefits for organizations. Creating a performance test framework using this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. 
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.Keywords: identifying crashes of applications under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
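The master's consolidation step described above can be sketched as follows; the report field names (`requests`, `errors`, `elapsed_s`, `response_times_ms`) are illustrative assumptions, not JMeter's actual result schema, which carries richer per-sample records in `.jtl` files:

```python
def consolidate(slave_reports):
    """Merge per-slave JMeter-style summaries into one master report.

    Each report is a dict with 'requests', 'errors', 'elapsed_s', and
    'response_times_ms' (field names are illustrative assumptions).
    """
    total_requests = sum(r["requests"] for r in slave_reports)
    total_errors = sum(r["errors"] for r in slave_reports)
    all_times = [t for r in slave_reports for t in r["response_times_ms"]]
    wall_clock = max(r["elapsed_s"] for r in slave_reports)  # slaves run in parallel
    return {
        "requests": total_requests,
        "error_rate": total_errors / total_requests,
        "avg_response_ms": sum(all_times) / len(all_times),
        "throughput_rps": total_requests / wall_clock,
    }

# Two slave nodes, each having fired half of the load.
report = consolidate([
    {"requests": 1000, "errors": 10, "elapsed_s": 60,
     "response_times_ms": [120, 180, 150]},
    {"requests": 1000, "errors": 30, "elapsed_s": 58,
     "response_times_ms": [200, 160, 190]},
])
print(report["requests"])    # 2000
print(report["error_rate"])  # 0.02
```

Taking the maximum elapsed time (rather than the sum) reflects that the slaves generate load concurrently, which is the point of the distributed setup.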
Procedia PDF Downloads 30
852 Impinging Acoustics Induced Combustion: An Alternative Technique to Prevent Thermoacoustic Instabilities
Authors: Sayantan Saha, Sambit Supriya Dash, Vinayak Malhotra
Abstract:
Efficient propulsive system development is an area of major interest and concern in the aerospace industry. Combustion forms the most reliable and basic form of propulsion for ground and space applications. The generation of a large amount of energy from a small volume relates mostly to flaming combustion. This study deals with instabilities associated with flaming combustion. Combustion is always accompanied by acoustics, be it external or internal. Chemical-propulsion-oriented rockets and space systems are well known to encounter acoustic instabilities. Acoustics brings in changes in inter-energy conversion and alters the reaction rates. Modified heat fluxes, owing to wall temperature, reaction rates, and non-linear heat transfer, are observed. Thermoacoustic instabilities significantly reduce combustion efficiency, leading to uncontrolled liquid rocket engine performance, serious hazards to systems and associated testing facilities, and an enormous loss of resources; every year a substantial amount of money is spent to prevent them. The present work attempts to fundamentally understand the mechanisms governing thermoacoustic combustion in liquid rocket engines using a simplified experimental setup comprising a butane cylinder and an impinging acoustic source. A rocket engine produces sound pressure levels in excess of 153 dB; the RL-10 engine generates noise of 180 dB at its base. Systematic studies are carried out for varying fuel flow rates and acoustic levels, and observations are made on the flames. The work is expected to yield good physical insight into the development of acoustic devices that, when coupled with present propulsive devices, could effectively enhance combustion efficiency, leading to better and safer missions. 
The results would be utilized to develop impinging acoustic devices that impinge sound on the combustion chambers, leading to stable combustion and thus improving specific fuel consumption and specific impulse, reducing emissions, and enhancing performance and fire safety. The results can be effectively applied to terrestrial and space applications.Keywords: combustion instability, fire safety, improved performance, liquid rocket engines, thermoacoustics
Procedia PDF Downloads 146
851 The Effect of Mesenchymal Stem Cells on Full Thickness Skin Wound Healing in Albino Rats
Authors: Abir O. El Sadik
Abstract:
Introduction: Wound healing involves the interaction of multiple biological processes among different types of cells, the intercellular matrix, and specific signaling factors, producing enhanced cell proliferation of the epidermis over dermal granulation tissue. Several studies have investigated multiple strategies to promote wound healing and to minimize infection and fluid losses. However, burn crises and their related morbidity and mortality remain elevated. The aim of the present study was to examine the effects of mesenchymal stem cells (MSCs) in accelerating wound healing and to compare the most efficient route of administration of MSCs, either intradermal or systemic injection, with a focus on the mechanisms producing epidermal and dermal cell regeneration. Material and methods: Forty-two adult male Sprague Dawley albino rats were divided into three equal groups (fourteen rats in each group): group I (control): full-thickness surgical skin wound model; group II: wound treated with systemic injection of MSCs; and group III: wound treated with intradermal injection of MSCs. The healing ulcer was examined on days 2, 6, 10, and 15 for gross morphological evaluation and on days 10 and 15 for fluorescent, histological, and immunohistochemical studies. Results: The wounds of the control group did not reach complete closure by the end of the experiment. In the MSC-treated groups, better and faster healing of wounds was detected than in the control group. Moreover, the intradermal route of administration of stem cells increased the rate of healing of the wounds more than systemic injection did. In addition, the wounds were found completely healed by the end of the fifteenth day of the experiment in all rats of the intradermally injected group. Microscopically, the wound areas of group III were hardly distinguishable from the adjacent normal skin, with complete regeneration of all skin layers: epidermis, dermis, hypodermis, and underlying muscle layer. 
Fully regenerated hair follicles and sebaceous glands in the dermis of the healed areas, surrounded by different arrangements of collagen fibers with a significant increase in their area percent, were recorded in this group more than in the other groups. Conclusion: MSCs accelerate the healing process of wound closure. The route of administration of MSCs has a great influence on wound healing, as intradermal injection of MSCs was more effective in enhancing wound healing than systemic injection.Keywords: intradermal, mesenchymal stem cells, morphology, skin wound, systemic injection
Procedia PDF Downloads 203
850 The Significance of Picture Mining in the Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to using pictures and photographs in research in the social sciences since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, which is one of the new approaches to such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with the methods of text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speed, low costs for information transfer, and the high performance and resolution of mobile phone cameras have changed the photographing behavior of people. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called ‘participant-generated photography’ or ‘respondent-generated visual imagery’, which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. We have worked in recent years to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). We have identified the most promising fields of Picture Mining, inductively and through case studies, in the following areas: 1) Research in Consumer and Customer Lifestyles. 2) New Product Development. 3) Research in Fashion and Design. 
Though we have found that it will be useful in these fields and areas, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. In order to do so, we conducted an empirical study of respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion and upload them online, such as to Facebook and Instagram, compared to meals and food, because of the difficulty of taking them. We concluded that we should be more careful in analyzing pictures in the fashion area, for some kind of bias may still exist even though the environment for pictures has changed drastically in recent years.Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 363
849 Assessment of the Living Conditions of Female Inmates in Correctional Service Centres in South West Nigeria
Authors: Ayoola Adekunle Dada, Tolulope Omolola Fateropa
Abstract:
There is no gainsaying the fact that the Nigerian correctional services lack rehabilitation and reformation. Owing to this, many inmates, including females, become more emotionally bruised and hardened instead of coming out of prison reformed. Although female inmates constitute only a small percentage worldwide, the challenges resulting from women falling under the provisions of the penal system have prompted official and humanitarian bodies to consider female inmates as vulnerable persons who need particular social work measures that meet their specific needs. Female inmates' condition may become worse in prison due to the absence of standard living conditions. A survey of 100 female inmates will be used to assess the living conditions of female inmates within the contexts in which they occur. Employing field methods from medical sociology and law, the study seeks to make use of the collaboration of both disciplines for a comprehensive understanding of the scenario. Its specific objectives encompass: (1) to examine access to and use of health facilities among the female inmates; (2) to examine the effect of officers'/warders' attitudes towards female inmates; (3) to investigate the perception of the female inmates towards the housing facilities in the centre; and (4) to investigate the feeding habits of the female inmates. Due to the exploratory nature of the study, the researchers will make use of mixed methods: qualitative methods such as interviews will be undertaken to complement survey research (quantitative). By adopting the above-explained inter-method triangulation, the study will not only ensure that the advantages of both methods are exploited but will also fulfil the basic purposes of research. The sampling for this study will be purposive. The study aims at sampling two correctional centres (Ado Ekiti and Akure) in order to generate representative data for female inmates in South West Nigeria. 
In all, the total number of respondents will be 100. A cross-section of female inmates will be selected as respondents using a multi-stage sampling technique, and 100 questionnaires will be administered. Semi-structured (in-depth) interviews will be conducted among workers in the two selected correctional centres to gain further insight into the living conditions of female inmates, which the survey may not readily elicit. These participants will be selected purposively with respect to their status in the organisation. Ethical issues in research on human subjects will be given due consideration. Such issues rest on the principles of beneficence, non-maleficence, autonomy/justice, and confidentiality. In the final analysis, qualitative data will be analyzed using manual content analysis. Both descriptive and inferential statistics will be used for analytical purposes. Frequency, simple percentage, pie charts, bar charts, curves, and cross-tabulations will form part of the descriptive analysis.Keywords: assessment, health facilities, inmates, perception, living conditions
Procedia PDF Downloads 98
848 Personal Data Protection: A Legal Framework for Health Law in Turkey
Authors: Veli Durmus, Mert Uydaci
Abstract:
Every patient who needs medical treatment must share health-related personal data with healthcare providers. Therefore, personal health data plays an important role in making health decisions and identifying health threats during every encounter between a patient and caregivers. In other words, health data can be defined as private and sensitive information that is protected by various health laws and regulations. In many cases, the data are an outcome of the confidential relationship between patients and their healthcare providers. Globally, almost all nations have their own laws, regulations, or rules in order to protect personal data. There is a variety of instruments that allow authorities to use health data or to set barriers to data sharing across international borders. For instance, Directive 95/46/EC of the European Union (EU) (also known as the EU Data Protection Directive) establishes harmonized rules within European borders. In addition, the General Data Protection Regulation (GDPR) will set further common principles in 2018. Because of Turkey's close policy relationship with the EU, this study provides not only information on regulations and directives but also on how they play a role during the legislative process in Turkey. Even if the decision is controversial, the Board has recently stated that private and public healthcare institutions are responsible for the patient call systems used by doctors to call people waiting outside a consultation room, in order to prevent unlawful processing of, and unlawful access to, personal data during treatment. In Turkey, the vast majority of private and public health organizations provide a service that uses personal data (i.e., the patient’s name and ID number) to call the patient. According to the Board's decision, hospitals and other healthcare institutions are obliged to take all necessary administrative precautions and provide technical support to protect patient privacy. 
However, this practice does not perform effectively and efficiently in most health services. For this reason, it is important to draw a legal framework for personal health data by stating the main purpose of this regulation and how to deal with complicated issues concerning personal health data in Turkey. The research is descriptive, covering data protection law for healthcare settings in Turkey. Primary as well as secondary data have been used for the study. The primary data include information collected under current national and international regulations or law. Secondary data include publications, books, journals, and empirical legal studies. Consequently, privacy and data protection regimes in health law show there are some obligations, principles, and procedures which shall be binding upon natural or legal persons who process health-related personal data. A comparative approach shows there are significant differences among some EU member states due to different legal competencies, policies, and cultural factors. This study provides theoretical and practitioner implications by highlighting the need to illustrate the relationship between privacy and confidentiality in personal data protection in health law. Furthermore, this paper would help to define the legal framework for health law case studies on data protection and privacy.Keywords: data protection, personal data, privacy, healthcare, health law
Procedia PDF Downloads 226
847 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that a number of impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs only at one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
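The Tikhonov-regularized deconvolution at the core of this approach can be sketched on synthetic data as below; the exponentially decaying impulse response, the half-sine force shape, and the regularization parameter are assumptions for illustration, not values taken from the panel measurements:

```python
import numpy as np

def tikhonov_deconvolve(response, impulse, lam):
    """Reconstruct a force history f from a measured response y = A f.

    A is the lower-triangular Toeplitz (convolution) matrix built from
    the system's impulse response; lam is the Tikhonov regularization
    parameter (chosen in the paper via the L-curve or GCV methods).
    """
    n = len(response)
    A = np.zeros((n, n))
    for i in range(n):
        A[i:, i] = impulse[: n - i]          # shifted copies of the impulse response
    lhs = A.T @ A + lam * np.eye(n)          # (A^T A + lam * I)
    return np.linalg.solve(lhs, A.T @ response)

# Synthetic check: a half-sine impact convolved with an assumed decaying
# impulse response, then recovered by regularized deconvolution.
n = 64
t = np.arange(n)
h = np.exp(-t)                                            # assumed impulse response
f_true = np.where(t < 10, np.sin(np.pi * t / 10.0), 0.0)  # half-sine force
y = np.convolve(h, f_true)[:n]                            # measured response y = A f
f_est = tikhonov_deconvolve(y, h, lam=1e-6)
corr = np.corrcoef(f_true, f_est)[0, 1]   # the paper's accuracy criterion
print(corr > 0.99)                         # the half-sine is recovered almost exactly
```

In the extended (multi-location) version, the columns of A would stack the transfer behavior of each candidate impact location, so the same solve simultaneously localizes the impact and recovers its magnitude.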
Procedia PDF Downloads 536
846 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. 
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The classifier effectively resolves outliers' effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
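The kernel-based membership function described above (a function of each class's center and radius in feature space) can be sketched as follows. This follows the common center/radius construction for fuzzy SVMs with a hyperbolic tangent kernel; the exact membership formula used in PFRSVM may differ, and the kernel parameters are assumed:

```python
import numpy as np

def tanh_kernel(x, y, a=0.01, c=0.0):
    """Hyperbolic tangent (sigmoid) kernel K(x, y) = tanh(a * <x, y> + c)."""
    return np.tanh(a * np.dot(x, y) + c)

def fuzzy_memberships(X, delta=1e-3, a=0.01, c=0.0):
    """Membership of each training sample in its own class, computed from
    the kernel-space distance to the class center (one class shown here).

    Squared kernel distance of sample i to the class center:
      d_i^2 = K(xi, xi) - (2/n) * sum_j K(xi, xj) + (1/n^2) * sum_jk K(xj, xk)
    """
    n = len(X)
    K = np.array([[tanh_kernel(X[i], X[j], a, c) for j in range(n)]
                  for i in range(n)])
    center_term = K.mean()                      # (1/n^2) sum_jk K(xj, xk)
    d2 = np.diag(K) - 2.0 * K.mean(axis=1) + center_term
    d = np.sqrt(np.maximum(d2, 0.0))            # clamp tiny negative round-off
    radius = d.max()                            # class radius in feature space
    return 1.0 - d / (radius + delta)           # far-from-center samples score low

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(20, 3))
X[0] = [8.0, 8.0, 8.0]                          # an injected outlier
m = fuzzy_memberships(X)
print(m[0] == m.min())  # the injected outlier receives the smallest membership
```

Down-weighting such outliers in the SVM objective is what gives the fuzzy rough formulation its robustness to noisy samples.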
Procedia PDF Downloads 491
845 Setting up a Prototype for the Artificial Interactive Reality Unified System to Transform Psychosocial Intervention in Occupational Therapy
Authors: Tsang K. L. V., Lewis L. A., Griffith S., Tucker P.
Abstract:
Background: Many children with high-incidence disabilities, such as autism spectrum disorder (ASD), struggle to participate in the community in a socially acceptable manner. Clinical settings are limited in their ability to provide natural, real-life scenarios for practicing the life skills needed to meet real-life challenges. Virtual reality (VR) offers potential solutions to the existing limitations faced by clinicians in creating simulated natural environments for their clients to generalize the facilitated skills. Research design: The research aimed to develop a prototype of an interactive VR system to provide realistic and immersive environments for clients to practice skills. A descriptive qualitative methodology is employed to design and develop the Artificial Interactive Reality Unified System (AIRUS) prototype, which provided insights into how to use advanced VR technology to create simulated real-life social scenarios and enable users to interact with the objects and people inside the virtual environment using natural eye gaze and hand and body movements. The eye-tracking (e.g., selective or joint attention), hand- or body-tracking (e.g., repetitive stimming or fidgeting), and facial-tracking (e.g., emotion recognition) functions allow behavioral data to be captured and managed in the AIRUS architecture. Impact of project: Instead of using external controllers or sensors, hand-tracking software enables users to interact naturally with the simulated environment using everyday behaviors such as handshaking and waving to control and interact with the virtual objects and people. The AIRUS protocol offers opportunities for breakthroughs in future VR-based psychosocial assessment and intervention in occupational therapy. Implications for future projects: AI technology can allow more efficient data capturing and interpretation of object identification and human facial emotion recognition at any given moment. 
The data points captured can be used to pinpoint our users’ focus and where their interests lie. AI can further help advance the data interpretation system.Keywords: occupational therapy, psychosocial assessment and intervention, simulated interactive environment, virtual reality
Procedia PDF Downloads 39
844 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR), despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help stakeholders better understand energy-efficiency parameters and obtain reliable ratings for the parts of a building. To evaluate this hypothesis, the methodology included the construction of a software prototype. Current energy certification systems do not follow an intuitive data-entry system, nor do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. By means of real-time visualization and a graphical user interface, this software proposes several improvements to current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, because current interfaces are neither friendly nor intuitive, untrained users usually get a poor idea of the grounds for certification and of how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user evaluate whether an investment to improve the materials of an installation is worth the cost, in terms of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing the energy rating of the different elements of the building in a more intuitive and simple manner. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. 
Working in real time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that must be entered to calculate the final certification. The prototype also allows the building to be visualized in efficiency mode, which lets the user move through the building to analyze thermal bridges or other energy-efficiency data. This research also finds that the visual representation of energy-efficiency certifications makes it easy for stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.
Keywords: energetic certification, virtual reality, augmented reality, sustainability
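The investment-versus-savings evaluation described above can be sketched as a simple payback calculation. The function names, the euro figures, and the grid emission factor below are placeholder assumptions for illustration, not values from the VEE-IS prototype.

```python
# Illustrative sketch of the cost/saving comparison such a tool could
# surface; all figures and defaults are assumptions.
def simple_payback_years(investment_eur: float, annual_saving_eur: float) -> float:
    """Years needed for energy savings to repay a material upgrade."""
    if annual_saving_eur <= 0:
        return float("inf")
    return investment_eur / annual_saving_eur

def co2_saving_kg(annual_kwh_saved: float,
                  grid_factor_kg_per_kwh: float = 0.25) -> float:
    """Annual CO2 avoided; the grid emission factor is a placeholder."""
    return annual_kwh_saved * grid_factor_kg_per_kwh

# Example: facade insulation upgrade, 6000 EUR upfront, 750 EUR/year saved.
print(simple_payback_years(6000, 750))  # 8.0 years
print(co2_saving_kg(3000))              # 750.0 kg CO2/year
```

In a VR interface, such numbers could be recomputed live as the user swaps materials in the model.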
Procedia PDF Downloads 188
843 Sustainability in Space: Material Efficiency in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
Addressing fundamental questions about the history of the solar system and exploring other planets for any signs of life have always been at the core of human space exploration. This has driven humans to explore whether other planets, such as Mars, could support human life. Many space missions to other planets have therefore been designed and conducted to examine the feasibility of human survival on them. However, space missions are expensive and consume large quantities of resources. To overcome these problems, material efficiency should be maximized through the use of reusable launch vehicles (RLVs) rather than disposable, expendable ones. Material efficiency is defined as a way to achieve service requirements using fewer materials, thereby reducing CO2 emissions from industrial processes. Materials used in the manufacture of spacecraft, such as aluminum-lithium alloys, steel, Kevlar, and reinforced carbon-carbon composites, could be reused in closed-loop cycles, either directly or after adding a protective coat. Material efficiency is a fundamental principle of a circular economy, which aims to cut waste and reduce pollution by maximizing material efficiency so that businesses can succeed and endure. Five strategies are proposed to improve material efficiency in the space industry, including minimizing waste, introducing key performance indicators (KPIs) to measure material efficiency, and introducing policies and legislation to improve material efficiency in the space sector. Another strategy is to maximize resource and energy efficiency through material reusability. Furthermore, the environmental effects associated with the rapid growth in the number of space missions include black carbon emissions that contribute to climate change; emission levels must be tracked and tackled to ensure the safe utilization of space in the future. 
The aim of this research paper is to examine and suggest effective methods for improving material efficiency in space missions so that space and Earth become more environmentally and economically sustainable. To fulfill this aim, the objectives are to identify the materials used in space missions that are suitable for reuse in closed-loop cycles, considering material efficiency indicators and circular economy concepts; to explain how spacecraft materials could be reused; and to propose strategies to maximize material efficiency so that RLVs become viable and access to space becomes affordable and reliable. The economic viability of RLVs is also examined, to show the extent to which their use reduces space mission costs, and the environmental and economic implications of the resulting increase in the number of space missions are discussed. These research questions are studied through a detailed critical analysis of the literature, including published reports, books, scientific articles, and journals. A combination of keywords, such as material efficiency, circular economy, RLVs, and spacecraft materials, was used to search for appropriate literature.
Keywords: access to space, circular economy, material efficiency, reusable launch vehicles, spacecraft materials
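The economic-viability question raised above reduces, in its simplest form, to a break-even count: after how many launches does one refurbished airframe undercut a string of expendable vehicles? The sketch below illustrates that arithmetic; all cost figures are hypothetical placeholders, not data from the paper.

```python
# Back-of-the-envelope RLV economics; every number here is an assumption.
def cost_expendable(n_launches: int, build_cost: float) -> float:
    """Total cost of flying n expendable vehicles, one build per launch."""
    return n_launches * build_cost

def cost_reusable(n_launches: int, build_cost: float, refurb_cost: float) -> float:
    """One airframe built once, then refurbished before each reuse."""
    return build_cost + max(n_launches - 1, 0) * refurb_cost

def break_even_launches(build_exp: float, build_rlv: float, refurb: float) -> int:
    """Smallest launch count at which the reusable fleet is cheaper."""
    n = 1
    while cost_reusable(n, build_rlv, refurb) >= cost_expendable(n, build_exp):
        n += 1
    return n

# Assumed: expendable 60 M$ per build; RLV 90 M$ to build, 10 M$ per refurbishment.
print(break_even_launches(60.0, 90.0, 10.0))  # → 2
```

Under these illustrative numbers the RLV pays off from the second launch; the real trade-off also depends on refurbishment turnaround and payload penalties, which the sketch omits.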
Procedia PDF Downloads 115
842 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. The primary objective of the study is therefore to suggest measures that may improve the performance of PSUs. To achieve this objective, two normative frameworks are put forth: one relating to risk levels and the other to governance structure. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, each scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the presence of a risk management committee, each also scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers the 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) are used as proxies for firm performance. The control variables in the model include the age, growth rate, and size of the firm; a dummy variable is also used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a chief risk officer, and (iii) constitute a risk management committee. The results also indicate a negative association between the risk index and returns. 
These results not only validate the framework used to develop the risk index but also provide a yardstick against which PSUs can benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even with mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and mandate adherence to the normative frameworks put forth in the study, PSUs may achieve more effective and efficient decision-making, lower risks, and hassle-free management, ultimately leading to better ROA and ROE.
Keywords: PSU, risk governance, diff-GMM, firm performance, risk index
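The index construction described in the abstract, component variables each scored 1 to 5 and combined into a composite, can be sketched as follows. The aggregation used here (an unweighted mean rescaled to 0-100) is our assumption for illustration; the abstract does not state the authors' exact aggregation formula.

```python
# Minimal composite-index sketch: components scored 1-5, combined into
# a 0-100 index. The equal weighting is an assumption, not the paper's.
def composite_index(scores: dict) -> float:
    for name, s in scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{name} must be scored on a 1-5 scale")
    mean = sum(scores.values()) / len(scores)
    return (mean - 1) / 4 * 100  # rescale mean from [1, 5] to [0, 100]

# Three of the nine component risks, with hypothetical scores; the
# remaining six would follow the same pattern.
risk_scores = {"solvency": 2, "liquidity": 3, "accounting": 4}
print(round(composite_index(risk_scores), 1))  # 50.0
```

The eleven-variable governance index would use the same machinery with governance components (board independence, diversity, risk management committee, and so on) in place of the risk components.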
Procedia PDF Downloads 159
841 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment
Authors: Ryan C. Igama
Abstract:
The complexity of disaster risk reduction and management has paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disasters. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue, and retrieval teams. Indeed, the assistance these teams provide during disaster operations is a critical service for minimizing the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief (HADR) operations during calamities and disasters. Thus, this study, “Interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment,” was intended to provide substantial information to further strengthen and promote search and rescue capabilities in the Philippines. The study also aims to assess the interoperability of the 505th Search and Rescue Group (505th SRG) and the 205th Tactical Helicopter Wing (205th THW) of the Philippine Air Force (PAF). It covered these two component units, whose personnel also acted as the respondents of the study. A qualitative approach was utilized, in the form of focused group discussions, key informant interviews, and documentary analysis, as the primary means of obtaining the data needed for the study. Essentially, the study was geared towards evaluating the effectiveness of the interoperability of the two PAF units during search and rescue operations. 
Further, it identified the impacts, gaps, and challenges confronted regarding interoperability in terms of training, equipment, and coordination mechanisms, together with the measures needed for improvement. The results showed that there was duplication of functions and tasks in HADR activities, specifically during the conduct of air rescue operations in situations such as calamities. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, a vital element during calamity response activities. Based on these findings, it was recommended that a strategic planning workshop be conducted on the duties and responsibilities of the personnel involved in search and rescue operations, to address the command-and-control and interoperability issues of these units. Intensive HADR-related training for the personnel of the two PAF units should also be conducted so they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies should be updated to adapt to the evolving situations in search and rescue operations.
Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response
Procedia PDF Downloads 94
840 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application
Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli
Abstract:
Considering the severity of water scarcity around the world, and that it is worsening at an alarming rate, practical improvements in desalination techniques need to be engineered as soon as possible. Atomization is a major aspect of the flashing phenomenon, yet it has received comparatively little attention until now, and more efficient methods of atomization for the flashing process need to be tested. Flash evaporation, alongside reverse osmosis, is a commercially mature desalination technique, best known in the form of multi-stage flash (MSF) distillation. Even though reverse osmosis is widely used, it is not as economical or sustainable as flash evaporation. However, flash evaporation has drawbacks as well, such as a low yield of water for the power and time consumed. Flash evaporation is simply the instant boiling of a subcooled liquid introduced as droplets into a well-maintained low-pressure environment. The reduced pressure inside the vacuum chamber lowers the boiling point far below the temperature of the liquid droplets; the resulting superheat drives part of each droplet to vaporize, and the vapor is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation, and it is the heart of the spray flash process, since it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) at a given flow rate. This review comprehensively summarizes selected results relating to the methods of atomization and their effectiveness on the evaporation rate, from earlier works to date. 
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, namely ultra-fine droplets, uniform droplet density, and a swirling spray geometry that makes the sprays kinetically more energetic during their flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail, and a schematic of the integration of an RBA with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it could be particularly advantageous.
Keywords: atomization, desalination, flash evaporation, rotary bell atomizer
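The energy balance behind the flashing described above can be made concrete with a standard estimate: a droplet superheated by ΔT above the (vacuum-lowered) saturation temperature flashes off a vapor mass fraction of roughly x ≈ cp·ΔT / h_fg. The property values below are rounded textbook figures for water, used only for illustration.

```python
# Flashed-fraction sketch for spray flash evaporation of water.
# Property values are rounded textbook figures, for illustration only.
CP_WATER = 4.18   # kJ/(kg*K), specific heat of liquid water
H_FG = 2257.0     # kJ/kg, latent heat of vaporization near 100 C

def flashed_fraction(superheat_K: float) -> float:
    """Approximate mass fraction of a droplet that flashes to vapor."""
    return CP_WATER * superheat_K / H_FG

# A 10 K superheat flashes only ~1.85% of the droplet mass, which is
# why fine atomization (large surface area per drop) matters so much.
print(round(flashed_fraction(10.0), 4))  # 0.0185
```

This small per-droplet yield is exactly the argument for ultra-fine, high-count sprays such as those a rotary bell atomizer could deliver.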
Procedia PDF Downloads 84
839 Microplastics in the Seine River Catchment: Results and Lessons from a Pluriannual Research Programme
Authors: Bruno Tassin, Robin Treilles, Cleo Stratmann, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Rachid Dris, Johnny Gasperi
Abstract:
Microplastics (<5 mm) in the environment and in hydrosystems are one of the major environmental issues of our time. Over the last five years, a research programme was conducted to assess the behavior of microplastics in the Seine river catchment, in a Man-Land-Sea continuum approach. Results show that microplastic concentration varies at the seasonal scale, but also at much smaller scales, for instance during flood events and with tides in the estuary. Moreover, microplastic sampling and characterization issues emerged throughout this work. The Seine is a 750 km long river flowing through northwestern France. It crosses the Paris megacity (12 million inhabitants) and reaches the English Channel after a 170 km long estuary. This site is highly relevant for assessing the effects of anthropogenic pollution, as the mean river flow is low (around 350 m³/s) while the human presence and activities are very intense. Monthly monitoring of the microplastic concentration took place over a 19-month period and showed significant temporal variations at all sampling stations but no significant upstream-downstream increase, indicating a possible major sink to the sediment. At the scale of a major flood event (winter and spring 2018), the microplastic concentration evolved similarly to the well-known suspended solids concentration, increasing with the rising flow and decreasing as the flow receded. Assessing the position of the concentration peak relative to the flow peak was unfortunately impossible. In the estuary, concentrations vary with time in connection with tidal movements, and in the water column in relation to salinity and turbidity. 
Although major gains in knowledge of microplastic dynamics in the Seine river have been obtained over the last years, major gaps remain, mostly concerning the interaction with the dynamics of suspended solids, the settling processes in the water column, and resuspension by navigation or by increases in shear stress. Moreover, the development of efficient chemical characterization techniques during the five-year research programme led to improvements in the sampling techniques, giving access to smaller microplastics (down to 10 µm) as well as larger but rarer ones (>500 µm).
Keywords: microplastics, Paris megacity, Seine river, suspended solids
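An order-of-magnitude illustration of why river monitoring of this kind matters: a measured concentration at a cross-section converts directly into a daily particle flux when multiplied by the discharge. The mean flow of ~350 m³/s comes from the abstract; the concentration of 1 item/m³ is a placeholder assumption, not a result from the programme.

```python
# Concentration x discharge = load; a simple flux conversion sketch.
def load_items_per_day(conc_items_per_m3: float,
                       discharge_m3_per_s: float) -> float:
    """Daily microplastic flux through a river cross-section."""
    return conc_items_per_m3 * discharge_m3_per_s * 86_400  # seconds/day

# Mean Seine flow ~350 m3/s (from the abstract); assumed 1 item/m3.
print(f"{load_items_per_day(1.0, 350.0):.2e} items/day")
```

Even this modest assumed concentration yields tens of millions of particles per day, which is why temporal variability (floods, tides) dominates any single-sample estimate.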
Procedia PDF Downloads 199
838 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States
Authors: Tamanna Rimi
Abstract:
A large body of literature has discussed the determinants of the migration decision; however, the potential role of individual risk attitudes in that decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. Yet migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. So far, however, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. Since the efficiency of the labor market and the overall economy is strongly influenced by migration in many countries, attitudes towards risk as a determinant of migration deserve more attention in empirical studies. To the author’s best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the United States, where movement across the United States is considered a form of migration. In addition, the paper explores the network effect of the increasing size of one’s own ethnic group in a source location on the migration decision, and how the influence of attitudes towards risk varies with that network effect; two ethnic groups (Asian and Hispanic) are considered in this regard. For the empirical estimation, the paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS) and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, the study uses the Two-Sample Two-Stage Instrumental Variable (TS2SIV) technique. 
This is similar to the Two-Sample Instrumental Variable (TSIV) technique of Angrist (1990) and Angrist and Krueger (1992). Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics, such as age and sex; (iii) people living in places with a higher concentration of households of their own ethnicity are less likely to migrate from their current place; and (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings relating risk attitude, the migration decision, and the network effect are a significant contribution towards closing the gap between migration theory and empirical study in the migration literature.
Keywords: migration, network effect, risk attitude, U.S. market
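The probit specification behind result (i) can be sketched as follows: the migration probability is the standard normal CDF of a linear index, and a negative coefficient on risk aversion makes more risk-averse people less likely to migrate. The coefficients below are invented for illustration; the study estimates its parameters via TS2SIV, not with these values.

```python
import math

# Probit sketch: P(migrate) = Phi(beta0 + beta_risk * risk_aversion).
# Coefficients are illustrative assumptions, not estimates from the paper.
def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def migrate_probability(risk_aversion: float, beta0: float = 0.2,
                        beta_risk: float = -0.5) -> float:
    return norm_cdf(beta0 + beta_risk * risk_aversion)

# With beta_risk < 0, higher risk aversion lowers the predicted
# probability of migrating, as in the study's result (i).
print(migrate_probability(0.5) > migrate_probability(2.0))  # True
```

Demographic interactions (result ii) and the network effect (results iii-iv) would enter the same linear index as additional regressors and interaction terms.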
Procedia PDF Downloads 164
837 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection
Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng
Abstract:
Nitrite (NO2-) ion is used prevalently as a preservative in processed meat, and elevated levels of nitrite are also found in edible bird’s nests (EBNs). Consumption of NO2- ion at levels above the health-based risk may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC, and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring nitrite concentration in situ, rapidly, and without reagents, sample pretreatment, or extraction steps. Herein, we constructed a microspheres-based microbial optode for the visual quantitation of NO2- ion. Raoultella planticola, a bacterium expressing NAD(P)H nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. As the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction that raises the local pH in the microspheres membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion at 0.1 ppm and had a linear response up to 400 ppm. 
Owing to the large surface-area-to-mass ratio of the acrylic microspheres, the optode allows efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase and achieves a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pretreatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples.
Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric
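Converting the optode's reflectance reading at 639 nm into a nitrite concentration amounts to inverting a calibration curve over the linear range reported above (0.1-400 ppm). The linear form and the slope/intercept values in this sketch are hypothetical fit parameters for illustration, not those of the paper.

```python
# Calibration-inversion sketch for the bio-optode's linear range.
# Slope and intercept are hypothetical fit parameters, not the paper's.
def nitrite_ppm(reflectance: float, slope: float = -0.002,
                intercept: float = 0.9) -> float:
    """Invert a linear calibration R = intercept + slope * C."""
    c = (reflectance - intercept) / slope
    if not 0.1 <= c <= 400:
        raise ValueError("reading falls outside the optode's linear range")
    return c

print(nitrite_ppm(0.5))  # 200.0 ppm under the assumed calibration
```

In practice the slope and intercept would be fitted from standard nitrite solutions read on the same reflectance spectrophotometer.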
Procedia PDF Downloads 448