Search results for: efficient frontier
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5069


719 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework

Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari

Abstract:

The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs alongside integration strategies for BIM and IoT technologies, enabling real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are meticulously documented and shared within the healthcare and facility management communities. 
This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management, it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
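The anomaly-detection component mentioned above can be illustrated with a minimal sketch (not the authors' implementation; the window size, threshold, and simulated sensor values are arbitrary assumptions), flagging IoT sensor readings that drift far from their recent history:

```python
import statistics

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the mean of the trailing window."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Simulated HVAC temperature stream with one fault spike at index 45
stream = [21.0 + 0.1 * (i % 5) for i in range(60)]
stream[45] = 35.0
print(detect_anomalies(stream))  # → [45]
```

A production framework would replace this rolling z-score with the trained models the paper describes, but the input/output shape (a stream of readings in, a set of flagged events out) is the same.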

Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency

Procedia PDF Downloads 56
718 Effects of Lime and N100 on the Growth and Phytoextraction Capability of a Willow Variety (S. Viminalis × S. Schwerinii × S. Dasyclados) Grown in Contaminated Soils

Authors: Mir Md. Abdus Salam, Muhammad Mohsin, Pertti Pulkkinen, Paavo Pelkonen, Ari Pappinen

Abstract:

Soil and water pollution caused by extensive mining practices can adversely affect environmental components, such as humans, animals, and plants. Despite a generally positive contribution to society, mining practices have become a serious threat to biological systems. As metals do not degrade completely, they require immobilization, toxicity reduction, or removal. A greenhouse experiment was conducted to evaluate the effects of lime and N100 (11-amino-1-hydroxyundecylidene) chelate amendment on the growth and phytoextraction potential of the willow variety Klara (S. viminalis × S. schwerinii × S. dasyclados) grown in soils heavily contaminated with copper (Cu). The plants were irrigated with tap or processed water (mine wastewater). The sequential extraction technique and inductively coupled plasma-mass spectrometry (ICP-MS) were used to determine the extractable metals and to evaluate the fraction of metals in the soil potentially available for plant uptake. The results suggest that the combined effects of the contaminated soil and processed water inhibited growth. In contrast, the accumulation of Cu in the plant tissues increased compared to the control. When the soil was supplemented with lime and N100, growth parameters and resistance capacity were significantly higher than in the unamended soil treatments, especially in the contaminated soils. The combined lime- and N100-amended treatment produced higher biomass growth rates, resistance capacity, and phytoextraction efficiency than either the lime-amended or the N100-amended treatment alone. This study provides practical evidence of the efficient chelate-assisted phytoextraction capability of Klara and highlights its potential as a viable and inexpensive novel approach for in-situ remediation of Cu-contaminated soils and mine wastewaters.
Abandoned agricultural, industrial and mining sites can also be utilized by a Salix afforestation program without conflict with the production of food crops. This kind of program may create opportunities for bioenergy production and economic development, but contamination levels should be examined before bioenergy products are used.

Keywords: copper, Klara, lime, N100, phytoextraction

Procedia PDF Downloads 145
717 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important to ensure energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content in biogas, e.g. chiller/heat exchanger based cooling, the use of adsorbents such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with the numerical modelling of biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysics model development, numerical calculations, and result visualization. An experimental installation of a biogas plant's vertical wall with two additional layers of polycarbonate sheets and controlled gas flow was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
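As a rough first-order illustration of the counterflow arrangement described above, a sketch using the standard effectiveness-NTU relation (this is not the authors' MATLAB model; the NTU and capacity-ratio values are assumed):

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of an ideal counterflow heat exchanger (epsilon-NTU
    method); cr = C_min / C_max is the heat-capacity-rate ratio."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)  # balanced-flow limit
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Biogas leaving the digester at ~38 C against winter ambient at 0 C
t_hot_in, t_cold_in = 38.0, 0.0
eps = counterflow_effectiveness(ntu=2.0, cr=1.0)  # assumed balanced flows
t_hot_out = t_hot_in - eps * (t_hot_in - t_cold_in)
print(f"effectiveness = {eps:.2f}, gas outlet = {t_hot_out:.1f} C")
```

Cooling the gas toward the outlet temperature predicted this way is what drives the condensation and drainage described in the abstract; the full model additionally couples latent heat release from condensation.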

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 547
716 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany's total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for answering these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially in support of decision and policy makers. LCA and LCC results are based on models which depend on technical parameters such as efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or studied only superficially; yet the effect of parameter uncertainties cannot be neglected. In the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow a first rough view of the results but do not take effects such as error propagation into account. Thereby LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and to provide better decision support.
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation, in terms of resilience and robustness, is shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods to allow a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
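The Monte Carlo approach described above can be sketched as follows. This is an illustrative toy model, not the study's actual LCA/LCC data; the impact function and all distribution parameters are invented for demonstration:

```python
import random
import statistics

def lifecycle_impact(insulation_kg, impact_per_kg, service_life_yr, annual_saving):
    """Net lifecycle impact of an insulated wall (illustrative toy model):
    production burden minus heating emissions avoided over the service life."""
    return insulation_kg * impact_per_kg - service_life_yr * annual_saving

random.seed(42)  # reproducible sampling
samples = [
    lifecycle_impact(
        insulation_kg=random.gauss(120, 12),     # material demand, kg
        impact_per_kg=random.gauss(3.5, 0.7),    # production burden, kg CO2e/kg
        service_life_yr=random.uniform(30, 50),  # refurbishment cycle, yr
        annual_saving=random.gauss(15, 3),       # avoided emissions, kg CO2e/yr
    )
    for _ in range(10_000)
]
print(f"mean = {statistics.fmean(samples):.0f} kg CO2e, "
      f"stdev = {statistics.pstdev(samples):.0f} kg CO2e")
```

Unlike a best-case/worst-case pair, the resulting sample distribution captures error propagation through the model and supports statements about the probability that the wall is a net benefit.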

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 285
715 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is only calibrated during plant turnarounds, it has become prevalent in the gas industry. Inaccuracy of measurement at fiscal metering stations is arguably the most important factor behind mischarges in the natural gas industry in Libya. A very trivial error in measurement can add up to a fast-escalating financial burden in custody transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at multiple millions of dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications. Getting deeper into the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are the greatest advantages of CFD. In this paper, the flow of air passing through an orifice meter was numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with those estimated by ISO 5167.
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity, and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to variation in the plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are considered an ideal arrangement. Also, in general, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
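For context, the general ISO 5167 orifice equation relates mass flow to the discharge coefficient; a minimal sketch follows (the Cd value and flow conditions are assumed for illustration, and the full Reader-Harris/Gallagher correlation for Cd is not reproduced here):

```python
import math

def orifice_mass_flow(cd, beta, d_orifice, dp, rho, epsilon=1.0):
    """General ISO 5167 orifice equation:
    qm = Cd / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho),
    where eps is the expansibility factor (1.0 for incompressible flow)."""
    return (cd / math.sqrt(1.0 - beta ** 4)
            * epsilon * math.pi / 4.0 * d_orifice ** 2
            * math.sqrt(2.0 * dp * rho))

# 2 in pipe with beta = 0.5 -> 1 in bore; air at ~1.2 kg/m3, 2.5 kPa drop
d = 0.5 * 2.0 * 0.0254  # orifice bore in metres
qm = orifice_mass_flow(cd=0.61, beta=0.5, d_orifice=d, dp=2500.0, rho=1.2)
print(f"mass flow = {qm * 1000:.1f} g/s")
```

Because qm depends linearly on Cd, any plate-specification effect on Cd that the standard correlation misses translates directly into a proportional fiscal metering error.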

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 118
714 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an e-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices labeled as Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas.
In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 78
713 Impinging Acoustics Induced Combustion: An Alternative Technique to Prevent Thermoacoustic Instabilities

Authors: Sayantan Saha, Sambit Supriya Dash, Vinayak Malhotra

Abstract:

Efficient propulsive system development is an area of major interest and concern in the aerospace industry. Combustion forms the most reliable and basic form of propulsion for ground and space applications. The generation of a large amount of energy from a small volume relates mostly to flaming combustion, and this study deals with the instabilities associated with it. Combustion is always accompanied by acoustics, be it external or internal. Chemical-propulsion-oriented rockets and space systems are well known to encounter acoustic instabilities. Acoustics brings changes in inter-energy conversion and alters the reaction rates. Modified heat fluxes, owing to wall temperature, reaction rates, and non-linear heat transfer, are observed. Thermoacoustic instabilities significantly reduce combustion efficiency, leading to uncontrolled liquid rocket engine performance and serious hazards to systems and associated testing facilities; enormous resources are lost, and every year a substantial amount of money is spent to prevent them. The present work attempts to fundamentally understand the mechanisms governing thermoacoustic combustion in liquid rocket engines using a simplified experimental setup comprising a butane cylinder and an impinging acoustic source. A rocket engine produces sound pressure levels in excess of 153 dB; the RL-10 engine generates noise of 180 dB at its base. Systematic studies are carried out for varying fuel flow rates and acoustic levels, and observations are made on the flames. The work is expected to yield good physical insight into the development of acoustic devices that, when coupled with present propulsive devices, could effectively enhance combustion efficiency, leading to better and safer missions.
The results would be utilized to develop impinging acoustic devices that impinge sound on the combustion chambers, leading to stable combustion and thus improving specific fuel consumption and specific impulse, reducing emissions, and enhancing performance and fire safety. The results can be effectively applied to terrestrial and space applications.
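For reference, the sound pressure levels quoted above can be converted to acoustic pressures with the standard definition SPL = 20·log10(p/p_ref), where p_ref = 20 µPa in air; a small sketch:

```python
P_REF = 20e-6  # reference RMS sound pressure in air, Pa

def spl_to_pressure(spl_db):
    """RMS acoustic pressure for a given sound pressure level in dB."""
    return P_REF * 10.0 ** (spl_db / 20.0)

for level in (153.0, 180.0):  # engine noise levels cited in the abstract
    print(f"{level:.0f} dB -> {spl_to_pressure(level):.0f} Pa rms")
```

The 180 dB figure corresponds to roughly 20 kPa of RMS acoustic pressure, which makes clear why such oscillations can couple strongly with heat release and damage hardware.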

Keywords: combustion instability, fire safety, improved performance, liquid rocket engines, thermoacoustics

Procedia PDF Downloads 141
712 The Effect of Mesenchymal Stem Cells on Full Thickness Skin Wound Healing in Albino Rats

Authors: Abir O. El Sadik

Abstract:

Introduction: Wound healing involves the interaction of multiple biological processes among different types of cells, the intercellular matrix, and specific signaling factors, producing enhanced cell proliferation of the epidermis over dermal granulation tissue. Several studies have investigated multiple strategies to promote wound healing and to minimize infection and fluid losses. However, burn crises and their related morbidity and mortality are still elevated. The aim of the present study was to examine the effects of mesenchymal stem cells (MSCs) in accelerating wound healing and to compare the most efficient route of administration of MSCs, either intradermal or systemic injection, focusing on the mechanisms producing epidermal and dermal cell regeneration. Material and methods: Forty-two adult male Sprague Dawley albino rats were divided into three equal groups (fourteen rats in each group): group I (control): full-thickness surgical skin wound model; group II: wound treated with systemic injection of MSCs; and group III: wound treated with intradermal injection of MSCs. The healing ulcer was examined on days 2, 6, 10, and 15 for gross morphological evaluation and on days 10 and 15 for fluorescent, histological, and immunohistochemical studies. Results: The wounds of the control group did not reach complete closure by the end of the experiment. In the MSC-treated groups, better and faster wound healing was detected than in the control group. Moreover, the intradermal route of administration increased the rate of wound healing more than systemic injection. In addition, the wounds were found completely healed by the end of the fifteenth day of the experiment in all rats of the intradermally injected group. Microscopically, the wound areas of group III were hardly distinguishable from the adjacent normal skin, with complete regeneration of all skin layers: epidermis, dermis, hypodermis, and underlying muscle layer.
Fully regenerated hair follicles and sebaceous glands in the dermis of the healed areas, surrounded by different arrangements of collagen fibers with a significant increase in their area percentage, were recorded in this group more than in the other groups. Conclusion: MSCs accelerate the healing process of wound closure. The route of administration of MSCs has a great influence on wound healing, as intradermal injection of MSCs was more effective in enhancing wound healing than systemic injection.

Keywords: intradermal, mesenchymal stem cells, morphology, skin wound, systemic injection

Procedia PDF Downloads 201
711 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. We have been studying the usefulness of Picture Mining, one of the new methods for such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with the methods of text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speed, low costs for information transfer, and the high performance and resolution of mobile phone cameras have changed the photographing behavior of people. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the most promising fields for Picture Mining as follows: 1) research on consumer and customer lifestyles; 2) new product development; 3) research in fashion and design.
Though we have found that it will be useful in these fields and areas, we must verify these assumptions. In this study we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion and upload them online, for example to Facebook and Instagram, compared to meals and food, because of the difficulty of taking them. We conclude that pictures in the fashion area should be analyzed more carefully, for some bias may still exist even though the environment for pictures has drastically changed in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 362
710 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force.
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, varying in shape from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
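Tikhonov-regularized deconvolution of the kind used here can be sketched as follows. This is a generic illustration with synthetic data, not the authors' panel measurements; the impulse response, noise level, and regularization parameter are all assumed:

```python
import numpy as np

def tikhonov_deconvolve(h, y, lam):
    """Recover a force history f from a response y = H f, where H is the
    lower-triangular convolution (Toeplitz) matrix built from impulse
    response h.  Tikhonov solution: f = (H^T H + lam^2 I)^(-1) H^T y."""
    n = len(y)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(h):
                H[i, j] = h[i - j]
    return np.linalg.solve(H.T @ H + lam ** 2 * np.eye(n), H.T @ y)

# Synthetic example: half-sine impact force, decaying oscillatory response
t = np.linspace(0.0, 1.0, 100)
f_true = np.where(t < 0.2, np.sin(np.pi * t / 0.2), 0.0)
h = np.exp(-5.0 * t) * np.cos(30.0 * t)
y = np.convolve(h, f_true)[:100]
y += np.random.default_rng(0).normal(0.0, 0.05, 100)  # measurement noise
f_rec = tikhonov_deconvolve(h, y, lam=0.1)
print("correlation with true force:", round(np.corrcoef(f_true, f_rec)[0, 1], 3))
```

The penalty term lam^2 I damps the small singular values that would otherwise amplify the measurement noise; choosing lam is exactly the role of the L-curve and GCV methods mentioned above.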

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 533
709 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in shaping the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training time. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability on the prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The classifier effectively resolves outlier effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
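The kernel-based fuzzy membership idea described above (membership as a function of each sample's distance to its class centre in feature space) can be sketched as follows. This is an illustrative reconstruction, not the authors' PFRSVM code; the gamma and delta values and the random data are assumed:

```python
import numpy as np

def tanh_kernel(a, b, gamma=0.5, coef0=0.0):
    """Hyperbolic tangent (sigmoid) kernel: K(a, b) = tanh(gamma * a.b + coef0)."""
    return np.tanh(gamma * (a @ b.T) + coef0)

def fuzzy_memberships(X_class, gamma=0.5, delta=1e-6):
    """Membership of each sample in its own class, from its kernel-space
    distance to the class centre (the mean in feature space):
    ||phi(x) - phi(c)||^2 = K(x, x) - 2 * mean_i K(x, x_i) + mean_ij K(x_i, x_j).
    Samples near the centre get membership close to 1, the farthest near 0."""
    K = tanh_kernel(X_class, X_class, gamma)
    d2 = np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()
    radius2 = d2.max()  # squared class radius in feature space
    return 1.0 - d2 / (radius2 + delta)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (50, 3))  # one class of 50 samples, 3 features
m = fuzzy_memberships(X)
print("membership range:", round(float(m.min()), 3), "-", round(float(m.max()), 3))
```

In a fuzzy SVM these memberships would then weight each sample's slack variable in training, so that noisy outliers (low membership) pull the decision surface less than core samples.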

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 489
708 Setting up a Prototype for the Artificial Interactive Reality Unified System to Transform Psychosocial Intervention in Occupational Therapy

Authors: Tsang K. L. V., Lewis L. A., Griffith S., Tucker P.

Abstract:

Background:  Many children with high incidence disabilities, such as autism spectrum disorder (ASD), struggle to participate in the community in a socially acceptable manner. There are limitations for clinical settings to provide natural, real-life scenarios for them to practice the life skills needed to meet their real-life challenges. Virtual reality (VR) offers potential solutions to resolve the existing limitations faced by clinicians to create simulated natural environments for their clients to generalize the facilitated skills. Research design: The research aimed to develop a prototype of an interactive VR system to provide realistic and immersive environments for clients to practice skills. The descriptive qualitative methodology is employed to design and develop the Artificial Interactive Reality Unified System (AIRUS) prototype, which provided insights on how to use advanced VR technology to create simulated real-life social scenarios and enable users to interact with the objects and people inside the virtual environment using natural eye-gazes, hand and body movements. The eye tracking (e.g., selective or joint attention), hand- or body-tracking (e.g., repetitive stimming or fidgeting), and facial tracking (e.g., emotion recognition) functions allowed behavioral data to be captured and managed in the AIRUS architecture. Impact of project: Instead of using external controllers or sensors, hand tracking software enabled the users to interact naturally with the simulated environment using daily life behavior such as handshaking and waving to control and interact with the virtual objects and people. The AIRUS protocol offers opportunities for breakthroughs in future VR-based psychosocial assessment and intervention in occupational therapy. Implications for future projects: AI technology can allow more efficient data capturing and interpretation of object identification and human facial emotion recognition at any given moment. 
The data points captured can be used to pinpoint our users’ focus and where their interests lie. AI can further help advance the data interpretation system.

Keywords: occupational therapy, psychosocial assessment and intervention, simulated interactive environment, virtual reality

Procedia PDF Downloads 34
707 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications

Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque

Abstract:

Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR), despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help stakeholders better understand energy efficiency parameters in order to obtain reliable ratings for the parts of a building. To evaluate this hypothesis, the methodology included the construction of a software prototype. Current energy certification systems do not follow an intuitive data-entry system, nor do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, because current interfaces are neither friendly nor intuitive, untrained users usually get a poor idea of the grounds for certification and of how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building, through a new visual mode. The software also helps the user evaluate whether an investment to improve the materials of an installation is worth the cost, in terms of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing the energy rating of the different elements of the building in a more intuitive and simple manner. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters.
Working in real time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that must be entered to calculate the final certification. The prototype also allows visualizing the building in efficiency mode, which lets users move through the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.

Keywords: energetic certification, virtual reality, augmented reality, sustainability

Procedia PDF Downloads 186
706 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improvement in the performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other relating to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, and each of the risks has been scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the presence of a risk management committee, each scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies for firm performance. The control variables used in the model include age of firm, growth rate of firm, and size of firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate that there is a negative association between the risk index and returns.
These results not only validate the framework used to develop the risk index but also provide a yardstick for PSUs to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even in terms of mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and make adherence to the normative frameworks put forth in the study mandatory, PSUs may have more effective and efficient decision-making, lower risks, and hassle-free management, all ultimately leading to better ROA and ROE.
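As a rough illustration of how such scorecard-style indices aggregate, the sketch below averages component scores on the 1-to-5 scale described above into a single index value per firm-year. The component names and the equal weighting are assumptions for illustration, not the authors' exact specification.

```python
# Scorecard-style composite index: nine risk components, each scored
# 1 (low risk) to 5 (high risk), averaged into one index per firm-year.
# Component names and the equal weighting are illustrative assumptions.

def composite_index(scores):
    """Average of component scores, each on a 1-5 scale."""
    for name, s in scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{name} score {s} is outside the 1-5 scale")
    return sum(scores.values()) / len(scores)

risk_scores = {  # hypothetical PSU firm-year
    "solvency": 2, "liquidity": 3, "accounting": 1,
    "market": 4, "operational": 2, "credit": 3,
    "forex": 2, "interest_rate": 3, "business": 2,
}
print(round(composite_index(risk_scores), 2))  # 2.44
```

The governance index would be built the same way over its eleven variables; a weighted average (or principal components) could replace the equal weights if some components matter more.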

Keywords: PSU, risk governance, diff-GMM, firm performance, the risk index

Procedia PDF Downloads 157
705 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application

Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli

Abstract:

Considering the severity of water scarcity around the world, which is widening at an alarming rate, practical improvements in desalination techniques need to be engineered at the earliest opportunity. Atomization is a major aspect of the flashing phenomenon, yet it has received little attention until now. There is a need to test efficient methods of atomization for the flashing process. Flash evaporation, alongside reverse osmosis, is a commercially mature desalination technique, commonly known as multi-stage flash (MSF). Even though reverse osmosis is widely practiced, it is not as economical or sustainable as flash evaporation. However, flash evaporation has its drawbacks as well, such as lower water production per unit of power and time consumed. Flash evaporation is simply the instant boiling of a subcooled liquid that is introduced as droplets into a well-maintained low-pressure environment. The negative pressure inside the vacuum chamber lowers the boiling point far below the temperature of the liquid droplets; the superheated droplets release their heat as latent heat of vaporization and turn into vapor, which is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation. Atomization is the heart of the flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) per given flow rate. This review comprehensively summarizes selected results relating to methods of atomization and their effect on the evaporation rate, from earlier works to date.
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, viz., ultra-fine droplets, uniform droplet density, and a swirling spray geometry with kinetically more energetic sprays during their flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail. A schematic of rotary bell atomizer (RBA) integration with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be particularly advantageous.

Keywords: atomization, desalination, flash evaporation, rotary bell atomizer

Procedia PDF Downloads 83
704 Microplastics in the Seine River Catchment: Results and Lessons from a Pluriannual Research Programme

Authors: Bruno Tassin, Robin Treilles, Cleo Stratmann, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Rachid Dris, Johnny Gasperi

Abstract:

Microplastics (<5 mm) in the environment and in hydrosystems are one of the major environmental issues of the present day. Over the last five years, a research programme was conducted to assess the behavior of microplastics in the Seine river catchment, in a Man-Land-Sea continuum approach. Results show that microplastic concentration varies at the seasonal scale, but also at much smaller scales, during flood events and with tides in the estuary, for instance. Moreover, microplastic sampling and characterization issues emerged throughout this work. The Seine is a 750 km long river flowing through northwestern France. It crosses the Paris megacity (12 million inhabitants) and reaches the English Channel after a 170 km long estuary. This site is very relevant for assessing the effect of anthropogenic pollution, as the mean river flow is low (around 350 m³/s) while the human presence and activities are very intense. Monthly monitoring of the microplastic concentration took place over a 19-month period and showed significant temporal variations at all sampling stations but no significant upstream-downstream increase, indicating a possible major sink to the sediment. At the scale of a major flood event (winter and spring 2018), microplastic concentration shows an evolution similar to that of the well-known suspended solids concentration, increasing during the rise of the flow and decreasing during its recession. Assessing the position of the concentration peak in relation to the flow peak was unfortunately impossible. In the estuary, concentrations vary with time in connection with tidal movements, and in the water column in relation to salinity and turbidity.
Although major gains in knowledge of microplastic dynamics in the Seine river have been obtained over the last years, major gaps remain, mostly concerning the interaction with the dynamics of suspended solids, the settling processes in the water column, and resuspension by navigation or shear stress increase. Moreover, the development of efficient chemical characterization techniques during the 5-year period of this pluriannual research programme led to the improvement of the sampling techniques in order to access smaller microplastics (>10 µm) as well as larger but rarer ones (>500 µm).

Keywords: microplastics, Paris megacity, Seine river, suspended solids

Procedia PDF Downloads 197
703 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States

Authors: Tamanna Rimi

Abstract:

A large body of literature has discussed the determinants of the migration decision. However, the potential role of individual risk attitudes in the migration decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. However, migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. The efficiency of the labor market and the overall economy is largely influenced by migration in many countries. Therefore, attitudes towards risk as a determinant of migration should get more attention in empirical studies. To the author's best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the US. This paper considers movement across the United States as a means of migration. In addition, this paper also explores the network effect, due to the increasing size of one's own ethnic group in a source location, on the migration decision, and how attitudes towards risk vary with the network effect. Two ethnic groups (i.e., Asian and Hispanic) have been considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS) and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the 'Two Sample Two-Stage Instrumental Variable (TS2SIV)' technique.
This is similar to Angrist's (1990) and Angrist and Krueger's (1992) 'Two Sample Instrumental Variable (TSIV)' technique. Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics, such as age and sex; (iii) people living in places with a higher concentration of households of the same ethnicity are expected to migrate less from their current place; (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings of this paper, relating risk attitude, the migration decision, and the network effect, can be a significant contribution addressing the gap between migration theory and empirical study in the migration literature.
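The probit specification used above maps a linear index of covariates through the standard normal CDF to a migration probability. The sketch below shows this mechanic with made-up coefficients whose signs follow the reported findings (greater risk aversion and a denser co-ethnic network both lower the probability of migrating); the coefficient values and the two-variable index are illustrative assumptions, not the estimated model.

```python
# Probit migration-choice sketch: P(migrate) = Phi(b0 + b1*RRA + b2*network).
# Coefficient values below are invented for illustration; only their signs
# follow the reported findings (risk aversion and co-ethnic network density
# both reduce the probability of migrating).
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def migrate_prob(risk_aversion, network_share,
                 b0=-0.2, b_risk=-0.8, b_network=-1.5):
    """Probit probability that an individual migrates."""
    return norm_cdf(b0 + b_risk * risk_aversion + b_network * network_share)

# A more risk-averse individual (same network) migrates with lower probability:
print(migrate_prob(0.2, 0.1) > migrate_prob(0.9, 0.1))  # True
```

In the actual study the coefficients would be estimated by maximum likelihood, with risk aversion instrumented via TS2SIV rather than observed directly.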

Keywords: migration, network effect, risk attitude, U.S. market

Procedia PDF Downloads 162
702 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection

Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng

Abstract:

Nitrite (NO₂⁻) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO₂⁻ ion at levels above the health-based risk threshold may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO₂⁻ ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC, and ion chromatography, which cannot give a real-time response. Therefore, there is a significant need for devices capable of measuring nitrite concentration in situ and rapidly, without reagents, sample pretreatment, or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of NO₂⁻ ion. Raoultella planticola, a bacterium expressing NAD(P)H nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. As the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)⁺, NO₂⁻ ion is reduced to ammonium hydroxide, and a colour change from blue to pink of the immobilized Nile Blue chromoionophore is perceived as a result of a deprotonation reaction increasing the local pH in the microspheres membrane. The microspheres-based optosensor response was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO₂⁻ ion at 0.1 ppm and had a linear response up to 400 ppm.
Due to the large surface-area-to-mass ratio of the acrylic microspheres, efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase is possible, and a steady-state response is achieved in as little as 5 min. The proposed optical microbial biosensor requires no sample pretreatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples.

Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric

Procedia PDF Downloads 445
701 Understanding the Benefits of Multiple-Use Water Systems (MUS) for Smallholder Farmers in the Rural Hills of Nepal

Authors: Raj Kumar G.C.

Abstract:

There are tremendous opportunities to maximize smallholder farmers' income from small-scale water resource development through micro irrigation and multiple-use water systems (MUS). MUS are an improved water management approach, developed and tested successfully by iDE, that pipes water to a community both for domestic use and for agriculture using efficient micro irrigation. Different MUS models address different landscape constraints, water demands, and users' preferences. MUS are complemented by micro irrigation kits, which were developed by iDE to enable farmers to grow high-value crops year-round and to use limited water resources efficiently. Over the last 15 years, iDE's promotion of the MUS approach has encouraged the government and other key stakeholders to invest in MUS for better planning of scarce water resources. Currently, about 60% of the cost of MUS construction is covered by the government and the community. Based on iDE's experience, a gravity-fed MUS costs approximately USD 125 per household to construct, and it can increase household income by USD 300 per year. A key element of the MUS approach is keeping farmers well linked to input supply systems and local produce collection centers, which helps ensure that farmers can produce a sufficient quantity of high-quality produce that earns a fair price. This process in turn creates an enabling environment for smallholders to invest in MUS and micro irrigation. Therefore, MUS should be seen as an integrated package of interventions (the end users, water sources, technologies, and the marketplace) that together enhance technical, financial, and institutional sustainability. Communities are trained to participate in sustainable water resource management as part of the MUS planning and construction process. The MUS approach is cost-effective, improves community governance of scarce water resources, helps smallholder farmers improve rural health and livelihoods, and promotes gender equity.
MUS systems are simple to maintain and communities are trained to ensure that they can undertake minor maintenance procedures themselves. All in all, the iDE Nepal MUS offers multiple benefits and represents a practical and sustainable model of the MUS approach. Moreover, there is a growing national consensus that rural water supply systems should be designed for multiple uses, acknowledging that substantial work remains in developing national-level and local capacity and policies for scale-up.

Keywords: multiple-use water systems, small-scale water resources, rural livelihoods, practical and sustainable model

Procedia PDF Downloads 288
700 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine

Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins

Abstract:

Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions call for low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce total cost of ownership (TCO) and emissions for public transport vehicles (e.g., in bus applications). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Although the technological development of diesel engines has brought major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels (i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG)) represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus applications. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models for the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without a zero-emission zone) and with different energy management strategies.
In addition to the emissions, the downsizing potential of the combustion engine has been analysed to minimize the powertrain TCO (pTCO) for plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine shows advantages with respect both to emissions and to pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.
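The map-driven (quasi-static) engine models mentioned above look up steady-state fuel or emission rates from a speed-torque map at each time step of a mission and integrate them. A minimal sketch of that mechanic, with an invented map and operating-point trace (the grid values, units, and trace are assumptions, not ORCA data):

```python
# Quasi-static (map-driven) engine model: fuel rate is looked up from a
# steady-state speed/torque map at each time step of a drive cycle, then
# integrated.  All grid values below are invented for illustration.
from bisect import bisect_left

SPEED = [1000, 2000, 3000]   # rpm
TORQUE = [50, 100, 150]      # Nm
FUEL = [                     # g/s at (speed, torque)
    [0.4, 0.9, 1.5],
    [0.8, 1.7, 2.9],
    [1.3, 2.7, 4.6],
]

def _interp(x, xs):
    """Return (lower index, weight) for linear interpolation, clamped."""
    if x <= xs[0]:
        return 0, 0.0
    if x >= xs[-1]:
        return len(xs) - 2, 1.0
    i = bisect_left(xs, x)
    return i - 1, (x - xs[i - 1]) / (xs[i] - xs[i - 1])

def fuel_rate(speed_rpm, torque_nm):
    """Bilinear lookup in the steady-state fuel map (g/s)."""
    i, u = _interp(speed_rpm, SPEED)
    j, v = _interp(torque_nm, TORQUE)
    return ((1 - u) * (1 - v) * FUEL[i][j] + (1 - u) * v * FUEL[i][j + 1]
            + u * (1 - v) * FUEL[i + 1][j] + u * v * FUEL[i + 1][j + 1])

# Integrate over a hypothetical 3-second operating-point trace (1 s steps):
cycle = [(1500, 75), (2500, 120), (2000, 100)]
total_g = sum(fuel_rate(s, t) for s, t in cycle)
print(round(total_g, 2))  # 5.47
```

An emissions map (e.g., NOx in g/s) would be handled identically, and an energy management strategy would choose the operating points fed into the lookup.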

Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG

Procedia PDF Downloads 146
699 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant

Authors: Elenice Maria Schons Silva, Andre Carlos Silva

Abstract:

The growing demand for raw materials has increased mining activity. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the minerals' surface physicochemical properties, and separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different plant species (pequi pulp, macauba nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as the benchmark. To find a feasible replacement for cornstarch, the flour and starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (scanning electron microscopy with energy-dispersive spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on mineral recoveries.
For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On the one hand, macauba pulp oil showed excellent results at all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second-best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressants, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). When compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower.

Keywords: collectors, depressants, flotation, mineral processing

Procedia PDF Downloads 151
698 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM. Ceramic materials are brittle, so there is a high possibility of cracks arising during fabrication or in-service loading. In addition, damage analysis is necessary for a safe and efficient design. FEM is a strong numerical tool for analyzing complicated problems and is therefore used to investigate the fracture behaviour of FGMs. Here, an accurate 9-node biquadratic quadrilateral graded element is proposed, in which the influence of the variation of material properties is considered at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The implementation of graded elements avoids the artificial sudden jump of material properties that occurs when traditional finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, the material variation is assumed to occur in directions perpendicular and parallel to the crack line. Two special functions, linear and exponential, have been utilized to model the material gradient, as they are the forms most discussed in the literature. Various crack lengths are also considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials is related to the breaking of material symmetry. For example, when the material gradation direction is normal to the crack line, even under mode I loading there exist coupled mode I and mode II fracture conditions, which originate from the shear induced in the model.
Therefore, proper modelling of the material variation is necessary to capture the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort in comparison with conventional homogeneous elements.
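The key idea of the graded element, evaluating the gradation law at each integration point instead of assigning one constant property per element, can be sketched as follows. The exponential law matches one of the two gradation forms mentioned above, but the moduli, element width, and gradation direction are illustrative assumptions.

```python
# Graded-element sketch: the gradation law E(x) is evaluated at each Gauss
# point of the element, so the stiffness integral sees the smooth variation
# instead of one constant modulus per element.  All numerical values below
# are illustrative (e.g., a ceramic-to-metal gradation across one element).
from math import exp, log, sqrt

E1, E2, L = 200e9, 70e9, 0.1   # moduli at x=0 and x=L (Pa), element width (m)
beta = log(E2 / E1) / L        # exponential gradation index

def E_exp(x):
    """Exponentially graded Young's modulus E(x) = E1 * exp(beta * x)."""
    return E1 * exp(beta * x)

# 2-point Gauss quadrature on [0, L]: sample the modulus at each Gauss point
gauss_pts = [-1 / sqrt(3), 1 / sqrt(3)]                 # parent coordinates
E_at_gauss = [E_exp(0.5 * L * (xi + 1.0)) for xi in gauss_pts]
# A conventional homogeneous element would instead use one value (e.g.,
# E_exp(L / 2)) at every integration point, discarding the gradient.
```

In the full method, these point-wise moduli enter the constitutive matrix inside the stiffness integral, which is then assembled exactly as for a standard 9-node element.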

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 172
697 Enhancing Photocatalytic Hydrogen Production: Modification of TiO₂ by Coupling with Semiconductor Nanoparticles

Authors: Saud Hamdan Alshammari

Abstract:

Photocatalytic water splitting to produce hydrogen (H₂) has received significant attention as an environmentally friendly technology. This process, which produces hydrogen from water and sunlight, represents a renewable energy source. Titanium dioxide (TiO₂) plays a critical role in photocatalytic hydrogen production due to its chemical stability, availability, and low cost. Nevertheless, TiO₂'s wide band gap (3.2 eV) limits its visible light absorption and can reduce its photocatalytic effectiveness. Coupling TiO₂ with other semiconductors is a strategy that can enhance TiO₂ by narrowing its band gap and improving visible light absorption. This paper studies the modification of TiO₂ by coupling it with another semiconductor, CdS nanoparticles, using a reflux reactor and an autoclave reactor that help form a core-shell structure. Characterization techniques, including TEM and UV-Vis spectroscopy, confirmed successful coating of TiO₂ on the CdS core, reduction of the band gap from 3.28 eV to 3.1 eV, and enhanced light absorption in the visible region. These modifications are attributed to the heterojunction structure between TiO₂ and CdS. The essential goal of this study is to improve TiO₂ for use in photocatalytic water splitting to enhance hydrogen production. The core-shell TiO₂@CdS nanoparticles exhibited promising results, due to band gap narrowing and improved light absorption. Future work will involve adding Pt as a co-catalyst, which is known to increase surface reaction activity by enhancing proton adsorption. Evaluation of the TiO₂@CdS@Pt catalyst will include performance assessments and hydrogen productivity tests, considering factors such as effective shapes and material ratios. Moreover, the study could be extended by investigating further modifications to the catalyst and presenting additional performance evaluations.
For instance, doping TiO₂ with metals such as nickel (Ni), iron (Fe), and cobalt (Co), or with non-metals such as nitrogen (N), carbon (C), and sulfur (S), could positively influence the catalyst by reducing the band gap, enhancing the separation of photogenerated electron-hole pairs, and increasing the surface area. Additionally, to further improve catalytic performance, examining different catalyst morphologies, such as nanorods, nanowires, and nanosheets, for hydrogen production could be highly beneficial. Optimizing photoreactor design for efficient photon delivery and illumination will further enhance the photocatalytic process. These strategies collectively aim to overcome current challenges and improve the efficiency of hydrogen production via photocatalysis.

Keywords: hydrogen production, photocatalytic, water splitting, semiconductor, nanoparticles

Procedia PDF Downloads 19
696 Reading Informational or Fictional Texts to Students: Choices and Perceptions of Preschool and Primary Grade Teachers

Authors: Anne-Marie Dionne

Abstract:

Teachers reading aloud to students is a practice that is well established in preschool and primary classrooms, and many benefits of this pedagogical activity have been highlighted in multiple studies. However, it has also been shown that teachers are not keen on choosing informational texts for their read-alouds; their selections are mainly fictional stories, mostly written in a single narrative, story-like structure. Considering that students soon have to read complex informational texts by themselves as they move from one grade to another, there is cause for concern, because those who do not benefit from early exposure to informational texts may lack knowledge of the informational text structures that they will encounter regularly in their reading. Students can be exposed to informational texts in different ways in classrooms. However, since the read-aloud is such a common and efficient practice in preschool and primary grades, it is important to examine more deeply the factors teachers take into account when selecting their readings for this important teaching activity. Moreover, it seems critical to know why teachers are not inclined to choose informational texts more often when reading aloud to their pupils. A group of 22 preschool or primary grade teachers participated in this study. The data collection was done through a survey and an individual semi-structured interview. The survey was conducted in order to get quantitative data on the read-aloud practices of teachers. As for the interviews, they were organized around three categories of questions (exploratory, analytical, opinion) regarding the process of selecting the texts for the read-aloud sessions. A statistical analysis was conducted on the data obtained from the survey.
As for the interviews, they were subjected to a content analysis aiming to classify the information collected in predetermined categories such as the reasons given to favor fictional texts over informative texts, the reasons given for avoiding informative texts for reading aloud, the perceptions of the challenges that the informative texts could bring when they are read aloud to students, and the perceived advantages that they would present if they were chosen more often for this activity. Results are showing variable factors that are guiding the teachers when they are making their selection of the texts to be read aloud. As for example, some of them are choosing solely fictional texts because of their convictions that these are more interesting for their students. They also perceive that the informational texts are not good choices because they are not suitable for pleasure reading. In that matter, results are pointing to some interesting elements. Many teachers perceive that read aloud of fictional or informational texts have different goals: fictional texts are read for pleasure and informational texts are read mostly for academic purposes. These results bring out the urgency for teachers to become aware of the numerous benefits that the reading aloud of each type of texts could bring to their students, especially the informational texts. The possible consequences of teachers’ perceptions will be discussed further in our presentation.

Keywords: fictional texts, informational texts, preschool or primary grade teachers, reading aloud

Procedia PDF Downloads 149
695 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges according to the customer's building or house. Charge determination significantly affects both PDAM income and customer costs, because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target, and a thorough customer survey in Surabaya is needed to update customer building data. However, surveys so far have been carried out by deploying officers to visit each PDAM customer one by one, which requires considerable effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The device is simple to use: it is mounted on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors combined with GNSS, but that approach is expensive. To overcome this problem, this research develops a low-cost mobile mapping system using webcam camera sensors together with GNSS and IMU sensors. The cameras used have a 3 MP specification, a resolution of 720, and a diagonal field of view of 78°. The principle of the system is to integrate four webcam camera sensors with GNSS and IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum mount that attaches it to the car's roof so it does not fall off while driving. The output data are analyzed with artificial intelligence to remove near-duplicate images (cosine similarity) and then classify building types. Data reduction eliminates similar images while retaining the image that displays the complete house, so that it can be processed for the subsequent building classification.
The AI method used is transfer learning, utilizing the pre-trained VGG-16 model. The similarity analysis showed that data reduction reached 50%. Georeferencing is then performed using the Google Maps API to obtain address information for the coordinates in the data. Finally, a spatial join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
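The abstract does not give the similarity threshold or the exact deduplication rule used. As a rough sketch, near-duplicate images can be dropped by comparing feature vectors (e.g. from VGG-16's penultimate layer) with cosine similarity; the 0.95 threshold below is an assumption for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def deduplicate(features, threshold=0.95):
    """Keep an image only if it is not too similar to any already-kept image.

    `features` is a list of per-image feature vectors; returns kept indices.
    """
    kept = []
    for idx, vec in enumerate(features):
        if all(cosine_similarity(vec, features[k]) < threshold for k in kept):
            kept.append(idx)
    return kept

# Two nearly identical vectors and one distinct vector:
print(deduplicate([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]))  # [0, 2]
```

In practice the feature vectors would come from a forward pass of the pre-trained VGG-16 with its classification head removed; the greedy keep-first rule above is one simple choice among several.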

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 81
694 Chongqing, a Megalopolis Disconnected with Its Rivers: An Assessment of Urban-Waterside Disconnect in a Chinese Megacity and Proposed Improvement Strategies, Chongqing City as a Case Study

Authors: Jaime E. Salazar Lagos

Abstract:

Chongqing, located in southwest China, is becoming one of the most significant cities in the world. Its urban territories and metropolitan areas hold one of the largest urban populations in China and are partitioned and shaped by two of the biggest and longest rivers on Earth, the Yangtze and the Jialing, making Chongqing a megalopolis intersected by rivers. Historically, Chongqing City enjoyed fundamental connections with its rivers; however, the city's current urban development has lost an effective integration of the riverbanks within the urban space and the structural dynamics of the city. There is therefore a critical lack of physical and urban space conjoined with the rivers, which diminishes Chongqing's economic, tourist, and environmental development. Using multi-scale satellite-map site verification, the study confirmed this hypothesized urban-waterside disconnect. The collected data demonstrate that the Chongqing urban zone, an area of 5,292 square kilometers with a waterfront of 203.4 kilometers, has only 23.49 kilometers (just 11.5%) with a high-quality physical and spatial urban-waterside connection. Compared with other metropolises around the world, this figure represents a significant lack of spatial development along the rivers, an issue that has not been successfully addressed in the last 10 years of urban development. On a macro scale, the study categorized the different kinds of relationships between the city and its riverbanks. These data were then used to create an urban-waterfront relationship map that can serve as a tool for future city-planning decisions and real estate development.
On a micro scale, the study identified three primary causes of the urban-waterside disconnect: extensive highways along the densest areas and the city center, large private real estate developments that do not provide adequate riverside access, and large industrial complexes that make almost no use of the riverside. Finally, among the suggested strategies, the study concludes that the most efficient and practical way to improve the situation is to follow Chongqing's historic master planning and create connective nodes at critical urban locations along the rivers, a strategy that has been used for centuries to handle the same urban-waterside relationship. Reviewing and implementing this strategy would allow the city to better connect with its rivers, reducing the various impacts of the disconnect and of urban transformation.
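The 11.5% figure follows directly from the reported lengths. In the sketch below, only the 23.49 km connected length and the 203.4 km total waterfront come from the study; the per-category split of the remaining frontage is purely hypothetical, chosen to illustrate how such a segment inventory could feed the urban-waterfront relationship map:

```python
# Hypothetical per-segment inventory: (category, length_km, well_connected)
segments = [
    ("highway frontage",     60.00, False),  # assumed split
    ("private real estate",  55.00, False),  # assumed split
    ("industrial complex",   64.91, False),  # assumed split
    ("public waterfront",    23.49, True),   # reported connected length
]

total = sum(length for _, length, _ in segments)
connected = sum(length for _, length, ok in segments if ok)
share = 100.0 * connected / total
print(f"{connected:.2f} km of {total:.1f} km well connected ({share:.1f}%)")
# prints: 23.49 km of 203.4 km well connected (11.5%)
```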

Keywords: Chongqing City, megalopolis, nodes, riverbanks disconnection, urban

Procedia PDF Downloads 224
693 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier, and biomass has the potential to accelerate its realization as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or "syngas", consisting chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of the pyro-gasification products is "tars". Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite, or biochar are used. The latter process favors the cracking of tars at temperatures close to pyro-gasification temperatures (~850 °C). Biomass pyro-gasification coupled with the water-gas shift is the most widely practiced route from biomass to hydrogen today. This work proposes an innovative scheme for this conversion route, in which all the pyro-gasification products, not only methane, undergo processes that aim to optimize hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 or H2O as gasifying agents. This was followed by a catalytic steam methane reforming (SMR) step, in which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) step with a Fe-based catalyst was added to optimize the H2 yield from CO. A fluidized bed reactor was used for cracking, and a fixed bed reactor for SMR and WGS. The gaseous products were analyzed continuously with a µ-GC (Fusion PN 074-594-P1F).
With biochar as bed material, more H2 was obtained with steam as the gasifying agent (32 mol% vs. 15 mol% with CO2 at 900 °C); CO and CH4 production was also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were therefore retained as efficient choices for the first step. Among all parameters tested, CH4 conversions approaching 100% were obtained in SMR with a Ni/γ-Al2O3 catalyst at 800 °C and a steam/methane ratio of 5, yielding about 45 mol% H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
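The SMR and WGS steps imply a simple stoichiometric ceiling on hydrogen: reforming yields 3 mol H2 per mol CH4 converted, and the shift adds up to 1 more per mol CO converted. A back-of-the-envelope sketch, with illustrative conversions and equilibrium and side reactions ignored:

```python
def h2_moles(n_ch4, x_smr, x_wgs):
    """Ideal H2 yield from SMR (CH4 + H2O -> CO + 3 H2)
    followed by WGS (CO + H2O -> CO2 + H2).

    x_smr: fractional CH4 conversion in reforming.
    x_wgs: fractional CO conversion in the shift.
    """
    n_converted = n_ch4 * x_smr      # moles of CH4 reformed
    n_h2_smr = 3.0 * n_converted     # H2 from reforming
    n_co = n_converted               # CO available for the shift
    n_h2_wgs = n_co * x_wgs          # extra H2 from the shift
    return n_h2_smr + n_h2_wgs

# Full conversion in both steps gives the theoretical maximum:
print(h2_moles(1.0, 1.0, 1.0))  # 4.0 mol H2 per mol CH4
```

This is only a mole balance; the reported 45 mol% H2 reflects the full gas composition (unreacted steam, CO, CO2, etc.), which a balance this simple does not capture.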

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 136
692 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automating procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in their attempts to automate processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, laboratory reports must be accurate and reliable. Zinc sulphate (ZnSO4) tablets are used to treat diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps containing mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets, and for each step in the process, formulae were entered into two spreadsheets to automate the calculations. Further checks were built into the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test comparisons of the mean values of the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors in calculation were minimized when the procedures were automated in quality control laboratories, and the assay procedure was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
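The two automated calculations can be sketched as follows. The molar masses and the 1:1 Zn-EDTA complexation are standard chemistry, but the exact USP formulae, blank corrections, and rounding rules are not reproduced in the abstract, so this is an illustrative simplification, not the validated spreadsheet logic; all function names and example figures are assumptions:

```python
ZN_ATOMIC_MASS = 65.38    # g/mol
ZNSO4_7H2O_MW = 287.54    # g/mol, zinc sulphate heptahydrate

def edta_molarity(zn_mass_g, titre_ml):
    """Standardize EDTA against a known mass of zinc (1:1 complexation)."""
    moles_zn = zn_mass_g / ZN_ATOMIC_MASS
    return moles_zn / (titre_ml / 1000.0)

def assay_percent(titre_ml, edta_m, sample_mg, label_mg_per_tablet, avg_tablet_mg):
    """Percent of label claim (as ZnSO4.7H2O) from a complexometric titration.

    sample_mg is the mass of powdered-tablet sample actually titrated;
    avg_tablet_mg is the mean tablet weight used to scale back to one tablet.
    """
    moles_zn = edta_m * titre_ml / 1000.0          # 1 mol EDTA binds 1 mol Zn
    mg_znso4 = moles_zn * ZNSO4_7H2O_MW * 1000.0   # g -> mg
    mg_per_tablet = mg_znso4 * avg_tablet_mg / sample_mg
    return 100.0 * mg_per_tablet / label_mg_per_tablet
```

Chaining the two, a 0.6538 g zinc standard consuming 100.0 mL of titrant gives an EDTA molarity of exactly 0.1 M, which then feeds directly into the assay calculation for each replicate.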

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 168
691 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch

Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón

Abstract:

Snack foods are usually classified as 'junk food' because they have little nutritional value. However, given the growing demand and market for third-generation (3G) snacks, which are inexpensive and easy to prepare, such products can serve as carriers of compounds with real nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps control metabolic problems and has anti-cancer properties in the colon. The active compound can be produced by chemically cross-linking starch with phosphate salts to obtain type 4 resistant starch (RS4). The reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile, low-cost procedure. Starch is the major ingredient in 3G snack manufacture, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). The aim of this research was therefore to develop a 3G snack with RS4 under previously determined optimal extrusion conditions from sorghum starch, and to characterize it sensorially, chemically, and structurally. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and adjusted to 28.5% moisture content. The sample was then processed in a single-screw extruder equipped with a rectangular die; the inlet, transport, and output temperatures were 60 °C, 134 °C, and 70 °C, respectively. The resulting pellets were expanded in a microwave oven, and the expansion index (EI), penetration force (PF), and sensory attributes were evaluated in the expanded pellets. The pellets were milled into flour, and the RS content, degree of substitution (DS), and phosphorus percentage (%P) were measured. Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM) analyses were performed to determine structural changes after the process.
The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003 and %P, 0.35 ± 0.15 [values permitted as food additives (< 4% P)]. An increase in gelatinization temperature after cross-linking of the starch was detected; the loss of granular structure and the vapor bubbles formed after expansion were observed by SEM; and a loss of crystallinity after the extrusion process was observed by X-ray diffraction. In summary, a 3G snack containing RS4 was obtained by extrusion technology, and sorghum starch proved efficient for 3G snack production.
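The expansion index is typically computed as the ratio of an expanded pellet dimension to the corresponding die dimension, averaged over replicates; the measurements and die width below are hypothetical, not the paper's data, and serve only to show the shape of the calculation behind a value such as EI = 5.66 ± 0.35:

```python
import statistics

def expansion_index(expanded_widths_mm, die_width_mm):
    """Per-replicate expansion index: expanded dimension over die dimension."""
    return [d / die_width_mm for d in expanded_widths_mm]

# Hypothetical replicate measurements (mm); the 2.0 mm die width is an assumption.
ei = expansion_index([11.0, 11.6, 11.3], 2.0)
print(f"EI = {statistics.mean(ei):.2f} ± {statistics.stdev(ei):.2f}")
# prints: EI = 5.65 ± 0.15
```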

Keywords: extrusion, resistant starch, snack (3G), Sorghum

Procedia PDF Downloads 309
690 Collaboration with Governmental Stakeholders in Positioning Reputation on Value

Authors: Zeynep Genel

Abstract:

The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that invest worldwide make efforts to adapt themselves to this concept and to promote the organization's name through the values that might become prominent. Stakeholder groups are considered the most important actors in determining reputation: even when the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perceptions exert a very strong influence on the ultimate reputation. It is foreseen that the alignment between the projected reputation and the perceived reputation, which is established through the communication experiences of stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how stakeholders perceptively position the organization. From this perspective, the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation-positioning efforts are thought to play a significant role in achieving the targeted reputation and in sustaining this value. Governmental stakeholders, which maintain intense communication with mass stakeholder groups, are among an organization's most influential stakeholders, chiefly because organizations of which governmental stakeholders hold a positive perception inspire more confidence in mass stakeholders.
At this point, organizations carrying out joint projects with governmental stakeholders, in line with a sustainable communication approach, come to the fore as organizations with strong reputations, whereas the reputation of organizations that fall behind in this regard, or that cannot achieve effectiveness in this respect, is thought to be perceived as weak. Similarly, social responsibility campaigns that involve governmental stakeholders and play an efficient role in strengthening reputation are thought to draw more attention. From this perspective, this study discusses the role and effect of governmental stakeholders in reputation positioning. In line with this objective, it aims to reveal the perspectives of seven governmental stakeholders on cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in light of results obtained from in-depth interviews with executives of different ministries. This study, which underscores the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in building a strong reputation, may provide a new perspective on measuring corporate reputation, as well as an important source for future studies in both academic and practical domains.

Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation

Procedia PDF Downloads 224