Search results for: optimization algorithms
502 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite
Authors: Georgios Koronis, Arlindo Silva
Abstract:
This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and the product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been implemented by numerous studies for green composites. However, these studies are limited and have mainly explored mass production processes. This work is expected to generate knowledge about composite manufacturing that can be used to design artifacts that are produced in low batches and tailored to niche markets. The goal is to reach greater consistency in performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the response function of the process (in various operating conditions) is modeled by the DoE. Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors. An analytical assessment is possible, though, with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence of these variables and how they affect the composite mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to validate the design of the experiment's setup. Finally, the results are validated by performing the optimum parameter set, as indicated by the DoE, in a final set of experiments. It is expected that, after a good agreement between the predicted and the verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites
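For readers unfamiliar with fractional factorial designs, the sketch below shows how the run set for a two-level, three-factor experiment could be enumerated in Python; the factor names, codings, and defining relation are illustrative assumptions, not the study's exact plan.

```python
from itertools import product

# Illustrative factor names for a 2-level, 3-factor design (assumptions,
# not the study's exact settings)
factors = ["flow_rate", "injection_point", "fiber_treatment"]

# Full 2^3 factorial: all 8 combinations of low (-1) / high (+1) levels
full = list(product([-1, 1], repeat=3))

# Half-fraction 2^(3-1) with defining relation I = ABC:
# keep only runs where the product of the three coded levels is +1
half = [run for run in full if run[0] * run[1] * run[2] == 1]

print("full factorial runs:", len(full))    # 8
for run in half:                            # the 4 runs of the fraction
    print(dict(zip(factors, run)))
```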
501 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When velocity is high enough, gas can entrain fluid and carry it to the surface, but as time passes, velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them and bring successful results. This research presents a technology to optimize the efficiency of foam to unload liquid by air compression. Two methods are used to explain the optimization: (i) mathematical formulas that show how density and critical velocity can be reduced when air is compressed into foaming agents, and how the relationship between flow rates and pressure increase boosts the bottom-hole pressure and raises the velocity so that liquid is lifted to the surface; (ii) experiments to test foam carryover capacity and stability as a function of time and surfactant concentration, in which three surfactants were probed: anionic sodium dodecyl sulfate (SDS), nonionic Triton 100, and cationic hexadecyltrimethylammonium bromide (HDTAB). The best foaming agents were injected to lift liquid loaded in a vertical well model consisting of steel tubing 2.5 cm in diameter and 390 cm high, covered by a transparent glass casing 5 cm in diameter and 450 cm high. The results show that, after injecting foaming agents, 75% of the loaded liquid was unloaded; the efficiency of the foaming agents in unloading liquid increased by a further 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values differed by about ±3%, which is acceptable. The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by paying attention to the type of surfactants (foaming agents) used, the concentration of surfactants, and the flow rates of the injected surfactants, and then compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
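The density effect described in (i) can be illustrated with the widely used Turner droplet form of the critical unloading velocity; the sketch below assumes field units (dyn/cm, lbm/ft³, ft/s) and illustrative property values, not the paper's data.

```python
def turner_critical_velocity(sigma_dyne_cm, rho_liq, rho_gas):
    """Turner droplet-model critical gas velocity (ft/s).

    sigma_dyne_cm : surface tension in dyn/cm
    rho_liq, rho_gas : liquid and gas densities in lbm/ft^3
    The coefficient 1.92 includes Turner's ~20% upward adjustment.
    """
    return 1.92 * (sigma_dyne_cm * (rho_liq - rho_gas)) ** 0.25 / rho_gas ** 0.5

# Foam lowers the effective liquid density and surface tension, which
# lowers the critical velocity the gas must exceed to keep liquid moving up.
print(turner_critical_velocity(60.0, 62.4, 0.9))   # plain water, illustrative values
print(turner_critical_velocity(25.0, 30.0, 0.9))   # foamed liquid, illustrative values
```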
500 Phase Optimized Ternary Alloy Material for Gas Turbines
Authors: Mayandi Ramanathan
Abstract:
Gas turbine blades see the most aggressive thermal stress conditions within the engine, due to turbine entry temperatures in the range of 1500 to 1600°C, but, in synchronization with other functional components, they must readily deliver efficient performance whilst incurring minimal overhaul and repair costs during a service life of up to 5 million flying miles. The blades rotate at very high rotation rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep. Over its service life, under high temperatures and loads, a blade will deform, lengthen, and rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades. The proposed advanced Ti alloy material needs a process that provides strategic orientation of metallic ordering, uniformity in composition, and high metallic strength. A 25% Ta/(Al+Ta) ratio ensures TaAl3 phase formation, whereas a 51% Al/(Al+Ti) ratio ensures formation of α-Ti3Al and γ-TiAl mixed phases, and the three-phase combination ensures minimal Al excess (~1.4% Al excess), unlike Ti-47Al-2Cr-2Nb, which has significant excess Al (~5% Al excess) that could affect the service life of turbine blades. This presentation will summarize the additive manufacturing and heat treatment process conditions used to fabricate a turbine blade with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. A summary of thermo-mechanical test results such as high temperature tensile strength, creep strain rate, thermal expansion coefficient, and fracture toughness will be presented. The improvement in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. Phase compositions will be quantified, and a summary of their correlation with creep strain rate will be presented.
Keywords: gas turbine, aerospace, specific strength, creep, high temperature materials, alloys, phase optimization
499 Environmental Risk Assessment of Mechanization Waste Collection Scheme in Tehran
Authors: Amin Padash, Javad Kazem Zadeh Khoiy, Hossein Vahidi
Abstract:
Purpose: The mechanization system for urban services was implemented in Tehran in 2004 to promote the collection of domestic waste. In 2010, in order to achieve the objectives of the urban services mechanization project (qualitative promotion, improvement of the urban living environment, sustainable development, and optimization of the collection systems for recyclable solid wastes as well as other dry and non-organic wastes, in conformity with modern urban management methods), the mechanized urban services contractors and recycling contractors were integrated to achieve better and more correct waste separation, following the success of dry-waste mechanization plans in most modern countries. The aim of this research is to analyze the environmental risk of the mechanization waste collection scheme in Tehran. Case Study: Tehran, the capital of Iran, with a population of 8.2 million people, occupies a land expanse of 730 km², which is 4% of the total area of the country. Tehran generated 2,788,912 tons (7,641 tons/day) of waste in 2008. Hospital waste generation in Tehran reaches 83 tons/day. Almost 87% of the total waste was disposed of in a landfill located in the Kahrizak region. This large amount of waste poses a significant challenge for the city. Methodology: To conduct the study, the methodology proposed in the standard Mil-St-88213 is used. This is an efficient method for examining a position in relation to various processes and actions. It is based on a military standard, developed to investigate and evaluate options, locate and identify strengths and weaknesses, and decide on the best strategy. Finding and Conclusion: In this study, the current status of the mechanized waste collection system and its possible effects on the environment were identified through a survey and the Mil-St-88213 assessment methodology, and then the best plan for action and mitigation of environmental risk has been proposed as an Environmental Management Plan (EMP).
Keywords: environmental risk assessment, mechanization waste collection scheme, Mil-St-88213
498 Lead-Free Inorganic Cesium Tin-Germanium Triiodide Perovskites for Photovoltaic Application
Authors: Seyedeh Mozhgan Seyed-Talebi, Javad Beheshtian
Abstract:
The toxicity of lead associated with the lifecycle of perovskite solar cells (PSCs) is a serious concern which may prove to be a major hurdle on the path toward their commercialization. The currently proposed lead-free PSCs, including low-toxicity Ag(I), Bi(III), Sb(III), Ti(IV), Ge(II), and Sn(II) cations, are still plagued by the critical issues of poor stability and low efficiency, mainly because of their chemical instability. In the present research, utilization of all-inorganic CsSnGeI3-based materials offers the advantages of enhancing the resistance of the device to degradation, reducing the cost of cells, and minimizing carrier recombination. The presence of the inorganic halide perovskite improves the photovoltaic parameters of PSCs via improved surface coverage and stability. The inverted structure of the simulated devices, modeled with a 1D simulator, the solar cell capacitance simulator (SCAPS) version 3308, involves a TCO/HTL/perovskite/ETL/Au contact layer stack. PEDOT:PSS, PCBM, and CsSnGeI3 are used as the hole transporting layer (HTL), electron transporting layer (ETL), and perovskite absorber layer in the inverted structure for the first time. Holes are injected from the highly stable and air-tolerant Sn0.5Ge0.5I3 perovskite composition into the HTL, and electrons from the perovskite into the ETL. Simulation results revealed a strong dependence of the power conversion efficiency (PCE) on the thickness and defect density of the perovskite layer. The effect of an increase in operating temperature from 300 K to 400 K on the performance of CsSnGeI3-based perovskite devices is also investigated. Comparison between the simulated CsSnGeI3-based PSCs and similar real, tested devices with spiro-OMeTAD as the HTL showed that the extraction of carriers at the interfaces of the perovskite absorber depends on the energy level mismatches between the perovskite and the HTL/ETL. We believe that the optimization results reported here represent a critical avenue for fabricating stable, low-cost, efficient, and eco-friendly all-inorganic Cs-Sn-Ge based lead-free perovskite devices.
Keywords: hole transporting layer, lead-free, perovskite solar cell, SCAPS-1D, Sn-Ge based
497 Automatic Moderation of Toxic Comments in the Face of Local Language Complexity in Senegal
Authors: Edouard Ngor Sarr, Abel Diatta, Serigne Mor Toure, Ousmane Sall, Lamine Faty
Abstract:
Thanks to Web 2.0, we are witnessing a form of democratization of the spoken word and an exponential increase in the number of users on the web, but also, and above all, the accumulation of a daily flow of content that is becoming, at times, uncontrollable. Added to this is the rise of a violent social fabric characterized by hateful and racial comments, insults, and other content that contravenes social rules and the platforms' terms of use. Consequently, managing and regulating this mass of new content is proving increasingly difficult, requiring substantial human, technical, and technological resources. Without regulation, and with the complicity of anonymity, this toxic content can pollute discussions and make these online spaces highly conducive to abuse, which very often has serious consequences for certain internet users, ranging from anxiety to depression, withdrawal, or suicide. The toxicity of a comment is defined as anything that is rude, disrespectful, or likely to cause someone to leave a discussion or to take violent action against a person or a community. Two levels of measures are needed to deal with this deleterious situation. The first measures are being taken by governments through draft laws with a dual objective: (i) to punish the perpetrators of these abuses and (ii) to make online platforms accountable for the mistakes made by their users. The second measure comes from the platforms themselves. By assessing the content left by users, they can set up filters to block and/or delete content, or decide to suspend the user in question for good. However, the speed of discussions and the volume of data involved mean that platforms are unable to properly monitor the moderation of content produced by Internet users. That is why they use human moderators, either through recruitment or outsourcing. Moderating comments on the web means assessing and monitoring users' comments on online platforms in order to strike the right balance between protection against abuse and users' freedom of expression. It makes it possible to determine which publications and users are allowed to remain online and which are deleted or suspended, how authorized publications are displayed, and what actions accompany content deletions. In this study, we look at the problem of automatic moderation of toxic comments for local African languages and, more specifically, for social network comments in Senegal. We review the state of the art, highlighting the different approaches, algorithms, and tools for moderating comments. We also study the issues and challenges of moderation in web ecosystems with lesser-known languages, such as local languages.
Keywords: moderation, local languages, Senegal, toxic comments
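As a rough illustration of the filtering step platforms could apply, the sketch below trains a character n-gram toxicity classifier on a tiny invented corpus; the labels, examples, and model choice are assumptions, not the authors' system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; a real system would use labeled comments in
# French, Wolof, and other languages used on Senegalese platforms.
comments = ["merci pour ton aide", "tu es nul et stupide",
            "tres bon article", "degage d'ici idiot"]
labels = [0, 1, 0, 1]   # 1 = toxic, 0 = acceptable

# Character n-grams tend to be more robust than word tokens for
# under-resourced languages with high spelling variation.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(comments, labels)
print(model.predict(["espece d'idiot"]))   # expected: [1]
```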
496 In-silico DFT Study, Molecular Docking, ADMET Predictions, and DMS of Isoxazolidine and Isoxazoline Analogs with Anticancer Properties
Authors: Moulay Driss Mellaoui, Khadija Zaki, Khalid Abbiche, Abdallah Imjjad, Rachid Boutiddar, Abdelouahid Sbai, Aaziz Jmiai, Souad El Issami, Al Mokhtar Lamsabhi, Hanane Zejli
Abstract:
This study presents a comprehensive analysis of six isoxazolidine and isoxazoline derivatives, leveraging a multifaceted approach that combines Density Functional Theory (DFT), admetSAR analysis, and molecular docking simulations to explore their electronic, pharmacokinetic, and anticancer properties. Through DFT analysis, using the B3LYP-D3BJ functional and the 6-311++G(d,p) basis set, we optimized molecular geometries, analyzed vibrational frequencies, and mapped Molecular Electrostatic Potentials (MEP), identifying key sites for electrophilic attack and hydrogen bonding. Frontier Molecular Orbital (FMO) analysis and Density of States (DOS) plots revealed varying stability levels among the compounds, with 1b, 2b, and 3b showing slightly higher stability. Chemical potential assessments indicated differences in binding affinities, suggesting stronger potential interactions for compounds 1b and 2b. admetSAR analysis predicted favorable human intestinal absorption (HIA) rates for all compounds, highlighting compound 3b's superior oral effectiveness. Molecular docking and molecular dynamics simulations were conducted on the isoxazolidine and 4-isoxazoline derivatives targeting the EGFR receptor (PDB: 1JU6). Molecular docking simulations confirmed the high affinity of these compounds towards the target protein 1JU6. Among the isoxazolidine derivatives, compound 3b exhibited the most favorable binding energy, with a G-score of -8.50 kcal/mol. Molecular dynamics simulations over 100 nanoseconds demonstrated the stability and potential of compound 3b as a superior candidate for anticancer applications, further supported by structural analyses including RMSD, RMSF, Rg, and SASA values. This study underscores the promising role of compound 3b in anticancer treatments, providing a solid foundation for future drug development and optimization efforts.
Keywords: isoxazolines, DFT, molecular docking, molecular dynamics, ADMET, drugs
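The chemical potential and related reactivity descriptors mentioned above are commonly estimated from the frontier orbital energies; a minimal sketch of these standard Koopmans-type formulas follows, with illustrative orbital energies rather than the paper's computed values.

```python
def global_reactivity(e_homo, e_lumo):
    """Koopmans-type conceptual-DFT descriptors from frontier orbital
    energies (eV): chemical potential mu, hardness eta, electrophilicity."""
    mu = (e_homo + e_lumo) / 2.0       # chemical potential
    eta = (e_lumo - e_homo) / 2.0      # chemical hardness (half the HOMO-LUMO gap)
    omega = mu ** 2 / (2.0 * eta)      # electrophilicity index
    return mu, eta, omega

# Illustrative orbital energies, not the study's B3LYP-D3BJ values
print(global_reactivity(-6.2, -1.9))
```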
495 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin
Authors: Jose Flores, Nadia Gamboa
Abstract:
Proper water quality management requires the establishment of a monitoring network. Evaluation of the efficiency of water quality monitoring networks is therefore needed to ensure high-quality data collection of critical chemical quality parameters. Unfortunately, in some Latin American countries, water quality monitoring programs are not sustainable in terms of recording historical data or covering environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal component analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites as well as the critical water quality parameters in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, shows socio-environmental conflicts due to the economic activities developed in the area. Water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization is caused by the extensive use of groundwater in the lower part of the basin. Since the 1980s, the water quality in the basin has been assessed non-continuously by public and private organizations, and recently the National Water Authority established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, these instruments have never been applied for that purpose in Peru. For this reason, the main contribution of this study is to demonstrate that the application of multivariate statistical techniques can serve as an instrument for optimizing monitoring networks, using the smallest number of monitoring sites and the most significant water quality parameters, which would reduce costs and improve water quality management in Peru. The main socio-economic activities developed in the basin and the principal stakeholders related to water management are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.
Keywords: PCA, HCA, Jequetepeque, multivariate statistical
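A minimal sketch of the PCA/HCA workflow described here, using random stand-in data in place of the Jequetepeque measurements; thresholds such as the retained variance and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = monitoring sites, columns = water quality parameters
# (pH, conductivity, trace metals, ...); random stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(45, 10))

Xs = StandardScaler().fit_transform(X)        # put parameters on a common scale
pca = PCA(n_components=0.80).fit(Xs)          # keep PCs explaining 80% of variance
print("significant components:", pca.n_components_)
print("parameter loadings on PC1:", pca.components_[0].round(2))

# HCA with Ward linkage groups sites with similar quality profiles;
# one representative site per cluster could then be kept in the network.
clusters = fcluster(linkage(Xs, method="ward"), t=5, criterion="maxclust")
print("site clusters:", clusters)
```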
494 Malate Dehydrogenase Enabled ZnO Nanowires as an Optical Tool for Malic Acid Detection in Horticultural Products
Authors: Rana Tabassum, Ravi Kant, Banshi D. Gupta
Abstract:
Malic acid is an organic acid distributed extensively, in minute amounts, in numerous horticultural products, and it contributes significantly to taste determination by balancing the sugar and acid fractions. An elevated concentration of malic acid is used as an indicator of fruit maturity. In addition, malic acid is a crucial constituent of several cosmetic and pharmaceutical products. An efficient detection and quantification protocol for malic acid is thus in high demand. In this study, we report a novel detection scheme for malic acid that synergistically combines fiber optic surface plasmon resonance (FOSPR) with the distinctive features of nanomaterials favorable for sensing applications. The design blueprint involves the deposition of malate dehydrogenase enzyme entrapped in ZnO nanowires, forming the sensing route over the silver-coated, central unclad core region of an optical fiber. The formation and subsequent decomposition of the enzyme-analyte complex, upon exposure of the sensing layer to malic acid solutions of diverse concentrations, modifies the dielectric function of the sensing layer, which is manifested as a shift in the resonance wavelength. Experimental variables such as the enzyme concentration entrapped in the ZnO nanowires, the dip time of the probe for deposition of the sensing layer, and the working pH range of the sensing probe have been optimized through SPR measurements. The optimized sensing probe displays high sensitivity, a broad working range, and a low limit of detection, and has been successfully tested for malic acid determination in real samples of fruit juices. The current work presents a novel perspective on malic acid determination, as the unique and cooperative combination of FOSPR and nanomaterials provides myriad advantages such as enhanced sensitivity, specificity, and compactness, together with the possibility of online monitoring and remote sensing.
Keywords: surface plasmon resonance, optical fiber, sensor, malic acid
493 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speeds. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
492 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
Usability has become a basic product requirement from the consumer's perspective, and failing this requirement ends with the customer not using the product. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process, yet the lack of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. Analyzing qualitative text data has become feasible with the rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools. Therefore, this research studies the applicability of text processing algorithms to the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with a text processing algorithm, includes mapping the comments into a vector space, labeling them with the subject and product physical feature data, and clustering, to validate the result of comment vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the cluster of comment vectors, where the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons were designed as a single button, participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
Keywords: usability, qualitative data, text-processing algorithm, natural language processing
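A minimal sketch of the described pipeline (vectorize comments, cluster, inspect centroid comments), using invented headset comments in place of the LG survey data; the vectorizer and clustering method are assumptions, since the abstract does not name the exact algorithms.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented usability comments standing in for the headset survey data
comments = [
    "volume button is too close to the music button",
    "confused which button changes the volume",
    "button position on the neckband feels natural",
    "hard to find the control button position by touch",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(comments)               # comments mapped into vector space

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Centroid comment of each cluster = the comment closest to the center,
# used to interpret which usability feature the cluster is about.
for k in range(2):
    idx = np.where(km.labels_ == k)[0]
    d = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[k], axis=1)
    print(f"cluster {k} centroid comment:", comments[idx[np.argmin(d)]])
```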
491 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves electrical segmentation, which creates coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. This allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified representation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when, and in which contexts, robust electrical segmentation is beneficial.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
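A toy sketch of the flattening-then-clustering idea on a multiplex graph, assuming summed edge weights as the unified representation and modularity-based community detection; the authors' actual penalized model and weighting scheme are not specified in the abstract.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each layer is one grid situation: same buses, different line couplings
# (toy data; weights could encode electrical distance or flow coupling).
layers = [
    nx.Graph([(0, 1, {"w": 1.0}), (1, 2, {"w": 0.9}), (3, 4, {"w": 1.0})]),
    nx.Graph([(0, 1, {"w": 0.8}), (1, 2, {"w": 1.0}), (3, 4, {"w": 0.7}),
              (2, 3, {"w": 0.1})]),
]

# Flatten: sum edge weights across layers into one weighted graph, so
# only couplings that persist across situations stay strong.
flat = nx.Graph()
for g in layers:
    for u, v, d in g.edges(data=True):
        w = flat[u][v]["weight"] + d["w"] if flat.has_edge(u, v) else d["w"]
        flat.add_edge(u, v, weight=w)

zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])   # candidate robust coherence zones
```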
490 Synthesis and Two-Photon Polymerization of a Cytocompatible Tyramine Functionalized Hyaluronic Acid Hydrogel That Mimics the Chemical, Mechanical, and Structural Characteristics of Spinal Cord Tissue
Authors: James Britton, Vijaya Krishna, Manus Biggs, Abhay Pandit
Abstract:
Regeneration of the spinal cord after injury remains a great challenge due to the complexity of this organ. Inflammation and gliosis at the injury site hinder the outgrowth of axons and hence prevent synaptic reconnection and reinnervation. Hyaluronic acid (HA) is the main component of the spinal cord extracellular matrix and plays a vital role in cell proliferation and axonal guidance. In this study, we have synthesized and characterized a photo-cross-linkable HA-tyramine (HA-tyr) hydrogel from chemical, mechanical, electrical, biological, and structural perspectives. From our experimentation, we have found that HA-tyr can be synthesized with controllable degrees of tyramine substitution using click chemistry. The complex modulus (G*) of HA-tyr can be tuned to mimic the mechanical properties of the native spinal cord via optimization of the photo-initiator concentration and UV exposure. We have examined the degree of tyramine-tyramine covalent bonding (polymerization) as a function of UV exposure and photo-initiator use via photo- and nuclear magnetic resonance spectroscopy. Both swelling and enzymatic degradation assays were conducted to examine the resilience of our 3D-printed hydrogel constructs in vitro. Using a femtosecond 780 nm laser, the two-photon polymerization of the HA-tyr hydrogel in the presence of a riboflavin photoinitiator was optimized. A laser power of 50 mW and a scan speed of 30,000 μm/s produced high-resolution spatial patterning within the hydrogel with sustained mechanical integrity. Using dorsal root ganglion explants, the cytocompatibility of photo-crosslinked HA-tyr was assessed. Using potentiometry, the electrical conductivity of photo-crosslinked HA-tyr was assessed and compared to that of native spinal cord tissue as a function of frequency. In conclusion, we have developed a biocompatible hydrogel that can be used for photolithographic 3D printing to fabricate tissue-engineered constructs for neural tissue regeneration applications.
Keywords: 3D printing, hyaluronic acid, photolithography, spinal cord injury
489 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn
Authors: Y. T. Chui, W. M. Au, L. Li
Abstract:
Wearable electronic textiles have recently been emerging in the market and have developed rapidly since, besides the need for clothing for leisure, fashion wear, and personal protection, there is also a high demand for clothing capable of functioning in this electronic age, for example as interactive interfaces, sensual and tangible touch, social fabric, and material witness. With the requirement that wearable electronic textiles be more comfortable, adorable, and easy to care for, conductive yarn becomes one of the most important fundamental elements within wearable electronic textiles, interconnecting different functional units or creating a functional unit. The properties of conductive yarns from different companies can vary to a large extent. There are vitally important criteria for selecting conductive yarns, which may directly affect the optimization, prospects, applicability, and performance of the final garment. However, according to the literature review, few studies on off-the-shelf conductive yarns focus on assessment methods for the scientific, systematic selection of materials under different conditions. Therefore, this study gives direction for selecting high-quality conductive yarns. The approach is to test the stability and reliability of the conductive yarns against the problems industrialists would experience with the yarns during each manufacturing stage. This assessment system can be classified into four stages: 1) yarn stage, 2) fabric stage, 3) apparel stage, and 4) end-user stage. Several tests with clear experimental procedures and parameters are suggested to be carried out in each stage. This assessment method suggests that optimal conductive yarns should be stable in their properties and resistant to various corrosions at every production stage and during use. It is expected that this demonstration of the assessment method can serve as a pilot study that assesses the stability of Ag/nylon yarns systematically under various conditions, i.e., during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters that they should bear in mind for higher suitability, precision, and controllability.
Keywords: applicability, assessment method, conductive yarn, wearable electronics
488 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
Large quantities of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper, and other materials. Mine tailings are high in water content and exhibit very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the tailings' consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as the theory considers the conservation of mass and momentum and treats the hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as the settling column test, the seepage consolidation test, etc., are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement versus time curves is explored instead. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.
Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
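A minimal PSO sketch for such an inverse problem; the forward model below is a stand-in placeholder for the finite-difference large-strain solver, and all parameter bounds and data are invented for illustration.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (global-best variant)."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n_particles, dim))    # positions = candidate parameters
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()]                        # swarm's best-known position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# Objective: misfit between the measured and simulated dissipation curves.
# forward_model is a toy stand-in for the finite-difference solver.
measured = np.array([1.0, 0.8, 0.55, 0.35, 0.2])
def forward_model(params, t=np.arange(5)):
    a, b = params                                  # hydraulic conductivity parameters
    return b * np.exp(-a * t)

best, err = pso(lambda p: np.sum((forward_model(p) - measured) ** 2),
                bounds=[(0.01, 2.0), (0.5, 1.5)])
print(best, err)
```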
487 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance
Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian
Abstract:
Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land use factor, but also for suffering from low optical efficiency. The LFR is considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found to be a promising option for directly producing steam for a thermal cycle in order to generate low-cost electricity, but it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTFs), and to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. As a result of optimizing the solar field size, the solar multiple (SM) is found to lie between 1.2 and 1.5, yielding an LCOE as low as 9 ¢/kWh for direct steam generation with the linear Fresnel reflector. In addition, the power plant is capable of producing around 141 GWh annually, with a capacity factor of up to 36%, whereas ISG produces less energy at a higher cost. The optimization results show that the DSG's performance surpasses the ISG's, producing around 3% more annual energy at a 2% lower LCOE and a 28% lower capital cost.
Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation
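For context, LCOE is the ratio of discounted lifetime cost to discounted lifetime energy; the sketch below applies this simplified definition with placeholder capital and operating costs (only the ~141 GWh/year figure comes from the abstract).

```python
def lcoe(capex, opex_per_year, annual_energy_kwh, years=25, discount=0.07):
    """Simplified LCOE ($/kWh): discounted lifetime cost over discounted
    lifetime energy. All cost inputs here are illustrative placeholders,
    not the study's techno-economic assumptions."""
    disc = [(1 + discount) ** -t for t in range(1, years + 1)]
    cost = capex + sum(opex_per_year * d for d in disc)
    energy = sum(annual_energy_kwh * d for d in disc)
    return cost / energy

# 50 MWe plant producing ~141 GWh/year as in the abstract;
# capex and opex figures below are placeholders.
print(f"{lcoe(capex=120e6, opex_per_year=4e6, annual_energy_kwh=141e6):.3f} $/kWh")
```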
486 User Experience in Relation to Eye Tracking Behaviour in VR Gallery
Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski
Abstract:
Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. The user's interaction with the GUI and the pictures displayed involves perceptual and cognitive processes which can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about the user's intentions, situational interpretations, and emotional states, so that an application or interface can be adapted accordingly. Virtual galleries outfitted with specialized assets were designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior data and eye positions were recorded by the eye-tracking module built into the HTC Vive VR headset. The eye gaze results are grouped according to various user behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were used to identify the basic features of a user-centered interface for virtual environments across most of the project timeline. A total of sixty participants were selected from distinct faculties of the University and from secondary schools. Users' prior knowledge of art was evaluated in a pretest, and in this way their level of art sensitivity was described. Data were collected over two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and the newer t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by the multimodal time characteristics of eye tracking measures, revealing signatures describing selected artworks. The current research establishes the optimal place on the aesthetic-utility scale, because contemporary interfaces of most applications need to be designed in both functional and aesthetic ways. The study also includes an analysis of visual experience for subsamples of visitors, differentiated, e.g., in terms of frequency of museum visits and cultural interests. Eye tracking data may also show how to better place artefacts and paintings or increase their visibility when possible.
Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication
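A minimal sketch of the t-SNE reduction step on stand-in gaze features; the feature set and dimensions are assumptions, not the project's actual eye tracking variables.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in per-participant gaze features: fixation counts, dwell times,
# saccade lengths, time to first fixation, ... (assumed, not the real set)
rng = np.random.default_rng(42)
gaze_features = rng.normal(size=(60, 12))   # 60 participants x 12 features

# Nonlinear embedding into 2D; perplexity must stay below n_samples
emb = TSNE(n_components=2, perplexity=15, random_state=42).fit_transform(gaze_features)
print(emb.shape)   # (60, 2): coordinates to inspect for viewer-behaviour groups
```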
485 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques on this vast repository, a variety of prospects for precision medicine, predictive analytics, and insight generation become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts that reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
484 Caregiver Training Results in Accurate Reporting of Stool Frequency
Authors: Matthew Heidman, Susan Dallabrida, Analice Costa
Abstract:
Background: Accuracy of caregiver-reported outcomes is essential for the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important tolerability parameters, yet inter-caregiver reporting can be quite subjective and vary greatly within a study, compromising the data. This study sought to elucidate how caregivers answer questions about stool frequency before and after a short amount of training, and how training impacts caregivers' understanding and answers. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and second with a line of training text. The question set was as follows: "If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?", followed by the same question beginning with "If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…". Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to be able to answer the question. Response frequencies between questions were compared. Results: Responses to the question without training showed variability: in the US, 69% selected "2 stools", 11% selected "1 stool", 14% selected "it depends on how much stool was in the first versus the second diaper", and 7% selected "there is not enough information to be able to answer the question"; in Mexico, respondents selected 9%, 78%, 13%, and 0%, respectively. Responses to the question after training were more consolidated: in the US, 85% of respondents selected "2 stools", an increase in those selecting the correct answer; in Mexico, 84% of respondents selected "1 episode", an increase in those selecting the correct response. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, without guidance they can be highly subjective and show high variability of responses. Training is critical for standardizing how all caregivers interpret and answer questions, so that they provide an accurate dataset.
Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials
483 Preparation of Indium Tin Oxide Nanoparticle-Modified 3-Aminopropyltrimethoxysilane-Functionalized Indium Tin Oxide Electrode for Electrochemical Sulfide Detection
Authors: Md. Abdul Aziz
Abstract:
The sulfide ion is water soluble, highly corrosive, toxic, and harmful to human beings. As a result, knowing the exact concentration of sulfide in water is very important. However, the existing detection and quantification methods have several shortcomings, such as high cost, low sensitivity, and massive instrumentation. Consequently, the development of a novel sulfide sensor is relevant. Electrochemical methods have gained enormous popularity due to vast improvements in techniques and instrumentation, portability, low cost, rapid analysis, and simplicity of design. Successful field application of electrochemical devices still requires vast improvement, which depends on the physical, chemical, and electrochemical aspects of the working electrode. Working electrodes made of bulk gold (Au) and platinum (Pt) are quite common, being very robust and endowed with good electrocatalytic properties. High cost and electrode poisoning, however, have so far hindered their practical application in many industries. To overcome these obstacles, we developed a sulfide sensor based on an indium tin oxide nanoparticle (ITONP)-modified ITO electrode. To prepare the ITONP-modified ITO, various methods were tested. Drop-drying of ITONPs (aq.) on aminopropyltrimethoxysilane-functionalized ITO (APTMS/ITO) was found to be the best method on the basis of voltammetric analysis of the sulfide ion. ITONP-modified APTMS/ITO (ITONP/APTMS/ITO) yielded much better electrocatalytic properties toward sulfide electro-oxidation than did bare or APTMS/ITO electrodes. The ITONPs and ITONP-modified ITO were also characterized using transmission electron microscopy and field emission scanning electron microscopy, respectively. Optimization of the type of inert electrolyte and the pH yielded an ITONP/APTMS/ITO detector whose amperometrically and chronocoulometrically determined limits of detection for sulfide in aqueous solution were 3.0 µM and 0.90 µM, respectively. ITONP/APTMS/ITO electrodes, which displayed reproducible performance, were highly stable and were not susceptible to interference by common contaminants. Thus, the developed electrode can be considered a promising tool for sensing sulfide.
Keywords: amperometry, chronocoulometry, electrocatalytic properties, ITO-nanoparticle-modified ITO, sulfide sensor
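For context, amperometric detection limits of the kind quoted here are commonly computed from a calibration line as three times the standard deviation of the blank divided by the sensitivity; the sketch below uses invented calibration data, not the reported measurements.

```python
import numpy as np

# Invented amperometric calibration: sulfide concentration (uM) vs.
# steady-state current (uA); numbers are assumptions for illustration.
conc = np.array([5, 10, 25, 50, 100], dtype=float)
current = np.array([0.9, 1.8, 4.6, 9.1, 18.3])

slope, intercept = np.polyfit(conc, current, 1)   # sensitivity from the linear fit
sd_blank = 0.02                                   # std. dev. of blank signal (assumed)

lod = 3 * sd_blank / slope   # common 3-sigma/slope definition of the detection limit
print(f"sensitivity = {slope:.3f} uA/uM, LOD = {lod:.2f} uM")
```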
482 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it may have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasound-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using flame atomic absorption spectrometry (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH 5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, separation of the two phases was achieved by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH 5. The linear range of the calibration curve for the determination of lead by FAAS was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively. Therefore, this method has been successfully applied to the analysis of the lead content in different food samples by FAAS.
Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
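A sketch of how the central composite design points for the three optimized factors could be generated; the step sizes are assumptions, and a real study would also replicate the center point.

```python
from itertools import product

# Three factors from the abstract: IL volume (uL), sonication time (min), pH.
center = {"il_volume": 120.0, "sonication": 5.0, "pH": 5.0}
step   = {"il_volume": 20.0,  "sonication": 2.0, "pH": 1.0}   # assumed step sizes
names = list(center)
alpha = 1.682   # rotatable CCD axial distance for 3 factors, 2**(3/4)

runs = []
for corner in product([-1, 1], repeat=3):             # 8 factorial points
    runs.append(dict(zip(names, corner)))
for i in range(3):                                    # 6 axial (star) points
    for a in (-alpha, alpha):
        runs.append({n: (a if j == i else 0.0) for j, n in enumerate(names)})
runs.append({n: 0.0 for n in names})                  # center point

# Decode coded levels into real experimental settings
for run in runs[:3]:
    print({n: center[n] + run[n] * step[n] for n in names})
print("total design points:", len(runs))   # 15 distinct runs
```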
481 Fire Safety Assessment of At-Risk Groups
Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen
Abstract:
Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A person's disability can negatively influence his or her escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of such people's buildings by means of probabilistic methods. For this purpose, a common type of detached house with a prevalent floor plan has been chosen for safety analysis, and fire safety is addressed by modeling the egress of the target group from a hazardous zone to a safe zone. A limit state function has been developed according to a timeline evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-RISK) is used to simulate smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been considered for each of the non-deterministic variables. To quantify the safety and reliability of the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search metaheuristic optimization algorithm has been used to compute the beta index. Sensitivity analysis has been performed to identify the most important and effective parameters for the fire safety of the at-risk groups. Results showed that the area of openings and the distances to egress exits are the more important building parameters, and the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped compared to those with other types of disabilities.
Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty
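A toy Monte Carlo version of the probability-of-failure and beta-index calculation (the paper itself uses the fire safety index method with harmony search); all distributions below are invented for illustration, not the calibrated inputs.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
N = 100_000

# Limit state g = ASET - RSET: failure when the available safe egress time
# is shorter than the required egress time. Distributions are illustrative
# assumptions, chosen to reflect slower travel for the at-risk group.
aset = rng.lognormal(mean=np.log(300), sigma=0.25, size=N)   # s, from smoke model
pre  = rng.lognormal(mean=np.log(60),  sigma=0.40, size=N)   # s, pre-movement time
walk = np.clip(rng.normal(loc=120, scale=30, size=N), 0, None)  # s, travel time
rset = pre + walk

pf = np.mean(aset - rset < 0)   # probability of failure (casualties)
beta = -norm.ppf(pf)            # corresponding safety (beta) index
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```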
480 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases ('digenic traits') are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with the two genotypes in a pair originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, 'X → Y', with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls; thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. That algorithm and ours share some properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
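A brute-force sketch of the digenic rule search (a real implementation would use fpgrowth for efficiency): enumerate genotype pairs and compare the rule confidence P(case | pattern) with the baseline case proportion; the genotype data below are invented.

```python
from collections import Counter
from itertools import combinations

# Each sample is a set of genotype items ("variant=genotype"); invented data.
cases = [{"rs1=Aa", "rs2=Bb"}, {"rs1=Aa", "rs2=Bb"}, {"rs1=AA", "rs2=Bb"}]
controls = [{"rs1=Aa", "rs2=BB"}, {"rs1=AA", "rs2=Bb"}, {"rs1=AA", "rs2=BB"}]

def pattern_counts(samples):
    """Count every digenic genotype pattern (pair of genotype items)."""
    c = Counter()
    for s in samples:
        for pair in combinations(sorted(s), 2):
            c[pair] += 1
    return c

case_counts, ctrl_counts = pattern_counts(cases), pattern_counts(controls)
n_cases, n_total = len(cases), len(cases) + len(controls)

for pattern, k in case_counts.items():
    support = (k + ctrl_counts[pattern]) / n_total
    confidence = k / (k + ctrl_counts[pattern])   # P(case | pattern)
    baseline = n_cases / n_total                  # P(case)
    if confidence > baseline:                     # candidate rule X -> case
        print(pattern, f"support={support:.2f} confidence={confidence:.2f}")
```

In the paper's framework, the significance of such candidate patterns would then be assessed by permutation of the case/control labels rather than by the raw confidence alone.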
Procedia PDF Downloads 124
479 Regional Flood Frequency Analysis in Narmada Basin: A Case Study
Authors: Ankit Shah, R. K. Shrivastava
Abstract:
Floods and droughts are two main features of hydrology that affect human life. Floods are natural disasters which cause millions of rupees’ worth of damage each year in India and across the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Furthermore, the increase in demand for water due to population, industrial, and agricultural growth has made it clear that water, though a renewable resource, cannot be taken for granted; its use must be optimized according to circumstances and conditions, and it must be harnessed, which can be done by the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact must be predicted. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximal use of the water available. Hydraulic structures are mostly constructed at ungauged sites. There are two main approaches to flood estimation: the generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, such as the Index Flood Method, the Natural Environment Research Council (NERC) methods, and the Multiple Regression Method; however, none of them can be considered universal for every situation and location. The Narmada basin is located in Central India. It is drained by numerous tributaries, most of which are ungauged, so it is very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used to determine the regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada basin are used to derive the regional flood relationships. The homogeneity of the considered sites is determined by using the Index Flood Method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area.
Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency
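The contrast between the two regionalization approaches can be sketched in a few lines. The snippet below fits a log-space multiple regression and a small multilayer perceptron to synthetic catchment data for 20 sites; the catchment descriptors, the assumed power-law relation, and the noise level are illustrative placeholders, not Narmada basin data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Illustrative comparison of the two approaches the paper contrasts: a
# log-linear multiple regression and a small ANN (multilayer perceptron).
rng = np.random.default_rng(1)
n_sites = 20
area = rng.uniform(50, 5000, n_sites)         # catchment area, km^2
rainfall = rng.uniform(800, 1600, n_sites)    # mean annual rainfall, mm
# Assumed underlying relation Q = 0.8 * A^0.7 * P^0.4 with lognormal noise
q_mean = 0.8 * area**0.7 * rainfall**0.4 * rng.lognormal(0, 0.15, n_sites)

X = np.log(np.c_[area, rainfall])
y = np.log(q_mean)

reg = LinearRegression().fit(X, y)            # multiple regression in log space
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, y)  # multilayer perceptron

print("regression R^2:", reg.score(X, y))
print("ANN R^2:       ", ann.score(X, y))
```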
Procedia PDF Downloads 420
478 Psychophysiological Adaptive Automation Based on Fuzzy Controller
Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno
Abstract:
Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, working memory demand during the learning process changes when the student is learning a new subject or topic or managing and fulfilling a specific task goal. A sudden increase in working memory demand modifies the student’s level of attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system adapts the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student’s previous knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task’s difficulty level and the student’s stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables and adjusts the task difficulty level accordingly. Engagement and stress are treated as internal variables of the hypermedia system, which selects among three different types of instructional material. This work assesses the feasibility of a fuzzy controller that tracks a student's physiological responses and adjusts the learning content and pace accordingly. Following an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the instrumentation of the system to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students’ health. This work has a potential application in the instructional design of virtual reality environments for training and education.
Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation
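A fuzzy controller of the kind described can be sketched compactly. The Mamdani-style fragment below maps a normalized cognitive-load score to the next task's difficulty via three linguistic rules with triangular membership functions and centroid defuzzification; the membership shapes and rules are illustrative assumptions, since the paper's calibrated rule base is not given in the abstract.

```python
import numpy as np

# Minimal Mamdani-style fuzzy controller sketch: map a measured cognitive-load
# score (0..1) to the next task's difficulty (0..1).

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def next_difficulty(cognitive_load: float) -> float:
    u = np.linspace(0, 1, 201)                        # output universe
    # Rule firing strengths from the input's membership in low/medium/high
    fire_low  = tri(cognitive_load, -0.1, 0.0, 0.5)   # load low  -> raise difficulty
    fire_med  = tri(cognitive_load, 0.25, 0.5, 0.75)  # load med  -> keep difficulty
    fire_high = tri(cognitive_load, 0.5, 1.0, 1.1)    # load high -> lower difficulty
    # Clip each output set by its rule strength and aggregate (max)
    agg = np.maximum.reduce([
        np.minimum(fire_low,  tri(u, 0.5, 1.0, 1.1)),    # "hard" output set
        np.minimum(fire_med,  tri(u, 0.25, 0.5, 0.75)),  # "medium" output set
        np.minimum(fire_high, tri(u, -0.1, 0.0, 0.5)),   # "easy" output set
    ])
    return float(np.sum(u * agg) / (np.sum(agg) + 1e-9))  # centroid defuzzification

print(next_difficulty(0.2), next_difficulty(0.5), next_difficulty(0.9))
```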
Procedia PDF Downloads 82
477 Digital Transformation: Actionable Insights to Optimize the Building Performance
Authors: Jovian Cheung, Thomas Kwok, Victor Wong
Abstract:
Buildings are entwined with smart city development. Building performance relies heavily on electrical and mechanical (E&M) systems and services, with buildings accounting for about 40 percent of global energy use. By bringing technological advances and energy- and operation-efficiency initiatives into buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that lets the city leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system has been developed for transforming data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile and actionable insights for a building, which make it possible to optimize building performance by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking building performance, and prioritizing asset and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, the Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure occurs. The actionable insights collected can also be used for future system design enhancement. This paper illustrates how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, together with actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city to be admired.
Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance
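One building block of such a system, flagging abnormal E&M performance from live meter readings, can be sketched with a rolling z-score rule. The synthetic data, window length, and alert threshold below are illustrative assumptions; a production system would rely on the richer big-data pipeline the paper describes.

```python
import numpy as np

# Rolling z-score anomaly detection on a metering time series: flag readings
# that deviate strongly from the recent operating pattern. Parameters assumed.
rng = np.random.default_rng(7)
t = np.arange(2880)                                   # two days of 1-min readings
power_kw = 50 + 5 * np.sin(2 * np.pi * t / 480) + rng.normal(0, 1, t.size)
power_kw[2000:2010] += 25                             # injected fault: load jump

window, threshold = 120, 4.0                          # 2-hour window, 4-sigma alert
alerts = []
for i in range(window, len(power_kw)):
    hist = power_kw[i - window:i]
    z = (power_kw[i] - hist.mean()) / (hist.std() + 1e-9)
    if abs(z) > threshold:
        alerts.append(i)
print(f"{len(alerts)} abnormal readings flagged, first at t = {alerts[0]} min")
```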
Procedia PDF Downloads 159
476 Exploration of Slow-Traffic System Strategies for New Urban Areas Under the Integration of Industry and City - Taking Qianfeng District of Guang’an City as an Example
Authors: Qikai Guan
Abstract:
With the deepening of China's urbanization process, urban industrial development has entered a new period. As urban industrial functions become increasingly compound and diversified, urban planning has shifted from single-purpose industrial space arrangement and functional design toward upgrading the urban structure and serving people's diversified needs. As an important part of urban activity space, ‘slow moving space’ is of great significance in alleviating urban traffic congestion, optimizing residents' travel experience, and improving urban ecological space. Therefore, this paper takes the slow-moving transportation system from the perspective of industry-city integration as its starting point. It sorts out the development needs of the city in the process of industry-city integration, analyzes the characteristics of the site, examines the compatibility between the layout of the new industrial zone and the urban slow-moving system, and integrates these into design concepts. Drawing on an analysis and summary of domestic and international experience, construction ideas are proposed. Finally, planning strategy optimizations are proposed in the following aspects: industrial layout, urban vitality, ecological pattern, regional characteristics, and landscape image. In terms of specific design, on the one hand, a regional slow-moving network is built, with a diversified design strategy for the industry-oriented, multi-functional composite central area, realizing the coexistence of pedestrian priority and multiple transportation modes, essentially covering public facilities, and enhancing the vitality of the city. On the other hand, the landscape ecosystem is improved by creating a healthy, diversified, and livable superline landscape system, supporting the construction of the ‘green core’ of the central city and improving residents' travel experience.
Keywords: industry-city integration, slow-moving system, public space, functional integration
Procedia PDF Downloads 14
475 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Angalis arvensis and Melilotus indica) of Wheat
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas
Abstract:
Nanoherbicides utilize nanotechnology to enhance the delivery of biological or chemical herbicides using combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the herbicide MCPA as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to analyze the developed nanoparticles. The SEM analysis indicated that the average particle size was 35 nm, with the particles forming clusters with a porous structure. Both fluroxypyr + MCPA nanoparticle formulations exhibited maximal absorption peaks at a wavelength of 320 nm. The fluroxypyr + MCPA compound has a strong peak at a 2θ value of 30.55°, which corresponds to the 78 plane of the anatase phase. The weeds, including Chenopodium album, Lathyrus aphaca, Angalis arvensis, and Melilotus indica, were sprayed with the nanoparticles at the third- or fourth-leaf stage. Seven distinct doses were used: D0 (untreated check), D1 (recommended dose of conventional herbicide), D2 (recommended dose of nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based nanoparticles of MCPA at the recommended dose of the conventional herbicide resulted in complete kill and visual injury, with a 100% mortality rate. The 5-fold lower dose produced the lowest plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat. At a dose 10-fold lower than that of the conventional herbicide, the herbicide nanoparticles had an impact comparable to the recommended dose. Nano-herbicides have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity.
Keywords: mortality, visual injury, chlorophyll contents, chitosan-based nanoparticles
Procedia PDF Downloads 65
474 The Analysis of Drill Bit Optimization by the Application of New Electric Impulse Technology in Shallow Water Absheron Peninsula
Authors: Ayshan Gurbanova
Abstract:
Although the drill bit, the smallest part of the bottom hole assembly, accounts for only 10% to 15% of total expenses, it is the first piece of equipment in contact with the formation itself. Hence, it is consequential to choose the appropriate type and size of drill bit, which prevents the majority of problems by reducing the number of tripping procedures. With advances in technology, it is now possible to gain benefits in terms of operating time, energy, expenditure, power, and so forth. With the intention of applying the method in Azerbaijan, the Shallow Water Absheron Peninsula field is proposed, where the wildcat well, named “NKX01”, is located 15 km from the mainland in a water depth of 22 m. In 2015 and 2016, 2D and 3D seismic surveys were conducted in the contract area as well as at onshore and shallow-water locations. To obtain a clear picture, surveys of soil stability, possible subsea hazard scenarios, geohazards, and bathymetry were carried out as well. From the seismic analysis results, the exact locations of the exploration wells were determined, and measurement decisions were made to divide the area into three productive zones. As for the method itself, Electric Impulse Technology (EIT) is based on electrical discharge energy breaking down the rock. Put simply, a very high voltage is generated within nanoseconds and delivered to the rock through electrodes. These electrodes, one at high voltage and one grounded, are placed on the formation, which may be submerged in liquid. With this design, it is also easier to drill horizontal wells, owing to the advantage of only loose contact with the formation. There is also little bit wear, since no combustion or mechanical power is involved. In terms of energy, conventional drilling requires about 1000 J/cm3 of rock, whereas EIT requires between 100 and 200 J/cm3. Finally, test analysis shows that EIT achieves an ROP of more than 2 m/hr sustained over 15 days. Taking everything into consideration, the comparative data analysis indicates that this method is highly applicable to the fields of Azerbaijan.
Keywords: drilling, drill bit cost, efficiency, cost
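The specific-energy figures quoted above translate directly into energy per drilled meter, as the short back-of-envelope sketch below shows. The 1000 J/cm3 and 100-200 J/cm3 values come from the abstract; the borehole diameter is an illustrative assumption.

```python
import math

# Specific-energy comparison: conventional rotary drilling (~1000 J/cm^3)
# vs. EIT (100-200 J/cm^3) for an assumed 21.6 cm (8.5 in) borehole.
diameter_cm = 21.6
volume_per_m = math.pi * (diameter_cm / 2) ** 2 * 100   # cm^3 of rock per meter

for name, se in [("conventional", 1000), ("EIT (upper)", 200), ("EIT (lower)", 100)]:
    energy_mj = se * volume_per_m / 1e6                 # MJ per drilled meter
    print(f"{name:12s}: {energy_mj:6.1f} MJ/m")
```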
Procedia PDF Downloads 74
473 Anaerobic Co-digestion of the Halophyte Salicornia Ramosissima and Pig Manure in Lab-Scale Batch and Semi-continuous Stirred Tank Reactors: Biomethane Production and Reactor Performance
Authors: Aadila Cayenne, Hinrich Uellendahl
Abstract:
Optimization of the anaerobic digestion (AD) process for halophytic plants is essential because the biomass contains a high salt content that can inhibit the AD process. Anaerobic co-digestion together with manure can resolve the inhibitory effects of saline biomass by diluting the salt concentration and establishing favorable conditions for the microbial consortia of the AD process. The present laboratory study investigated the co-digestion of S. ramosissima (Sram) and pig manure (PM) in batch and semi-continuous stirred tank reactors (CSTR) under mesophilic (38 °C) conditions. The 0.5 L batch reactor experiments covered mono- and co-digestion of Sram:PM at different volatile solids (VS)-based percentage ratios (0:100, 15:85, 25:75, 35:65, 50:50, 100:0) with an inoculum-to-substrate (I/S) ratio of 2. Two 5 L CSTR systems (R1 and R2) were operated for 133 days, with a feed of PM alone in the control reactor (R1) and a co-digestion feed with an increasing Sram VS share (Sram:PM of 15:85, 25:75, 35:65) in reactor R2, at an organic loading rate (OLR) of 2 gVS/L/d and a hydraulic retention time (HRT) of 20 days. After a start-up phase of 8 weeks for both reactors R1 and R2 with PM feed alone, the halophyte biomass Sram was added to the feed of R2 in an increasing ratio of 15–35 %VS Sram over an 11-week period. Process performance was monitored by pH, total solids (TS), VS, total nitrogen (TN), ammonium-nitrogen (NH4–N), volatile fatty acids (VFA), and biomethane production. In the batch experiments, biomethane yields of 423, 418, 392, 365, 315, and 214 mL-CH4/gVS were achieved for mixtures of 0:100, 15:85, 25:75, 35:65, 50:50, and 100:0 %VS Sram:PM, respectively. In the semi-continuous reactor processes, the average biomethane yields were 235, 387, and 365 mL-CH4/gVS for the phases with a co-digestion feed ratio in R2 of 15:85, 25:75, and 35:65 %VS Sram:PM, respectively. The methane yield of PM alone in R1 was, in the corresponding phases, on average 260, 388, and 446 mL-CH4/gVS. Accordingly, in the continuous AD process, the methane yield attributable to the halophyte Sram (back-calculated from the mixture yield in R2 and the parallel PM-only yield in R1) was highest at 386 mL-CH4/gVS at the co-digestion ratio of 25:75 %VS Sram:PM, and significantly lower at 15:85 %VS Sram:PM (100 mL-CH4/gVS) and at 35:65 %VS Sram:PM (214 mL-CH4/gVS). The co-digestion process showed no signs of inhibition at 2–4 g/L NH4–N, 3.5–4.5 g/L TN, and total VFA of 0.45–2.6 g/L (based on acetic, propionic, butyric, and valeric acid). This study demonstrates that a stable co-digestion process of S. ramosissima and pig manure can be achieved with a feed of 25 %VS Sram at an HRT of 20 d and an OLR of 2 gVS/L/d.
Keywords: anaerobic co-digestion, biomethane production, halophytes, pig manure, salicornia ramosissima
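The Sram-specific yields quoted for the continuous process follow from a VS-weighted mass balance between the two reactors, and this arithmetic can be checked in a few lines. The sketch assumes the yields combine linearly by VS fraction, Y_mix = f_Sram * Y_Sram + f_PM * Y_PM, which is consistent with the reported figures.

```python
# Back-calculating the Sram-specific methane yield from the reported
# co-digestion (R2) and PM-only (R1) yields, assuming a VS-weighted average.
phases = [  # (Sram VS fraction, R2 mixture yield, R1 PM-only yield), mL-CH4/gVS
    (0.15, 235, 260),
    (0.25, 387, 388),
    (0.35, 365, 446),
]
for f_sram, y_mix, y_pm in phases:
    y_sram = (y_mix - (1 - f_sram) * y_pm) / f_sram
    print(f"{f_sram:.0%} Sram: Y_Sram = {y_sram:.0f} mL-CH4/gVS")
# Prints roughly 93, 384, and 215, matching the reported 100, 386, and 214
# within rounding of the averaged yields.
```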
Procedia PDF Downloads 154