Search results for: discrete automation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1097

107 Variable Mapping: From Bibliometrics to Implications

Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek

Abstract:

Literature review is indispensable in research. One of its key techniques is bibliometric analysis, and one of the methods within bibliometric analysis is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations; it is also used in reviews of the marketing literature. With the development of technology, researchers and practitioners now rely on commercially available software for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications involves the implementation of a literature review, and it is useful in areas with a relatively high number of publications. Although this science mapping approach is well grounded in literature reviews, performing such reviews remains a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of five articles on customer ideation was chosen. Next, keyword mapping was performed on these articles in the VOSviewer science mapping software and compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both the research problem formulation stage and the content/thematic analysis. The results show the advantage of variable mapping in both respects. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, rather than keywords alone. Second, the analysis of relationships between variables enables the creation of a story that indicates the directions of those relationships. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of software enabling the automation of the variable mapping process on large data sets may be a breakthrough in the field of conducting literature research.

Keywords: bibliometrics, literature review, science mapping, variable mapping

Procedia PDF Downloads 90
106 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables was successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended to apply the additional multivariate process control and monitoring procedures separately for the major components and for the impurities. Principal component analysis may be utilized not only in controlling the content of major elements in process streams, but also for continuous monitoring of plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities.
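
As a sketch of the monitoring scheme described above, the following Python fragment builds a two-component PCA model on mean-centered, normalized data, derives control limits from an 80/20 split, and flags samples whose squared-score (Hotelling T²) or residual (Q) statistic exceeds the limits. The synthetic data, the 99th-percentile limits, and all variable names are illustrative assumptions, not the plant history data used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))             # stand-in for metal concentrations

n_train = int(0.8 * len(X))                # 80/20 split as in the abstract
X_train, X_test = X[:n_train], X[n_train:]

scaler = StandardScaler().fit(X_train)     # mean-centering and normalization
pca = PCA(n_components=2).fit(scaler.transform(X_train))

def monitor(X_new):
    Z = scaler.transform(X_new)
    T = pca.transform(Z)                                   # scores
    t2 = np.sum(T**2 / pca.explained_variance_, axis=1)    # Hotelling T^2
    residual = Z - pca.inverse_transform(T)
    q = np.sum(residual**2, axis=1)                        # Q (SPE) statistic
    return t2, q

t2_train, q_train = monitor(X_train)
# assumed control limits: 99th percentile of the training statistics
t2_lim, q_lim = np.percentile(t2_train, 99), np.percentile(q_train, 99)
t2, q = monitor(X_test)
alarms = (t2 > t2_lim) | (q > q_lim)       # early-warning flags
print(f"{alarms.sum()} abnormal samples flagged out of {len(X_test)}")
```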

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 284
105 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs

Authors: Kari Bjorn

Abstract:

Team learning has been studied and modeled as the double-loop model and its variations. Metacognition has also been suggested as a concept describing team learning as more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. Here, the team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are increasingly large for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Secondly, teaching multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted, and team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two parallel course units in engineering education: the first in Industrial Automation Technology and the second in Development of Medical Devices. The courses have separate student groups and are held on different campuses, and both run in parallel within an eight-week period. Each is taught by a group of four teachers with several years of teaching experience, though teaching individually. The team role stress scale survey is administered to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload, and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stages of the two teaching groups are different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help understand the maturity of functional, well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams, which deliver highly intangible learning outcomes, are sensitive to issues in job satisfaction and team conflicts. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.

Keywords: engineering education, stress, team role, team teaching

Procedia PDF Downloads 201
104 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial, and for a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation due to its impact on the solid-liquid contact angle: a low surface tension usually results in a low contact angle and, in turn, spreading of the condensed liquid on the surface. CO₂ has very low surface tension compared to water, although at temperatures and pressures relevant for CO₂ condensation it is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is nonetheless a completely new field of research. Therefore, knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these parameters. Its main parts are a pressure chamber, in which the condensation occurs, and a high-speed camera. The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis, and hence the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g., copper, aluminium, and stainless steel, can be analysed. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour and for accurate pressure measurement in the vapour; the temperature is measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards non-polar organic liquids are candidate surface structures for dropwise condensation of CO₂.

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 248
103 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 72
102 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The examination of both serum lipid fractions and the body's lipid composition is quite informative during the evaluation of obesity stages. Within this context, alterations in lipid parameters are commonly observed, and variations in the fat distribution of the body are also noteworthy. Total cholesterol (TC), triglycerides (TRG), low density lipoprotein-cholesterol (LDL-C), and high density lipoprotein-cholesterol (HDL-C) are considered the basic lipid fractions. Fat deposited in the trunk and the extremities may give a considerable amount of information and convey different messages in distinct health states. Ratios are also derived from the distinct fat distribution in these areas; trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are the most recently introduced. In this study, lipid fractions as well as TLFR and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbidly obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted from a population aged 6 to 18 years. Ages and sexes of the groups were matched. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University, and written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were obtained and recorded during the physical examination, and body mass index values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis and used to calculate TLFR and TAFR. Systolic (SBP) and diastolic blood pressures (DBP) were measured. Routine biochemical tests including TC, TRG, LDL-C, HDL-C, and insulin were performed. Data were evaluated using SPSS software; a p value smaller than 0.05 was accepted as statistically significant. There was no difference among the age values and gender ratios of the groups. No statistically significant difference was observed in terms of DBP, TLFR, or serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected in the insulin values of the OB and MO groups compared with the N-BMI group. There were bivariate correlations between LDL-C and TLFR (r=0.396; p=0.037) as well as TAFR (r=0.413; p=0.029) in the MO group. When adjusted for SBP and DBP, partial correlations were calculated as (r=0.421; p=0.032) and (r=0.438; p=0.025) for LDL-TLFR and LDL-TAFR, respectively. Much stronger partial correlations were obtained for the same pairs (r=0.475; p=0.019 and r=0.473; p=0.020, respectively) upon controlling for TRG and HDL-C. The much stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. It is concluded from these findings that LDL-C may be suggested as a discriminating parameter between OB and MO children.
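
The bivariate and partial correlations reported above can be reproduced in principle with the following Python sketch, which regresses the covariates out of both variables and correlates the residuals. The synthetic data, group size, and variable names are placeholders for the study's measured LDL-C, fat-ratio, and blood-pressure values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 28                                         # illustrative MO group size
sbp, dbp = rng.normal(120, 10, n), rng.normal(75, 8, n)
ldl = rng.normal(100, 20, n)
tlfr = 0.01 * ldl + rng.normal(1.1, 0.15, n)   # trunk-to-leg fat ratio (toy)

def partial_corr(x, y, covars):
    """Correlate the residuals of x and y after regressing out covars."""
    C = np.column_stack([np.ones(len(x))] + list(covars))
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

r, p = stats.pearsonr(ldl, tlfr)                    # bivariate correlation
r_adj, p_adj = partial_corr(ldl, tlfr, [sbp, dbp])  # controlling for SBP/DBP
print(f"bivariate r={r:.3f} (p={p:.3f}); partial r={r_adj:.3f} (p={p_adj:.3f})")
```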

Keywords: children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio

Procedia PDF Downloads 89
101 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study

Authors: Nejoud Alostad, Anup Bora, Prashant Dhote

Abstract:

Kuwait Oil Company (KOC) has been producing from its major reservoirs, which are well defined, highly productive, and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study of one of the satellite complex fields discovered in the early 1960s. Following acquisition of new 3D seismic data in 1998 and re-processing work in 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three wells showing hydrocarbons in two formations. The average oil density is around 30° API (American Petroleum Institute), and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults. Structurally, the area consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in the Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan sections varies with the change of facies over the area. Predicting reservoir facies and quality from sparse well data is therefore a major challenge in delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petrophysical, reservoir, and production engineering data has been used. Porosity and water saturation models were prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs. Subsequently, boundary conditions were defined for reservoir and non-reservoir facies by integrating facies, porosity, and water saturation. Based on the detailed analyses of volumetric parameters, potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis on probabilistic models of reservoir horizons, petrophysical properties, and oil-water contacts, and their effect on reserves, clearly shows some alteration in the reservoir geometry; all these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainties and risks of this particular prospect, and the company is planning to develop the area by drilling new wells.
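
The probabilistic workflow rests on the standard volumetric relation STOIIP = GRV × N/G × φ × (1 − Sw) / Bo sampled over input distributions. A minimal Monte Carlo sketch in Python follows; all distributions are illustrative assumptions (only the ~23% porosity and ~26% water saturation echo the abstract), not the field's actual inputs.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
N = 100_000

grv  = rng.triangular(40e6, 60e6, 90e6, N)   # gross rock volume, m^3 (assumed)
ntg  = rng.uniform(0.5, 0.8, N)              # net-to-gross (assumed)
poro = rng.normal(0.23, 0.02, N)             # porosity, ~23% per the abstract
sw   = rng.normal(0.26, 0.03, N)             # water saturation, ~26%
bo   = rng.uniform(1.1, 1.3, N)              # formation volume factor (assumed)

stoiip = grv * ntg * poro * (1.0 - sw) / bo  # stock-tank oil initially in place

p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"STOIIP P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} sm3")

# crude sensitivity ranking: rank correlation of each input with STOIIP
for name, v in [("GRV", grv), ("NTG", ntg), ("porosity", poro),
                ("Sw", sw), ("Bo", bo)]:
    rho, _ = spearmanr(v, stoiip)
    print(f"{name:9s} rank correlation: {rho:+.2f}")
```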

Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte Carlo simulation

Procedia PDF Downloads 174
100 The Role of Learning in Stimulation Policies to Increase Participation in Lifelong Development: A Government Policy Analysis

Authors: Björn de Kruijf, Arjen Edzes, Sietske Waslander

Abstract:

In an ever more quickly changing society, lifelong development is seen by politicians and policymakers as a solution to labor market problems. In this paper, we investigate how policy instruments are used to increase participation in lifelong development and on which behavioral principles that policy is based. Digitization, automation, and an aging population change society and the labor market accordingly; skills that were once most sought after in the workforce can become abundant. For people to remain relevant in the working population, they need to keep acquiring new skills that are useful in the current labor market. Many reports have been written that focus on the role of lifelong development in this changing society and on how lifelong development can help people adapt and stay relevant. Inspired by these reports, governments have implemented a broad range of policies to support participation in lifelong development. The question we ask ourselves is how government policies promote participation in lifelong development. This stems from a complex interplay of policy instruments and learning. Regulation and economic and soft instruments can be combined to promote lifelong development, and the different types of education further complicate policies on lifelong development. The literature suggests that different stages in people's lives might warrant different methods of learning, and governments could anticipate this in their policies. In order to influence people's behavior, the government can tap into a broad range of sociological, psychological, and (behavioral) economic principles. The traditional economic assumption that behavior is rational is known to be only partially true, and the government can use many biases in human behavior to stimulate participation in lifelong development. In this paper, we also try to find out which of these biases, if any, the government taps into to promote participation. The goal of this paper is to analyze government policies intended to promote participation in lifelong development. To do this, we develop a framework to analyze policies on lifelong development, specifically incorporating the role of learning and the behavioral principles underlying policy instruments. We apply this framework to the case of the Netherlands, where we examine a set of policy documents. We single out the policies the government has put in place and how they are vertically and horizontally related. Afterward, we apply the framework and classify the individual policies by policy instrument and by type of learning. We find that the Dutch government focuses on formal and non-formal learning in its policy instruments, whereas the literature suggests that learning at a later age is mainly done informally, through experience.

Keywords: learning, lifelong development, policy analysis, policy instruments

Procedia PDF Downloads 59
99 Localization of Radioactive Sources with a Mobile Radiation Detection System Using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, integration of the detection system into drones, particularly multirotors, together with its affordability, enables the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
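
As an illustration of the maximum-likelihood estimation step, the sketch below fits a source position and strength to Poisson-distributed counts collected along a detector path, using a simple inverse-square response model. The geometry, source strength, and background level are assumptions chosen for the example; the study additionally derives confidence intervals from the Fisher information, which this fragment omits.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
xs = np.linspace(0.0, 20.0, 60)            # detector positions along path, m
src, strength, bkg = (12.0, 3.0), 5e3, 20.0   # assumed "true" scenario

def expected(params, xs):
    """Mean count model: background plus inverse-square source term."""
    x0, y0, s = params
    r2 = (xs - x0) ** 2 + y0 ** 2
    return bkg + s / r2

counts = rng.poisson(expected((*src, strength), xs))   # simulated survey

def nll(params):                           # negative Poisson log-likelihood
    mu = np.clip(expected(params, xs), 1e-9, None)
    return np.sum(mu - counts * np.log(mu))

fit = minimize(nll, x0=[10.0, 1.0, 1e3], method="Nelder-Mead")
print("estimated (x, y, strength):", np.round(fit.x, 2))
```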

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 80
98 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation

Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen

Abstract:

Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. The literature on percussive drilling reports the existence of an optimum drilling state (sweet spot) in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the sweet spot, but a universal explanation or consensus does not exist yet. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data was collected while two boreholes were being excavated; an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire and drilling occurs without percussion, with the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.
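
The per-impact analysis can be illustrated with a toy Python sketch: impacts are detected as peaks in a synthetic high-rate accelerometer-like signal, giving the impact frequency, and the penetration per impact follows from differencing the depth record at those instants. All signals, rates, and thresholds below are assumptions; the actual channels and processing used in the field study are not specified here.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10_000                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
f_impact = 25.0                               # assumed hammer impact rate, Hz
shock = np.abs(np.sin(np.pi * f_impact * t)) ** 40    # synthetic impact bursts
accel = shock + 0.05 * np.random.default_rng(3).normal(size=t.size)
depth = 0.4e-3 * np.floor(f_impact * t)       # toy depth: 0.4 mm per impact

# detect impacts: one peak per burst, at least half an impact period apart
peaks, _ = find_peaks(accel, height=0.5, distance=int(0.5 * fs / f_impact))
print(f"impact frequency ~ {len(peaks) / t[-1]:.1f} Hz")

per_impact = np.diff(depth[peaks])            # penetration between impacts
print(f"mean penetration per impact ~ {per_impact.mean() * 1e3:.2f} mm")
```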

Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling

Procedia PDF Downloads 62
97 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computed tomography (CT) images. The images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement, and the effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength are investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices; a total of 49 slices are obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can be scanned and later compression-tested in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified from the values between 0 and 255. A pixel matrix is created for modeling mortar, aggregates, and ITZ, with pixels normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1 and 3 identify the boundary between aggregates and mortar. In the next step, triangular or quadrilateral elements for plane stress or plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. The simulations evaluate the compressive strengths of the 49 slices of the cube, with each model meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on the load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
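
A minimal Python sketch of this slice-segmentation step is given below: a colour slice is converted to greyscale and the pixels are binned onto the 0-9 relative-strength scale described above. The synthetic slice and the grey-value cut-offs are assumptions; the study derives them from the actual CT grey values.

```python
import numpy as np

rng = np.random.default_rng(5)
slice_rgb = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in slice

grey = slice_rgb.mean(axis=2)                 # RGB -> greyscale (0-255)

# Map grey values onto the 0-9 relative-strength scale used in the paper:
# 0 = voids, 1-3 = aggregate/mortar boundary, 4-6 = mortar, 7-9 = aggregates.
bins = np.array([30, 80, 105, 130, 155, 180, 200, 220, 240])  # assumed cut-offs
phase = np.digitize(grey, bins)               # integer labels 0..9 per pixel

voids = (phase == 0).mean()
mortar = ((phase >= 4) & (phase <= 6)).mean()
aggregate = (phase >= 7).mean()
print(f"area fractions: voids={voids:.2%}, mortar={mortar:.2%}, "
      f"aggregates={aggregate:.2%}")
```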

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 221
96 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited

Authors: Kuhorfah Emmanuel Mawuli

Abstract:

Generally, for firms to enhance the operational efficiency of logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in the pricing decisions of firms, which suggests that cost efficiency in logistics can go a long way toward improving margins. Warehousing, a key part of logistics operations, has the prospect of influencing both operational efficiency in logistics management and customer value, but this potential has often not been recognized. There is a paucity of research that evaluates the efficiency of warehouses; indeed, limited research has examined potential barriers to effective warehousing management. Due to this paucity of research, there is limited knowledge of how to address the obstacles associated with warehousing management. For warehousing management to become profitable, there is a need to integrate, balance, and manage the economic inputs and outputs of the entire warehouse operation, something that many firms tend to ignore. The management of warehousing is not solely related to storage functions. Instead, effective warehousing management requires such practices as the maximum possible mechanization and automation of operations, optimal use of the space and capacity of storage facilities, organization through a "continuous flow" of goods, a planned system of storage operations, and safety of goods. For example, the utilization of warehouse floor space deserves attention, as it is a good way to evaluate the storage operation and the number of items picked per hour. In the setting of Mantrac Ghana, not much knowledge regarding the management of warehouses exists, and the researcher has personally observed many gaps in the management of the warehouse facilities in the case organization. It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as the mode of data collection. Respondents in the study mainly comprised warehouse facility managers in the studied company; a total of 10 participants were selected using a purposive sampling strategy. Results emanating from the study demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management comprise poor layout, poor picking optimization, labour costs, and inaccurate orders. Policy implications of the study findings are finally outlined.

Keywords: assessing, warehousing, facility, management

Procedia PDF Downloads 37
95 Internet of Things-Based Smart Irrigation System

Authors: Ahmed Abdulfatah Yusuf, Collins Oduor Ondiek

Abstract:

The automation of farming activities can have a transformational impact on the agricultural sector, especially through emerging technologies such as the Internet of Things (IoT). The system presented here uses water level sensors and soil moisture sensors that measure the water content of the soil; the values generated by the sensors enable the system to apply an appropriate quantity of water, which avoids over- or under-irrigation. Due to the increase in the world's population, there is a need to increase food production. With this demand in place, it is difficult to increase crop yield using traditional manual approaches, which lead to the wastage of water and thus affect crop production. Food insecurity has become a scourge greatly affecting developing countries, and agriculture, an essential part of human life, tends to be the mainstay of the economy in most developing nations. Without the provision of adequate food supplies, the population of those living in poverty is likely to multiply. The project's main objective is to design and develop an IoT microcontroller-based smart irrigation system. In addition, the specific research objectives are to identify the challenges of traditional irrigation approaches and to determine the benefits of IoT-based smart irrigation systems. The system includes an Arduino, a website, and a database that work together in collecting and storing the data. The system is designed to pave the way toward attaining Sustainable Development Goal 1 (SDG 1), which aims to end extreme poverty in all forms by 2030. The research design adopted for this project is descriptive. Data was gathered through online questionnaires that used both quantitative and qualitative questions in order to triangulate the data. Out of the 32 questionnaires sent, there were 32 responses, a 100% response rate. In terms of sampling, the target group of this project is urban farmers, who account for about 25% of the population of Nairobi. From the findings of the research, it is evident that there is a need to move away from manual irrigation approaches, with their high wastage of water, to smart irrigation systems, which propose a better way of conserving water while maintaining the quality and moisture of the soil. The research also found that urban farmers are willing to adopt this system to better their farming practices. The system can be improved in the future by incorporating other features and deploying it over a larger geographical area.
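
The control logic the abstract outlines can be sketched as a simple sense-decide-actuate loop. The thresholds, the hysteresis band, and the simulated sensor readings below are illustrative assumptions; the deployed system reads real sensors on an Arduino and logs to a database.

```python
import random
import time

MOISTURE_LOW, MOISTURE_HIGH = 30.0, 55.0   # assumed % volumetric moisture band
TANK_MIN = 10.0                            # assumed minimum tank level, %

def read_soil_moisture():                  # simulated soil moisture sensor
    return random.uniform(20.0, 60.0)

def read_water_level():                    # simulated tank level sensor
    return random.uniform(5.0, 100.0)

def control_step(moisture, level, pump_on):
    """Hysteresis keeps the pump from chattering around one setpoint,
    avoiding both over- and under-irrigation."""
    if level <= TANK_MIN:
        return False                       # never run the pump dry
    if moisture < MOISTURE_LOW:
        return True
    if moisture >= MOISTURE_HIGH:
        return False
    return pump_on                         # inside the band: keep last state

pump_on = False
for _ in range(5):                         # a few iterations of the loop
    moisture, level = read_soil_moisture(), read_water_level()
    pump_on = control_step(moisture, level, pump_on)
    print(f"moisture={moisture:.1f}%  level={level:.1f}%  "
          f"pump={'ON' if pump_on else 'OFF'}")
    time.sleep(0.1)                        # the real system samples far slower
```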

Keywords: crop production, food security, smart irrigation system, sustainable development goal

Procedia PDF Downloads 125
94 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress

Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang

Abstract:

The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as the gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing; however, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rise/fall time of 100 ns and a duty ratio of 50%; the effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. As VGS switches from -5 V to 0 V with VDS = 0 V, devices are under the negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced within the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively under stress, and the variation in VTH increases with time, unlike the case of VDS = 0 V. Two mechanisms exist under this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer at the gate region is the major contributor to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.

Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology

Procedia PDF Downloads 65
93 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique

Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham

Abstract:

Detailed geomechanical characterization of rocks and its possible implications for flow properties is an important aspect of the reservoir characterization workflow. In order to better understand the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing different cementation states: a well-consolidated and a weakly consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, including registration of the deformed specimen images with the reference pristine dry rock image. A Digital Image Correlation (DIC) technique based on the intensity of the registered 3D subsets, together with particle tracking, is utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock after continuous compression, however, reveals signs of a shear band pattern, suggesting that for friable sandstones at small scales the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e., particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) at any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the intensity correlation of the reference and secondary images. Such an approach has previously been applied only to unconsolidated granular systems under pressure; here it is applied to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
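
The intensity-correlation principle behind the continuum DIC step can be reduced to a 2D toy example: a subset of the reference image is located in the deformed image by the peak of their cross-correlation. The imposed rigid shift below stands in for real deformation, and the study applies the idea to registered 3D micro-CT volumes rather than 2D arrays.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
ref = rng.normal(size=(256, 256))          # reference (pristine) image
defd = np.roll(ref, (4, -3), axis=(0, 1))  # "deformed" image: rigid shift (4, -3)

sub = ref[100:132, 100:132]                # 32x32 subset at a point of interest
win = defd[84:164, 84:164]                 # 80x80 search window, deformed image

# zero-mean cross-correlation via FFT; the peak locates the matching subset
corr = fftconvolve(win - win.mean(), (sub - sub.mean())[::-1, ::-1], mode="valid")
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# window origin (84,84) + peak offset - subset origin (100,100) = displacement
print("recovered displacement:", (dy + 84 - 100, dx + 84 - 100))   # -> (4, -3)
```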

Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT

Procedia PDF Downloads 161
92 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the industrial supply chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, which leads to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of COVID-19, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that include pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, together with the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we carried out a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results, and we suggest helpful managerial insights for courier companies.
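
To make the cost structure concrete, the sketch below solves only a facility-location core of such a model with PuLP: binary opening and assignment variables, an objective combining opening and transport costs, and single-assignment constraints. All data are invented, and the full LRP-PFHD adds vehicle routing, capacity, and refrigerated-demand constraints that this toy omits.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

facilities = ["PP1", "PP2", "L1"]           # candidate pick-up points / lockers
customers = ["c1", "c2", "c3", "c4"]
open_cost = {"PP1": 50, "PP2": 60, "L1": 45}
ship_cost = {(i, j): 1 + (fi * 3 + cj * 2) % 7   # deterministic stand-in costs
             for fi, i in enumerate(facilities)
             for cj, j in enumerate(customers)}

m = LpProblem("simplified_LRP_PFHD", LpMinimize)
y = {i: LpVariable(f"open_{i}", cat=LpBinary) for i in facilities}
x = {(i, j): LpVariable(f"assign_{i}_{j}", cat=LpBinary)
     for i in facilities for j in customers}

# objective: facility opening cost + (variable) transport cost
m += lpSum(open_cost[i] * y[i] for i in facilities) \
   + lpSum(ship_cost[i, j] * x[i, j] for i in facilities for j in customers)

for j in customers:                          # every customer served exactly once
    m += lpSum(x[i, j] for i in facilities) == 1
for i in facilities:
    for j in customers:                      # only open facilities may serve
        m += x[i, j] <= y[i]

m.solve()
print("open:", [i for i in facilities if value(y[i]) == 1],
      "| total cost:", value(m.objective))
```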

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 42
91 Determining Factors of Suspended Glass Systems with Pre-Stress Cable Truss

Authors: Cemil Atakara, Hüseyin Eryaman

Abstract:

The use of glass as a building envelope increased throughout the twentieth century. In pursuit of greater transparency and dematerialization, new glass facade types that depend on the point-fixed glazing system (PFGS) have emerged in the past two decades. The aim of this study is to analyze the PFGS systems used on glass curtain walls according to their types, degrees, and architectural and structural effects. This system is desirable because it enhances the transparency of the facade and minimizes the frame and profile components. PFGS led to new structural elements that use cables, rods, and trusses in the design of glass building facades; this structural element, called the suspended glass system with pre-stressed cable truss (SGSPCT), was used for the first time in 1980 in the Serres building. Twenty glass buildings designed with different systems were analyzed during this study. After these analyses, five selected SGSPCT buildings were analyzed in depth, and one skeletal frame building selected from Lefkosa was redesigned according to the analysis results. The selected buildings cover various cable-truss system typologies and degrees. The methodology of this study combines the building analysis method with a literature survey drawing on books, articles, magazines, drawings, internet sources, and the applied connection details of the glass buildings. The five selected glass buildings and the case building have been analyzed in detail through their architectural drawings, photographs, and details. A gridshell structure can be compared with a shell structure; it consists of discrete members connecting nodal points. As these nodal points lie on the surface of an imaginary shell, the two shapes function almost identically. The difference between shell and gridshell structures lies in the fact that, due to their free form and thus the presence of bending forces, gridshells are required to resist loading through their cross-section. The study consists of four chapters, including the introduction. General information on the SGSPCT and the glazing system is given in the first chapter. Structural features, typologies, the transparency principle, and analytical information on the systems of the selected buildings are explained in the second chapter. The detailed analyses of the case building, based on schematic drawings of its plans and sections, are presented in the third chapter; SGSPCT is then discussed with respect to the case building and the selected buildings, and SGSPCT systems are compared with other systems in terms of their advantages and disadvantages. Conclusions on the advantages of cable-truss systems and SGSPCT for glass buildings are drawn in the last chapter.

Keywords: cable truss, glass, grid shell, transparency

Procedia PDF Downloads 385
90 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors in the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplary cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. In more detail, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly with reference to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to establish the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. In more detail, starting from the adopted schedules, which were created from questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand; then, the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy savings and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
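
The order-of-magnitude question can be illustrated with a toy Python experiment: perturb an assumed deterministic occupancy profile and observe the spread this induces in a simple internal-gains total. The profile, occupant count, and per-person gain are illustrative assumptions, not values from the case study.

```python
import numpy as np

rng = np.random.default_rng(11)
# assumed hourly occupancy fractions for a 24 h weekday profile
base = np.array([0, 0, 0, 0, 0, 0, 0, .2, .8, 1, 1, 1,
                 .6, 1, 1, 1, .8, .4, .1, 0, 0, 0, 0, 0])
people, gain_w = 120, 120.0          # occupants at full load, W per person

daily_kwh = []
for _ in range(1000):                # stochastic variants of the schedule
    s = np.clip(base + rng.normal(0, 0.1, 24), 0, 1)
    daily_kwh.append((s * people * gain_w).sum() / 1000.0)

ref = (base * people * gain_w).sum() / 1000.0
dev = (np.array(daily_kwh) - ref) / ref * 100
print(f"deterministic gains: {ref:.1f} kWh/day; "
      f"stochastic spread: {dev.min():.1f}% to {dev.max():.1f}%")
```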

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 117
89 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definitions of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies their interest: the debate raises questions that are at the heart of theories of language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automated translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one pieces together not only the exact melody, rhythm, and harmonies of that song, as if known in advance, but also the exact spoken form of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. As experimentation confirming the theorization, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect the data extracted from the author's mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on a computer and requests from the machine a melody in blues, jazz, world music, variety, etc. The software runs, giving the option to choose harmonies, and the user then selects a melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 52
88 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for the electrical power converters in a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The output power of this particular OWC's unidirectional air turbine-generator arrives in large discrete power pulses. Therefore, the system requires power conditioning prior to grid integration. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage system is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using the finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The bidirectional dc-dc converter controller holds the dc-link voltage at its reference value. The software modeling of the OWC system and FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at the AMC Electronics laboratory. The designed FCS-MPC controllers for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad Evaluation Boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). The power module consists of a 3-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The FCS-MPC is validated by comparing these experimental results to those obtained from MATLAB/Simulink under similar scenarios. The results show that under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
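
For readers unfamiliar with FCS-MPC, the sketch below shows its core loop for one two-level converter current controller: enumerate the finite set of switching states, predict the next-step current with a discrete-time load model, and apply the state minimising a cost function. The forward-Euler model and all parameter values are illustrative assumptions, not the firmware of the AMC prototype.

import numpy as np

Vdc, R, L, Ts = 400.0, 0.5, 10e-3, 50e-6  # dc-link [V], load R [ohm], L [H], step [s]

# The 8 switching states of a two-level three-phase bridge.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def voltage_vector(s):
    # Map a switching state to its voltage vector in the alpha-beta frame.
    a, b, c = s
    return (2 / 3) * Vdc * (a + b * np.exp(2j * np.pi / 3) + c * np.exp(4j * np.pi / 3))

def fcs_mpc_step(i_meas, i_ref, e_back=0.0):
    """Return the switching state minimising |i_ref - i_pred| at step k+1."""
    best_state, best_cost = None, np.inf
    for s in states:
        v = voltage_vector(s)
        i_pred = i_meas + (Ts / L) * (v - e_back - R * i_meas)  # forward Euler
        cost = abs(i_ref - i_pred)
        if cost < best_cost:
            best_state, best_cost = s, cost
    return best_state

print(fcs_mpc_step(i_meas=2 + 1j, i_ref=5 + 0j))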

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

Procedia PDF Downloads 92
87 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms

Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan

Abstract:

As the myriad of applications of which microfluidic devices are capable continues to grow, the development of reliable fluidic actuation has remained fundamental to the success of these microfluidic platforms. A number of approaches can be taken to integrate liquid actuation on microfluidic platforms, and these usually fall into two primary categories: active microvalves and passive microvalves. Active microvalves require a physical parameter change through external, or separate, interaction for actuation to occur. Passive microvalves require no external interaction for actuation, relying instead on the valve's intrinsic physical parameters, which can be overcome through interaction with the sample. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can greatly enhance system reliability and performance, with both novel active and passive microvalves demonstrated. Covered within this scope are two alternative and novel microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic-dissolvable film active microvalve (PAM) strategy and a spray-on, sol-gel-based hydrophobic passive microvalve (HPM) approach. Both the PAM and the HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure-sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable film methods by introducing a more reliable and predictable liquid delivery mechanism to the microvalve site, thus significantly reducing premature activation. This approach has also shown excellent synchronicity when performed in a multiplexed form. The HPM method utilises a new spray-on, low-curing-temperature (70°C) sol-gel material. The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of microvalve burst frequencies. It is hoped that these microvalving solutions, which can be easily added to centrifugal microfluidic platforms, will significantly improve automation reliability.
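
The burst-frequency assessment mentioned above is commonly modelled by balancing centrifugal pressure against the capillary pressure barrier of a hydrophobic channel; the sketch below uses this textbook balance with a rectangular-channel Laplace pressure. The contact angle, channel dimensions, and radial positions are illustrative assumptions, not measured properties of the reported sol-gel coating.

import math

def burst_rpm(theta_deg, sigma=0.072, rho=1000.0,
              w=200e-6, h=150e-6, r_mean=30e-3, delta_r=5e-3):
    """theta_deg: contact angle on the coating (>90 deg = hydrophobic barrier);
    w, h: channel width/height [m]; r_mean, delta_r: mean radius and liquid
    column length [m]. Burst when rho*omega^2*r_mean*delta_r exceeds the
    capillary pressure of the rectangular channel."""
    dp_cap = -2.0 * sigma * math.cos(math.radians(theta_deg)) * (1 / w + 1 / h)
    omega = math.sqrt(dp_cap / (rho * r_mean * delta_r))  # rad/s at burst
    return omega * 60.0 / (2.0 * math.pi)

print(f"{burst_rpm(150.0):.0f} rpm")  # wider channels burst at lower spin speed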

Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves

Procedia PDF Downloads 168
86 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow

Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather

Abstract:

The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM), which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, oil and gas suction and pressure lines. The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm, 250, 1000 and 2159 kg m⁻³, 50, 500, and 5000 mJ m⁻², and 7.84 × 10⁻⁶, 2.8 × 10⁻⁵, and 1 × 10⁻⁴, respectively; such particle properties are associated with particles found in soil, as well as metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence where their agglomeration is aided both by the high levels of turbulence and the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that the rate of agglomeration is strongly influenced for high surface energy particles by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.
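
For intuition on when a collision leads to agglomeration, a common energy-balance sticking rule compares the pair's relative kinetic energy with the work of adhesion; the sketch below implements such a rule. This is a generic DEM-style criterion for illustration only, not the specific contact model used in EDEM, and the adhesion-area estimate is a deliberate simplification.

import numpy as np

def sticks(d_m, rho, v_rel, gamma):
    """d_m: particle diameter [m]; rho: density [kg/m^3];
    v_rel: relative impact speed [m/s]; gamma: surface energy [J/m^2].
    Assume two equal spheres agglomerate when the effective relative
    kinetic energy cannot overcome the adhesion work."""
    m = rho * np.pi * d_m**3 / 6.0
    m_eff = m / 2.0                           # reduced mass of equal spheres
    e_kin = 0.5 * m_eff * v_rel**2
    w_adh = np.pi * gamma * (d_m / 2.0)**2    # adhesion ~ gamma * contact-scale area
    return e_kin <= w_adh

# One of the paper's parameter combinations: 45.6 µm, 250 kg/m^3, 50 mJ/m^2.
print(sticks(d_m=45.6e-6, rho=250.0, v_rel=0.1, gamma=50e-3))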

Keywords: agglomeration, channel flow, DEM, LES, turbulence

Procedia PDF Downloads 294
85 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education

Authors: Ray Bryant

Abstract:

3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that demonstrates the use and benefits of artificial intelligence in the fight against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma and teaches the steps to take to report human trafficking, as well as how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code Artificial Intelligence (AI) automation, to create the research study, utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of the prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story. If a story, taken directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied. The system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results on the 30,000 stories gathered make it overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrate better knowledge of how to help prevent the crime, and express an intention to change how they approach what they do daily. In addition, approximately 30% of the stories involved comments by educators expressing that they wished they had had this knowledge sooner, as they could think of many students they would have been able to help. Objectives of research: to analyze and accurately categorize more than 30,000 data points of participant feedback, using AI and natural language processing, in order to evaluate the success of a human trafficking prevention program. Methodologies used: in conjunction with our strategic partner, Levity, we created our own NLP analysis engine specific to our problem. Contributions to research: the intersection of AI and human rights, and how to utilize technology to combat human trafficking.
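
Since the study's no-code Levity pipeline is proprietary, the sketch below shows how the same multi-label outcome tagging with per-label confidence could be approximated with an off-the-shelf zero-shot classifier; the model choice and the confidence threshold are assumptions, not the study's configuration.

from transformers import pipeline

LABELS = ["Increased Awareness", "Increased Knowledge", "Intended Behavior Change"]

# Generic zero-shot classifier standing in for the proprietary NLP engine.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def label_story(story, threshold=0.80):
    """Return every desired-outcome label whose confidence clears the threshold."""
    result = classifier(story, candidate_labels=LABELS, multi_label=True)
    return [(label, score)
            for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

print(label_story("After the training I finally recognised the warning signs "
                  "and knew exactly how to report my concern."))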

Keywords: AI, technology, human trafficking, prevention

Procedia PDF Downloads 38
84 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects

Authors: Muhammad Khairi bin Sulaiman

Abstract:

The utilization of Building Information Modelling (BIM) in the construction industry has provided an opportunity for designers in the Architecture, Engineering and Construction (AEC) industry to move from conventional manual drafting to a workflow that creates alternative designs quickly and produces more accurate, reliable and consistent outputs. With BIM software, designers can create digital content that manipulates data through BIM's parametric model; more design alternatives can be generated quickly, and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. In this context, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive, modular construction and components. However, MC design involves a complex process of rules and dimensions. Therefore, a tool is needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With this tool, the acceptance and effective application of MC design in practice should improve. Consequently, this study analyses and explores the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions. The parametric modeling capabilities of BIM will also act as a visual tool that further enhances the automation of the 3-dimensional space planning modeling process. (Method) The study first analyses and explores the parametric modeling capabilities of rule-based BIM objects, eventually customizing a reference grid to the rules and dimensioning of MC. This approach further enhances the architect's overall design process and enables architects to automate complex modeling, which was nearly impossible before. A prototype using a residential quarter will be modeled. A set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space planning configurations. The tool is expected to expedite the design process and encourage the use of MC design in the construction industry.
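
The core modular-coordination rule such a tool would enforce can be reduced to snapping dimensions onto the basic module and its multimodules. A minimal sketch follows, assuming the conventional basic module M = 100 mm and a 3M planning grid; the function names are illustrative, not an actual BIM API.

def snap_to_module(value_mm: float, module_mm: int = 100) -> int:
    """Round a dimension to the nearest multiple of the basic module M (100 mm)."""
    return round(value_mm / module_mm) * module_mm

def modular_room(width_mm: float, depth_mm: float, multimodule_mm: int = 300):
    """Coerce a room footprint onto a 3M planning grid (3M = 300 mm)."""
    return (snap_to_module(width_mm, multimodule_mm),
            snap_to_module(depth_mm, multimodule_mm))

print(modular_room(3420.0, 4185.0))  # -> (3300, 4200)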

Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning

Procedia PDF Downloads 61
83 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: the standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human calculation errors are minimized when procedures are automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
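
A hedged sketch of the two automated calculation steps follows: standardising the EDTA solution against a zinc reference, and computing the assay as a percent of label claim from the 1:1 Zn-EDTA complexometric stoichiometry, plus a replicate-validity check. The exact USP equivalence factors and acceptance limits live in the validated spreadsheets; the molar masses, the per-tablet scaling, and the RSD limit below are illustrative assumptions.

MW_ZNSO4_7H2O = 287.56  # g/mol, zinc sulphate heptahydrate

def edta_molarity(zn_mass_g, titre_ml, zn_mw=65.38):
    """Standardise EDTA against a zinc reference: M = n(Zn) / V(EDTA)."""
    return (zn_mass_g / zn_mw) / (titre_ml / 1000.0)

def assay_percent(titre_ml, m_edta, sample_mass_g, label_mg_per_tab, avg_tab_mass_g):
    """Complexometric assay: 1 mol EDTA binds 1 mol Zn2+; scale the ZnSO4 found
    in the weighed powder sample up to one average tablet."""
    znso4_mg = (titre_ml / 1000.0) * m_edta * MW_ZNSO4_7H2O * 1000.0
    mg_per_tablet = znso4_mg * avg_tab_mass_g / sample_mass_g
    return 100.0 * mg_per_tablet / label_mg_per_tab

def replicates_valid(titres_ml, max_rsd_pct=2.0):
    """Automated validity check on replicate titres (RSD limit is assumed)."""
    mean = sum(titres_ml) / len(titres_ml)
    sd = (sum((t - mean) ** 2 for t in titres_ml) / (len(titres_ml) - 1)) ** 0.5
    return 100.0 * sd / mean <= max_rsd_pct

m = edta_molarity(zn_mass_g=0.6538, titre_ml=100.0)   # -> 0.1 M
print(m, replicates_valid([10.05, 10.10, 10.08]))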

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 151
82 Petrology, Geochemistry and Formation Conditions of Metaophiolites of the Loki Crystalline Massif (the Caucasus)

Authors: Irakli Gamkrelidze, David Shengelia, Tamara Tsutsunava, Giorgi Chichinadze, Giorgi Beridze, Ketevan Tedliashvili, Tamara Tsamalashvili

Abstract:

The Loki crystalline massif crops out in the Caucasus region and in geological retrospective represents the northern marginal part of the Baiburt-Sevanian terrane (island arc), bordering the Paleotethys oceanic basin to the north. The pre-Alpine basement of the massif is built up of a Lower-Middle Paleozoic metamorphic complex (metasedimentary and metabasite rocks), Upper Devonian quartz-diorites and Late Variscan granites. Earlier, the metamorphic complex was considered an indivisible set including suites with different degrees of metamorphism. Systematic geologic, petrologic and geochemical investigations of the massif's rocks suggest a different conception of the composition, structure and formation conditions of the massif. In particular, there are two main rock types in the Loki massif: the oldest autochthonous series of gneissic quartz-diorites and the granites cutting them. The massif is flanked on its western side by a volcano-sedimentary sequence, metamorphosed to low-T facies. Petrologic, metamorphic and structural differences in this sequence prove the existence of a number of discrete units (overthrust sheets). One of them, the metabasic sheet, represents a fragment of an ophiolite complex. It comprises transitional types between the second and third layers of the paleo-oceanic crust: the upper non-cumulate part of the third-layer gabbro component, followed by the lowest part of the parallel diabase dykes of the second layer. The ophiolites are represented by metagabbros, metagabbro-diabases, metadiabases and amphibolite schists. Based on the content of major components and trace elements in the metabasites, it is established that their protolith belongs to the petrochemical type of the tholeiitic basalt series. The parental magma of the metaophiolites is of E-MORB composition and, by petrochemical parameters, is very close to the composition of intraplate basalts. The dykes of hypabyssal leucocratic silicic and intermediate magmatic rocks associated with the metaophiolite sheet form a separate complex. They are granitoids with an extremely low CaO content and quartz-diorite porphyries. According to various petrochemical parameters, these rocks have mixed characteristics. Their formation took place in spreading conditions or in areas of plume manifestation, most likely of island-arc type. The degree of metamorphism of the metaophiolites corresponds to a very low grade of the greenschist facies. The rocks of the metaophiolite complex were obducted from the Paleotethys Ocean. Geological and paleomagnetic data suggest that the primary location of this ocean was to the north of the Loki crystalline massif.

Keywords: the Caucasus, crystalline massif, ophiolites, tectonic sheet

Procedia PDF Downloads 256
81 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation

Authors: Panagiotis Svarnas, Polykarpos Papadopoulos

Abstract:

Atmospheric-pressure cold plasmas continue to gain increasing interest for various applications due to their unique properties, like cost-efficient production, high chemical reactivity, low gas temperature, adaptability, etc. Numerous designs have been proposed for the production of these plasmas in terms of electrode configuration, driving voltage waveform and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of the designs are based on dielectric-barrier discharges (DBDs) in either filamentary or glow regimes. A special category of DBD-based atmospheric-pressure cold plasmas refers to the so-called plasma jets, where a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow out into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves reminiscent in many ways of streamer propagation, they exhibit distinct characteristics that are better mirrored in the terms 'guided streamers' or 'plasma bullets'. These 'bullets' travel with supersonic velocities both inside the dielectric barrier and in the channel formed by the noble gas as it penetrates into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge and the residual ionic charge left behind after the bullet propagation in the modification of the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner. The plasma bullets are here produced by a custom device having a quartz tube as a dielectric barrier and two external ring-type electrodes driven by sinusoidal high voltage at 10 kHz. Helium gas is fed to the tube, and schlieren photography is employed for mapping the flow field downstream of the tube orifice. The mixture mass conservation, momentum conservation, energy conservation (in terms of temperature) and helium transport equations are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects mainly due to momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after the bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
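
Although the abstract does not spell out the force expression, the electro-hydrodynamic body force evaluated in such weakly ionised jets is commonly written in the Korteweg-Helmholtz form below, in which the Coulomb term due to the residual ionic charge dominates; the expression is quoted here as a standard assumption for concreteness, not as the authors' exact formulation.

% Electro-hydrodynamic body force density (standard Korteweg-Helmholtz form);
% the Coulomb term rho_q E typically dominates in weakly ionised plasma jets.
\[
  \mathbf{f}_{\mathrm{EHD}} = \rho_q \mathbf{E}
    \;-\; \frac{\varepsilon_0}{2}\, E^{2}\, \nabla\varepsilon_r
    \;+\; \frac{\varepsilon_0}{2}\, \nabla\!\left[ E^{2}\, \rho\,
      \frac{\partial \varepsilon_r}{\partial \rho} \right]
\]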

Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force

Procedia PDF Downloads 121
80 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of their unit cells (meta-molecules), metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and low quality factors (Q-factors). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep spectral variations. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic fields biasing ferrite-based structures. However, owing to the limited dispersion properties of active materials, it is still difficult to tailor Fano resonances conveniently once the structural parameters are fixed. With the favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called 'trapped modes', with the merits of a simple geometry and high-quality resonances in thin structures. By depositing graphene circular DRs on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters and operating frequency. The results show that the pronounced Fano peak can be efficiently modulated because of the strong coupling between incident waves and graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level varies in the range of 0.1-1.0 eV. The optimum gap distance between the DRs is about 8-12 μm, where the figure of merit peaks. As the graphene ribbon width increases, the Fano spectral curves broaden and the resonant peak blue-shifts. The results are helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
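
The asymmetric line shape referred to above is the standard Fano profile; the sketch below evaluates it so the asymmetry and high Q are easy to visualise. The resonance position, linewidth, and asymmetry parameter are illustrative, not values from the paper's full-wave simulations.

import numpy as np

def fano(omega, omega0, gamma, q):
    # Standard Fano profile I(eps) = (q + eps)^2 / (1 + eps^2),
    # with reduced detuning eps = 2*(omega - omega0) / gamma.
    eps = 2.0 * (omega - omega0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

freq_thz = np.linspace(0.5, 2.5, 2001)
line = fano(freq_thz, omega0=1.5, gamma=0.05, q=1.2)  # narrow, asymmetric peak
print(f"peak at {freq_thz[np.argmax(line)]:.3f} THz")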

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 324
79 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring otherwise stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers a way around these drawbacks by combining the asset-price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observable Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning over other types of machine learning, such as supervised learning, in financial markets, and shows its credibility and advantages for strategic decision-making.
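
A compact sketch of the environment mechanics described above: the observation augments prices with technical indicators and a news-sentiment score, the continuous action re-allocates portfolio weights, and the reward is the transaction-cost-adjusted portfolio return. A TD3 learner (e.g., an off-the-shelf implementation) would consume this environment; all field names, the tanh squashing, and the cost constant are illustrative assumptions, not the paper's exact design.

import numpy as np

class TradingEnvSketch:
    def __init__(self, prices, indicators, sentiment, cost=1e-3):
        # prices: (T, n_assets); indicators: (T, 10); sentiment: (T, k).
        self.prices, self.ind, self.sent, self.cost = prices, indicators, sentiment, cost
        self.t, self.weights, self.value = 0, np.zeros(prices.shape[1]), 1.0

    def observe(self):
        # Partial observation: prices + indicators + sentiment, not full market state.
        return np.concatenate([self.prices[self.t], self.ind[self.t], self.sent[self.t]])

    def step(self, action):
        new_w = np.tanh(action)                        # continuous re-allocation
        turnover = np.abs(new_w - self.weights).sum()  # basis for transaction cost
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        reward = float(new_w @ ret) - self.cost * turnover
        self.weights, self.t = new_w, self.t + 1
        self.value *= 1.0 + reward
        done = self.t >= len(self.prices) - 2
        return self.observe(), reward, done

# Example with synthetic data: 3 assets, 500 steps, 10 indicators, 1 sentiment score.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + 0.001 * rng.standard_normal((500, 3)), axis=0)
env = TradingEnvSketch(prices, rng.standard_normal((500, 10)), rng.standard_normal((500, 1)))
obs, reward, done = env.step(rng.standard_normal(3))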

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 155
78 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations

Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu

Abstract:

Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, the tool first randomly generates different architectural layouts. Second, human post-earthquake evacuation behaviour is thoroughly assessed for each generated layout using the Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates the different layout options and provides feedback on evacuation flow, time, and possible casualties due to earthquake non-structural damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Through the use of the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events.
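
As a toy stand-in for the (non-public) AB2E2S evaluation step, the sketch below scores a layout by total evacuation time on a grid where randomly sampled cells represent non-structural damage; agents greedily walk toward the nearest exit and sidestep or wait at damaged cells. Everything here is an illustrative simplification of the agent-based model described above, not its actual logic.

import random

def evacuation_time(width, height, exits, n_agents, damage_prob=0.05, seed=1):
    rng = random.Random(seed)
    # Cells blocked by non-structural damage (exits are always kept clear).
    blocked = {(x, y) for x in range(width) for y in range(height)
               if rng.random() < damage_prob} - set(exits)
    agents = [(rng.randrange(width), rng.randrange(height)) for _ in range(n_agents)]
    t = 0
    while agents and t < 10_000:          # safety cap on simulation ticks
        t += 1
        remaining = []
        for (x, y) in agents:
            ex, ey = min(exits, key=lambda e: abs(e[0] - x) + abs(e[1] - y))
            nx = x + (ex > x) - (ex < x)  # greedy step toward the nearest exit
            ny = y + (ey > y) - (ey < y)
            if (nx, ny) in blocked:       # damaged cell: random sidestep or wait
                nx = max(0, min(width - 1, x + rng.choice((-1, 0, 1))))
                ny = max(0, min(height - 1, y + rng.choice((-1, 0, 1))))
                if (nx, ny) in blocked:
                    nx, ny = x, y
            if (nx, ny) not in exits:     # agents on an exit cell have evacuated
                remaining.append((nx, ny))
        agents = remaining
    return t

print(evacuation_time(20, 12, exits=[(0, 6), (19, 6)], n_agents=30))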

Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior

Procedia PDF Downloads 74