Search results for: complexity measurement

3460 Optimization of Titanium Leaching Process Using Experimental Design

Authors: Arash Rafiei, Carroll Moore

Abstract:

The leaching process, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Optimizing a leaching system by purely theoretical methods therefore requires considerable time and expense. In this work, a mixture of two titanium ores and one titanium slag is used to extract titanium in the leaching stage of the TiO2 pigment production process. Optimum titanium extraction can be pursued through two strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage and the remaining stages are adapted to it. The second strategy optimizes the performance of more than one stage at once; it is technically more complex than the first but brings greater economic and technical advantages for the leaching system. Each strategy has its own optimum operational zone, which differs from that of the other, and the preferred zone is chosen on the basis of the complexity as well as the economic and practical aspects of the leaching system. The experimental design was carried out using the Taguchi method. The most important advantages of this methodology are that it covers different technical aspects of the leaching process; it minimizes the number of required experiments and hence time and expense; and it accounts for parameter interactions through the principles of multifactor-at-a-time optimization. Leaching tests were performed at laboratory batch scale with appropriate temperature control. The geometry of the leaching tank was treated as an important factor in providing comparable agitation conditions. Data analysis was based on reactor design and mass balancing principles. Finally, the optimum zones of the operational parameters are determined for each leaching strategy and discussed with regard to their economic and practical aspects.
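
The abstract does not list the factors or levels used; purely as an illustration of how a Taguchi design is set up and analysed, the sketch below builds the standard L9(3^4) orthogonal array, assigns four hypothetical leaching factors, and computes larger-is-better signal-to-noise ratios from invented extraction results.

```python
import numpy as np
import pandas as pd

# Standard L9 (3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["acid_conc", "temperature", "time", "solid_liquid_ratio"]  # hypothetical factors
plan = pd.DataFrame(L9, columns=factors)

# Hypothetical titanium extraction results (%) for the nine runs.
plan["extraction"] = [62, 68, 71, 70, 74, 66, 73, 69, 76]

# Larger-is-better signal-to-noise ratio for each run (single replicate shown).
plan["sn_ratio"] = -10 * np.log10(1.0 / plan["extraction"] ** 2)

# Mean S/N ratio per factor level; the level with the highest mean is preferred.
for f in factors:
    print(f, plan.groupby(f)["sn_ratio"].mean().round(2).to_dict())
```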

Keywords: titanium leaching, optimization, experimental design, performance analysis

Procedia PDF Downloads 354
3459 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm

Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim

Abstract:

Market Kurly is the first South Korean online grocery retailer that guarantees same-day, overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products into a cart per month. The company has sold almost 30,000 different products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids/pets, and even flowers. The company is operating and expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus the planning that assigns customers’ orders to batches is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company’s fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce the operation time for packing, which is the most complex and time-consuming task in the process, by 3%. The optimization method implements a multithreading design on the Spring framework to support the company’s warehouse management systems in near real-time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
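
The paper does not disclose its creation and evolution heuristics, so the following is only a minimal sketch of the underlying idea: a genetic algorithm that assigns orders to a fixed number of batches and minimizes the total count of distinct products (SKUs) per batch. Order data, batch count and GA settings are invented.

```python
import random

# Hypothetical orders: each order is a set of product (SKU) identifiers.
orders = [set(random.sample(range(200), random.randint(3, 14))) for _ in range(120)]
n_batches = 10  # assumed fixed number of pick-pack-ship batches

def fitness(assignment):
    # Total number of distinct SKUs summed over batches (lower is better).
    batches = [set() for _ in range(n_batches)]
    for order, b in zip(orders, assignment):
        batches[b] |= order
    return sum(len(b) for b in batches)

def evolve(pop_size=60, generations=200, mutation_rate=0.02):
    pop = [[random.randrange(n_batches) for _ in orders] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(orders))    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):            # random reassignment mutation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(n_batches)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("distinct-SKU objective of best plan:", fitness(best))
```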

Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization

Procedia PDF Downloads 64
3458 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices

Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto

Abstract:

The paper aims to identify and discuss the impact and results of ex-ante, mid-term and ex-post evaluation initiatives in Brazilian Social Enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used; semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship and (v) green tech fields, as well as with three representatives of impact investment in the (i) venture capital, (ii) loan and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied business cases. Some stand out by adopting well-known methodologies such as the Global Impact Investing Rating System (GIIRS), but they still have much to improve in several respects. Most of these enterprises use non-experimental research conducted by their own employees, which some authors in the area do not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the difficulties they face in running the business. In turn, the investors do not provide overall directions for establishing evaluation initiatives in the enterprises they fund; a mechanism of trust operates instead, and this is usually considered sufficient proof of impact for all stakeholders. The work concludes that there is a large gap between what the literature states as best practice for these businesses and what the enterprises actually do. Evaluation initiatives must be incorporated, at least to some extent, in all such enterprises in order to substantiate the social impact they claim to generate. The development and adoption of more flexible evaluation mechanisms that account for the complexity of these businesses’ routines is therefore recommended. The reflections of the research also suggest important implications for the field of Social Enterprises, whose practices are far from what the theory prescribes, and highlight the legitimacy risk for enterprises that describe themselves as 'social impact' businesses, sometimes without proper proof based on causal evidence. This makes the field of social entrepreneurship fragile and open to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results and impact measurement activities. Further investigations into the trade-offs between impact and profit are recommended, as is research on gender, on entrepreneurs' motivation to label their businesses as Social Enterprises, and on the possible unintended consequences of these businesses.

Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem

Procedia PDF Downloads 102
3457 A Universal Approach to Categorize Failures in Production

Authors: Konja Knüppel, Gerrit Meyer, Peter Nyhuis

Abstract:

The increasing interconnectedness and complexity of production processes raise the susceptibility of production systems to failure. Therefore, the ability to respond quickly to failures is increasingly becoming a competitive factor. The research project "Sustainable failure management in manufacturing SMEs" is developing a methodology to identify failures in production and to select preventive and reactive measures in order to correct failures and establish sustainable failure management systems.

Keywords: failure categorization, failure management, logistic performance, production optimization

Procedia PDF Downloads 359
3456 Offline High Voltage Diagnostic Test Findings on 15 MVA Generator of Basochhu Hydropower Plant

Authors: Suprit Pradhan, Tshering Yangzom

Abstract:

Even with the availability of modern online insulation diagnostic technologies such as partial discharge monitoring, measurements such as Dissipation Factor (tanδ), DC high voltage insulation current, Polarization Index (PI) and insulation resistance are still widely used as diagnostic tools to assess the condition of stator insulation in hydropower plants. To evaluate the condition of the stator winding insulation in one of the generators that has been operated since 1999, diagnostic tests were performed on the stator bars of a 15 MVA generator of Basochhu Hydropower Plant. This paper presents a diagnostic study of the data gathered from measurements performed in 2015 and 2016 as part of regular maintenance, since no proper ageing data had been maintained after commissioning. Measurement results of Dissipation Factor, DC High Potential tests and Polarization Index are discussed with regard to their effectiveness in assessing the ageing condition of the stator insulation. After a brief review of the theoretical background, the strengths of each diagnostic method in detecting symptoms of insulation deterioration are identified. The results observed at Basochhu Hydropower Plant lead to the conclusion that Polarization Index and DC high voltage insulation current measurements are best suited for the detection of humidity and contamination problems, while the Dissipation Factor measurement is a robust indicator of long-term ageing caused by oxidative degradation.
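
As a reminder of the quantities discussed above, a minimal sketch of how the Polarization Index, the dielectric absorption ratio and the tan-delta tip-up are derived from raw readings is given below; the numbers are illustrative and are not measurements from the Basochhu generator.

```python
# Illustrative values only; not measurements from the Basochhu generator.
ir_30s, ir_60s, ir_600s = 450.0, 780.0, 3100.0   # insulation resistance in megaohm
tan_delta_low, tan_delta_rated = 0.009, 0.016    # dissipation factor at 0.2*Un and Un

polarization_index = ir_600s / ir_60s            # PI = R(10 min) / R(1 min)
dielectric_absorption_ratio = ir_60s / ir_30s    # DAR = R(60 s) / R(30 s)
tan_delta_tip_up = tan_delta_rated - tan_delta_low

print(f"PI  = {polarization_index:.2f}  (commonly, PI >= 2 indicates dry, clean insulation)")
print(f"DAR = {dielectric_absorption_ratio:.2f}")
print(f"tan-delta tip-up = {tan_delta_tip_up:.4f}")
```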

Keywords: dissipation factor (tanδ), polarization index (PI), DC high voltage insulation current, insulation resistance (IR), tan delta tip-up, dielectric absorption ratio

Procedia PDF Downloads 293
3455 Communication Anxiety in Nigerian Students Studying English as a Foreign Language: Evidence from Colleges of Education Sector

Authors: Yasàlu Haruna

Abstract:

In every transaction, the use of language is central, regardless of form or complexity, if any meaning is to be derived from it. Students, as a population group in the learning landscape of Nigeria, occupy a central position, with a propensity to excel or otherwise in the context of communication, especially in the learning process and in social interaction. The nature or degree of anxiety or confidence in speaking a second language is not only peculiar to societies where the second language is not an official language; to a degree, the linguistic gap created by the adoption and adaptation syndrome manifests as anxiety or lack of confidence, especially where mastery of the spoken language becomes a major challenge. This paper explores the manner in which linguistic complexity and cultural barriers combine to widen the adaptation and adoption gap. In much the same way, typical difficulties of pronunciation, intonation and accent are vital variables that explain the root cause of anxiety. Using a combination of primary and secondary sources of data drawn from questionnaires, key informant interviews and other available data, the paper concludes that the failure to integrate the possibility of anxiety into the education delivery framework has left much to be desired in cultivating second language speakers among students of Nigerian Colleges of Education. In addition, cultural barriers and the absence of integration interfaces in the course of learning, within and outside the classroom, contribute to further widening the gap. Again, the mastery of the second language by colleagues, mates and conversation partners remains a contributory factor, largely due to the quality of the preparatory school system in many parts of the country. The paper recommends that national policies and frameworks be reviewed to include integration windows in which cultural and conversation-partner deficiencies can be remedied through educational events such as debates, quizzes and symposia, and that commercial advertisements be tailored towards promoting the adoption of the second language in commerce and major cultural activities.

Keywords: cultural barriers, integration, college of education and adaptation, second language

Procedia PDF Downloads 64
3454 Quantification and Detection of Non-Sewer Water Infiltration and Inflow in Urban Sewer Systems

Authors: M. Beheshti, S. Saegrov, T. M. Muthanna

Abstract:

Separated sewer systems are designed to transfer wastewater from houses and industrial areas to wastewater treatment plants. Unwanted water in sewer systems is a well-known problem: storm-water inflow can amount to around 50% of the foul sewer flow, and groundwater infiltration can exceed 50% of the total wastewater volume in deteriorated networks. Infiltration and inflow of non-sewer water (I/I) is unfavorable in separated sewer systems; it can overload the system and reduce the efficiency of wastewater treatment plants. Therefore, for sustainable management of urban sewer systems, I/I should be considered carefully, and maintenance and rehabilitation plans should be implemented for these water infrastructure assets. This study presents a methodology to identify and quantify the level of I/I into the sewer system. The amount of I/I is evaluated by accurate flow measurements in separated sewer systems for specified isolated catchments in the city of Trondheim (Norway). Advanced information about the characteristics of I/I is gained by CCTV inspection of sewer pipelines with a high I/I contribution. Enhanced knowledge about the detection and localization of non-sewer water in the foul sewer system under wet and dry weather conditions makes it possible to identify and prioritize problems in the sewer system and to take decisions for long-term rehabilitation and renewal planning. Furthermore, preventive measures and optimization of the functionality and efficiency of the sewer system can be achieved through maintenance. In this way, the operation of the sewer system can be improved by maintaining and rehabilitating existing pipelines in a more practical, cost-effective and environmentally friendly way. The study is conducted on catchments with different properties in Trondheim. The Risvollan catchment is one of them; it has a measuring station that records hydrological parameters throughout the year and a good historical database. For assessing infiltration in a separated sewer system, flow rate measurement can be used to obtain a general view of the network condition from an infiltration point of view. This study discusses commonly used and advanced methods of localizing and quantifying I/I in sewer systems. A combination of these methods gives sewer operators the possibility to compare different techniques and to obtain reliable and accurate I/I data, which is vital for long-term rehabilitation plans.
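
The abstract does not specify the data reduction used; one common way to separate groundwater infiltration from sanitary flow in a measured catchment is the minimum-night-flow approach sketched below, assuming negligible sanitary flow in the early morning hours. The flow series is invented.

```python
import numpy as np

# Hypothetical dry-weather diurnal flow measurements (L/s) for one catchment,
# sampled hourly over a day; real data would come from the flow measurement stations.
hours = np.arange(24)
flow = np.array([5.2, 4.9, 4.7, 4.6, 4.8, 6.0, 9.5, 12.1, 11.0, 9.8, 9.2, 9.0,
                 8.8, 8.5, 8.3, 8.6, 9.1, 10.2, 11.5, 10.8, 9.4, 7.8, 6.5, 5.6])

# Minimum night flow (02:00-05:00) is taken as an upper bound on groundwater
# infiltration, assuming negligible sanitary flow at night.
night = flow[(hours >= 2) & (hours <= 5)]
infiltration = night.min()

daily_volume = flow.mean() * 86.4          # m3/day (L/s -> m3/day factor 86.4)
infiltration_volume = infiltration * 86.4  # m3/day attributed to infiltration

print(f"estimated infiltration: {infiltration:.1f} L/s "
      f"({100 * infiltration_volume / daily_volume:.0f}% of dry-weather volume)")
```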

Keywords: flow rate measurement, infiltration and inflow (I/I), non-sewer water, separated sewer systems, sustainable management

Procedia PDF Downloads 305
3453 The Tramway in French Cities: Complication of Public Spaces and Complexity of the Design Process

Authors: Elisa Maître

Abstract:

The redeployment of tram networks in French cities has considerably modified public spaces and the way citizens use them. Beyond the image of trams as contributing to sustainable urban development, the question of user safety in these spaces has received little study. This study is based on an analysis of the use of public spaces laid out for trams, from the standpoint of legibility and safety. It also examines to what extent the complexity of the design process, with its many interactions between numerous and varied players, has a role in the genesis of these problems. The work is mainly based on the analysis of links between the uses of these re-designed public spaces (through observations, interviews with users and accident studies) and the analysis of the design conditions and processes of the projects studied (mainly based on interviews with the actors of these projects). Practical analyses were based on three points of view: that of the planner, that of the user (based on observations and interviews) and that of the road safety expert. The cities of Montpellier, Marseille and Nice are the three fields of study on which the demonstration of this thesis is based. On the one hand, the results show that the insertion of the tram complicates the public spaces of French cities; these complications, related to the restructuring of public spaces for the tram, create difficulties of use and safety concerns. On the other hand, in-depth analyses of the fully transcribed interviews led us to develop specific dysfunction scenarios in the design process. These elements call into question the way the legibility and safety of these new forms of public space are taken into account. An in-depth analysis of the design processes of public spaces with tram systems would also be a way of better understanding the choices made, the compromises accepted, and the conflicts and constraints weighing on the layout of these spaces. The results presented concerning the impact of spaces laid out for trams on difficulties of use suggest several possibilities for improving the way in which safety for all users is taken into account when designing public spaces.

Keywords: public spaces, road layout, users, design process of urban projects

Procedia PDF Downloads 214
3452 Analysis of Direct Current Motor in LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

DC motors were widely used in past centuries and were proudly known as the workhorse of industrial systems until the invention of the AC induction motor, which brought a huge revolution to industry. Since then, the use of DC machines has decreased owing to factors such as reliability, robustness and complexity, and in particular because of their losses. A methodology is proposed to model a DC motor through simulation in LabVIEW in order to assess its real-time performance and to determine whether a change in a parameter might yield a larger improvement in losses and reliability.
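
A LabVIEW block diagram cannot be reproduced here; the sketch below instead shows, in Python, the armature-circuit and mechanical equations that such a DC motor simulation would solve, so that the effect of a parameter change on losses can be inspected. The motor parameters are illustrative, not from the paper.

```python
import numpy as np

# Illustrative separately excited DC motor parameters (not from the paper).
R, L = 1.0, 0.5          # armature resistance (ohm) and inductance (H)
Ke, Kt = 0.05, 0.05      # back-EMF (V*s/rad) and torque (N*m/A) constants
J, B = 0.01, 0.001       # rotor inertia (kg*m^2) and viscous friction (N*m*s)
V, T_load = 24.0, 0.0    # supply voltage (V) and load torque (N*m)

dt, t_end = 1e-4, 5.0
i, w = 0.0, 0.0          # armature current (A), angular speed (rad/s)
for _ in range(int(t_end / dt)):
    di = (V - R * i - Ke * w) / L          # armature circuit: V = R*i + L*di/dt + Ke*w
    dw = (Kt * i - B * w - T_load) / J     # mechanical: J*dw/dt = Kt*i - B*w - T_load
    i += di * dt
    w += dw * dt

copper_loss = R * i ** 2
print(f"steady-state speed: {w:.1f} rad/s, current: {i:.2f} A, copper loss: {copper_loss:.2f} W")
```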

Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation

Procedia PDF Downloads 531
3451 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Since it improves the network throughput, spatial diversity can be an efficient signaling scheme; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
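
The paper's combining rule is not given in the abstract; the sketch below only illustrates the cooperative-diversity benchmark it refers to, by comparing single-branch detection with two-branch maximal-ratio combining for BPSK over independent Rayleigh channels. All simulation settings are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_db = 200_000, 10.0
noise_var = 10 ** (-snr_db / 10)

bits = rng.integers(0, 2, n)
x = 2 * bits - 1                                    # BPSK symbols

def rayleigh(size):
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

def awgn(size):
    return np.sqrt(noise_var / 2) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

h1, h2 = rayleigh(n), rayleigh(n)
y1, y2 = h1 * x + awgn(n), h2 * x + awgn(n)

# Single-branch detection vs. two-branch maximal-ratio combining.
single = (np.conj(h1) * y1).real > 0
mrc = (np.conj(h1) * y1 + np.conj(h2) * y2).real > 0

print("BER single branch:", np.mean(single != bits))
print("BER two-branch MRC:", np.mean(mrc != bits))
```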

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 269
3450 Fatigue Crack Growth Rate Measurement by Means of Classic Method and Acoustic Emission

Authors: V. Mentl, V. Koula, P. Mazal, J. Volák

Abstract:

Nowadays, acoustic emission is a widely recognized method of material damage investigation, mainly for observing and evaluating crack initiation and growth. This is highly important in structures such as pressure vessels and large steam turbine rotors used in both conventional and nuclear power plants. Nevertheless, acoustic emission signals must be correlated with the real crack progress before cracks and their growth can be evaluated by this non-destructive technique alone in real situations, and before reliable results can be reached when assessing the safety and reliability of structures and evaluating their remaining lifetime. The main aim of this study was to propose a methodology for evaluating the early manifestations of fatigue cracks and their growth, and thus to quantify material damage by acoustic emission parameters. Specimens made of several steels used in the power-producing industry were subjected to fatigue loading in the low- and high-cycle regimes. This study presents the crack growth rates obtained by the classic compliance-change method and by acoustic emission signal analysis. The experiments were carried out in cooperation between the laboratories of Brno University of Technology and West Bohemia University in Pilsen within the project of the Czech Ministry of Industry and Commerce "A diagnostic complex for the detection of pressure media and material defects in pressure components of nuclear and classic power plants" and the project "New Technologies for Mechanical Engineering".
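
The abstract does not detail the data reduction from the compliance-change measurements; a standard route from crack length versus cycle count to a crack growth rate curve is the secant method followed by a log-log (Paris law) fit, sketched below with invented data and an assumed through-crack geometry.

```python
import numpy as np

# Hypothetical crack length a (mm) recorded at cycle counts N during a fatigue test.
N = np.array([0, 20_000, 40_000, 60_000, 80_000, 100_000, 110_000])
a = np.array([2.0, 2.6, 3.4, 4.5, 6.1, 8.6, 10.4])

# Secant method: da/dN between successive measurements, at the mean crack length.
dadN = np.diff(a) / np.diff(N)                    # mm/cycle
a_mid = (a[:-1] + a[1:]) / 2

# Stress intensity factor range for a simple through-crack geometry (illustrative):
# delta_K = delta_sigma * sqrt(pi * a), with a converted to metres.
delta_sigma = 120.0                               # MPa, assumed stress range
delta_K = delta_sigma * np.sqrt(np.pi * a_mid / 1000.0)   # MPa*sqrt(m)

# Paris law da/dN = C * (delta_K)^m fitted as a straight line in log-log space.
m, logC = np.polyfit(np.log10(delta_K), np.log10(dadN), 1)
print(f"Paris exponent m = {m:.2f}, coefficient C = {10**logC:.3e} (mm/cycle units)")
```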

Keywords: fatigue, crack growth rate, acoustic emission, material damage

Procedia PDF Downloads 358
3449 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as potential energy-savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with loss of detail. Thermal characteristics are aggregated while other characteristics, which could affect the energy efficiency of a building, are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes’ ability to accurately emulate the average energy demands of the corresponding buildings they were meant to represent. This is done for the buildings’ energy demands as a whole as well as for relevant sub-demands. Both are evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as the accuracy lost in specific parts of the calculation due to use of the archetype method.
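
As a minimal illustration of the archetype principle evaluated here, the sketch below builds type- and age-based archetypes as segment averages and measures how far individual (hypothetical) buildings deviate from their archetype demand.

```python
import pandas as pd

# Hypothetical building-level energy demands (kWh/m2/year); a real BSEM would use
# calculated or measured demands for every building in the stock.
stock = pd.DataFrame({
    "type": ["house", "house", "house", "flat", "flat", "flat"],
    "age":  ["<1960", "<1960", ">=1960", "<1960", ">=1960", ">=1960"],
    "heating_demand": [180.0, 210.0, 120.0, 150.0, 95.0, 105.0],
})

# Archetype model: one average value per (type, age) segment.
archetypes = stock.groupby(["type", "age"])["heating_demand"].mean().rename("archetype_demand")

# Accuracy check: deviation of each individual building from its archetype.
merged = stock.merge(archetypes.reset_index(), on=["type", "age"])
merged["error_pct"] = 100 * (merged["archetype_demand"] - merged["heating_demand"]) / merged["heating_demand"]
print(merged[["type", "age", "heating_demand", "archetype_demand", "error_pct"]].round(1))
```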

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 137
3448 Measuring Satisfaction with Life Construct Among Public and Private University Students During COVID-19 Pandemic in Sabah, Malaysia

Authors: Mohd Dahlan Abdul Malek, Muhamad Idris, Adi Fahrudin, Ida Shafinaz Mohamed Kamil, Husmiati Yusuf, Edeymend Reny Japil, Wan Anor Wan Sulaiman, Lailawati Madlan, Alfred Chan, Nurfarhana Adillah Aftar, Mahirah Masdin

Abstract:

This research intended to develop a valid and reliable instrument of the Satisfaction with Life Scale (SWLS) to measure the satisfaction with life (SWL) construct among public and private university students in Sabah, Malaysia, through an exploratory factor analysis (EFA) procedure. The pilot study obtained a sample of 108 students from public and private education institutions in Sabah, Malaysia, through an online survey using a self-administered questionnaire. The researchers performed the EFA procedure on the SWL construct using IBM SPSS 25. Bartlett's Test of Sphericity is highly significant (Sig. = .000), and the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO = 0.839) is excellent. Using Principal Component Analysis (PCA) with Varimax rotation as the extraction method, one component of the SWL construct is extracted with an eigenvalue of 3.101; the variance explained by this component is 62.030%. The SWL construct has a Cronbach's alpha value of .817. The scale development and validation confirmed that the instrument is consistent and stable for both private and public college and university student samples. It makes a notable contribution to the measurement of the SWLS, mainly in the context of higher education students. The EFA outcomes formed a configuration that extracts a single component of SWL, which can be measured by the original five items retained in this research. This research reveals that the SWL construct is applicable in this study context.
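
The analysis above was run in IBM SPSS 25; as a language-neutral illustration of two of the reported quantities, the sketch below computes Cronbach's alpha and the eigenvalues of the item correlation matrix for simulated five-item responses (Bartlett's test and the KMO measure would come from a dedicated statistics package). All data are simulated, not the study's responses.

```python
import numpy as np

# Hypothetical responses to the five SWLS items (1-7 Likert), one row per student.
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=108)
items = np.clip(np.round(latent[:, None] + rng.normal(0, 0.8, size=(108, 5))), 1, 7)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Eigenvalues of the item correlation matrix; a single eigenvalue > 1 supports a
# one-component (unidimensional) solution, as reported for the SWLS.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]

print(f"Cronbach's alpha = {alpha:.3f}")
print("eigenvalues:", np.round(eigvals, 3))
print("variance explained by first component:", f"{100 * eigvals[0] / k:.1f}%")
```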

Keywords: satisfaction, university students, measurement, scale development

Procedia PDF Downloads 67
3447 Quality of Life and Self-Assessed Health of Methadone-Maintained Opiate Addicts

Authors: Brajevic-gizdic Igna, Vuletic Gorka

Abstract:

Introduction: Research in opiate addiction increasingly indicates the importance of substitution therapy for opiate addicts. Opiate addiction is a chronic relapsing disease that includes craving as a diagnostic criterion. Craving, defined as a strong desire or an excessive need to take a substance, has been considered a predictor of relapse. The study aimed to measure the intensity of craving using a visual analog scale (VAS) in opioid addicts receiving Opioid Substitution Therapy (OST). Method: The total sample comprised 30 participants in outpatient treatment. Two groups of opiate addicts were considered: methadone-maintenance and buprenorphine-maintenance addicts. The participants completed the survey questionnaire during outpatient treatment. Results: The results indicated high levels of craving in patients during treatment on OST, which is considered an important destabilizing factor for abstinence; the adequacy of the methadone/buprenorphine dose should therefore be considered. Conclusion: These findings provide an objective basis for methadone/buprenorphine dosage and therapy options. Underdosing of OST can put patients at high risk of relapse, resulting in high levels of craving. Thus, when determining the therapeutic dose of OST, it is crucial to consider patients' craving; this would achieve stabilization more quickly and help avoid relapse during abstinence. Subjective physician assessment and the patient's statement are currently the main criteria for determining OST dosage. Future studies should use larger sample sizes and focus on the importance of craving intensity measurement during OST to objectify methadone/buprenorphine dosage.

Keywords: abstinence, addicts, methadone, OST, quality of life

Procedia PDF Downloads 82
3446 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology into the radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in digital signal processing, for example for pattern recognition, so efficient computation of the DCT with a transparent design flow is highly desirable.
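
The paper's hardware architecture is not reproduced here; as a minimal sketch of the CORDIC principle it builds on, the function below computes a cosine/sine pair with shift-and-add rotations, the same kind of operation an FFT butterfly could use to generate twiddle factors without multipliers.

```python
import math

def cordic_rotate(angle, iterations=24):
    """Compute (cos(angle), sin(angle)) with shift-and-add CORDIC rotations.

    Valid for |angle| < ~1.74 rad; an FFT butterfly would use this to generate
    twiddle factors without hardware multipliers.
    """
    # Pre-computed elementary rotation angles atan(2^-i) and the CORDIC gain.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = math.prod(math.sqrt(1 + 2.0 ** (-2 * i)) for i in range(iterations))

    x, y, z = 1.0 / gain, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate towards the residual angle
        x, y, z = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i, z - d * atans[i]
    return x, y                              # (cos, sin)

c, s = cordic_rotate(math.pi / 5)
print(c - math.cos(math.pi / 5), s - math.sin(math.pi / 5))   # residual errors ~1e-7
```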

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 455
3445 Weibull Cumulative Distribution Function Analysis with Life Expectancy Endurance Test Result of Power Window Switch

Authors: Miky Lee, K. Kim, D. Lim, D. Cho

Abstract:

This paper presents the planning, the rationale for test specification derivation, the sampling requirements, the test facilities, and the result analysis used to conduct lifetime expectancy endurance tests on power window switches (PWS), considering thermally induced mechanical stress under diurnal cyclic temperatures during normal operation (power cycling). The detailed analysis process and the test results for the selected PWS set are discussed in this paper. A statistical approach to 'lifetime expectancy' was applied to the measurement standards dealing with PWS lifetime determination through endurance tests, and the choice of approach within the framework of the task is explained. The present task derives lifetime expectancy from voltage drop measurements, whereas other approaches mostly consider contact or surface resistance; the measurements to be performed and the main measuring instruments are fully described accordingly. The failure data from the tests were analyzed to derive lifetime expectancy by statistical methods using the Weibull cumulative distribution function. The first goal of this task is to develop a realistic worst-case lifetime endurance test specification, because the large number of existing switch test standards cannot induce the degradation mechanisms that make the switches less reliable. The second goal is to assess the quantitative reliability status of currently manufactured PWS based on the test specification newly developed through this project. The last and most important goal is to satisfy customer requirements regarding product reliability.
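
As a minimal sketch of the Weibull analysis step, the code below fits a two-parameter Weibull distribution to invented cycles-to-failure data and derives a B10 life; the actual PWS failure data and fitting software are not given in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical cycles-to-failure of power window switches from an endurance test.
cycles_to_failure = np.array([38e3, 52e3, 61e3, 70e3, 78e3, 85e3, 93e3, 104e3, 118e3, 140e3])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(cycles_to_failure, floc=0)

# Weibull CDF: F(t) = 1 - exp(-(t/scale)^shape); B10 life is the 10th percentile.
b10 = scale * (-np.log(1 - 0.10)) ** (1 / shape)
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} cycles, B10 = {b10:.0f} cycles")
```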

Keywords: power window switch, endurance test, Weibull function, reliability, degradation mechanism

Procedia PDF Downloads 219
3444 Analysis of Vibration of Thin-Walled Parts During Milling Made of EN AW-7075 Alloy

Authors: Jakub Czyżycki, Paweł Twardowski

Abstract:

Thin-walled components made of aluminum alloys are increasingly found in many fields of industry, and they dominate the aerospace industry. The machining of thin-walled structures encounters many difficulties related to the high susceptibility (compliance) of the workpiece, which causes vibrations, including the most unfavorable type, called chatter. The effect of these phenomena is difficulty in obtaining the required geometric dimensions and surface quality. The purpose of this study is to analyze vibrations arising during the machining of thin-walled workpieces made of aluminum alloy EN AW-7075. Samples representing actual thin-walled workpieces were examined over a range of dimensions characteristic of thin-walled parts. The tests were carried out under high-speed machining (HSM) conditions (cutting speed vc = 1400 m/min) using a monolithic solid carbide end mill. Vibration was measured with a single-component piezoelectric accelerometer (Brüel & Kjær type 4508C) mounted directly on the sample before machining; the measurement was made in the feed-normal direction AfN. In addition, the natural frequencies of the tested thin-walled components were investigated using a laser vibrometer for a broader analysis of the samples. The effect of vibrations on machining accuracy is presented in the form of surface images taken with an optical measuring device from Alicona. The vibrations produced during the tests were classified and analyzed in both the time and frequency domains. A significant influence of the wall thickness of the thin-walled component on the course of vibrations during machining was observed.
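
The abstract does not state the signal-processing chain; a typical frequency-domain check on such accelerometer records is sketched below: an FFT amplitude spectrum whose dominant components are compared against the tooth-passing frequency and the measured natural frequency of the wall. The signal and frequencies are synthetic, not from the tests.

```python
import numpy as np

# Hypothetical accelerometer signal: a 10 kHz sampled record containing tooth-passing
# excitation at 500 Hz plus a chatter-like component near an assumed 1.8 kHz mode.
fs, duration = 10_000, 1.0
t = np.arange(0, duration, 1 / fs)
signal = (2.0 * np.sin(2 * np.pi * 500 * t)
          + 1.2 * np.sin(2 * np.pi * 1800 * t)
          + 0.3 * np.random.default_rng(0).standard_normal(t.size))

# Amplitude spectrum via FFT; peaks away from the tooth-passing harmonics and close
# to a natural frequency of the wall are treated as indicators of chatter.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

dominant = freqs[np.argsort(spectrum)[-2:]]
print("two dominant spectral components (Hz):", np.sort(dominant))

rms = np.sqrt(np.mean(signal ** 2))          # time-domain severity measure
print(f"RMS acceleration (arbitrary units): {rms:.2f}")
```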

Keywords: high-speed machining, thin-walled elements, thin-walled components, milling, vibrations

Procedia PDF Downloads 30
3443 First Formaldehyde Retrieval Using the Raw Data Obtained from Pandora in Seoul: Investigation of the Temporal Characteristics and Comparison with Ozone Monitoring Instrument Measurement

Authors: H. Lee, J. Park

Abstract:

In the present study, we retrieved for the first time the formaldehyde (HCHO) Vertical Column Density (HCHOVCD) using Pandora instruments in Seoul, a megacity in northeast Asia, for the period between 2012 and 2014, and investigated the temporal characteristics of HCHOVCD. The HCHO Slant Column Density (HCHOSCD) was obtained using the Differential Optical Absorption Spectroscopy (DOAS) method. HCHOSCD was converted to HCHOVCD using the geometric Air Mass Factor (AMFG), since Pandora is a direct-sun measurement. The HCHOVCD is low at 12:00 Local Time (LT) and high in the morning (10:00 LT) and late afternoon (16:00 LT), except in winter. The maximum (minimum) values of the Pandora HCHOVCD are 2.68×10¹⁶ (1.63×10¹⁶), 3.19×10¹⁶ (2.23×10¹⁶), 2.00×10¹⁶ (1.26×10¹⁶), and 1.63×10¹⁶ (0.82×10¹⁶) molecules cm⁻² in spring, summer, autumn, and winter, respectively. In terms of seasonal variation, HCHOVCD was high in summer and low in winter, which implies that photo-oxidation plays an important role in HCHO production in Seoul. In comparison with the Ozone Monitoring Instrument (OMI) measurements, the HCHOVCDs from OMI are lower than those from Pandora; the correlation coefficient (R) between monthly HCHOVCD values from Pandora and OMI is 0.61, with a slope of 0.35. Furthermore, to understand the HCHO mixing ratio within the Planetary Boundary Layer (PBL) in Seoul, we converted the Pandora HCHOVCDs to HCHO mixing ratios in the PBL using meteorological input data from the Atmospheric InfraRed Sounder (AIRS). The seasonal HCHO mixing ratios in the PBL converted from the Pandora (OMI) HCHOVCDs are estimated to be 6.57 (5.17), 7.08 (6.68), 7.60 (4.70), and 5.00 (4.76) ppbv in spring, summer, autumn, and winter, respectively.
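
The conversion step described above can be written in a few lines: for a direct-sun instrument the geometric air mass factor is approximately 1/cos(SZA), so the vertical column is the slant column divided by that factor. The SZA and SCD values below are illustrative, not retrieved values from the study.

```python
import numpy as np

# For a direct-sun instrument such as Pandora, the geometric air mass factor is
# approximately 1/cos(SZA), so VCD = SCD / AMF_G. Values below are illustrative.
sza_deg = np.array([30.0, 45.0, 60.0, 70.0])                  # solar zenith angles
scd = np.array([2.1e16, 2.6e16, 3.4e16, 4.3e16])              # HCHO SCD, molecules/cm^2

amf_geo = 1.0 / np.cos(np.radians(sza_deg))
vcd = scd / amf_geo

for z, a, v in zip(sza_deg, amf_geo, vcd):
    print(f"SZA {z:4.1f} deg  AMF_G {a:4.2f}  HCHO VCD {v:.2e} molecules/cm^2")
```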

Keywords: formaldehyde, OMI, Pandora, remote sensing

Procedia PDF Downloads 138
3442 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments, which use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci’s latent kappa were 0.62 (95% CI: 0.41 – 0.82) and 0.47 (95% CI: 0.14 – 0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows classification errors to be estimated, circumventing the limitations of the arbitrary approaches adopted by other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and in intervention studies to estimate treatment effectiveness accurately.
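
Dumenci's latent kappa is estimated from a latent-class model and is not reproduced here; as a simpler point of reference, the sketch below computes ordinary percent agreement and Cohen's kappa from a hypothetical two-by-two test-retest classification table.

```python
import numpy as np

# Hypothetical test-retest classification table (rows: time 1, columns: time 2),
# categories: limited vs. adequate cancer health literacy.
table = np.array([[20,  6],    # limited  -> limited, adequate
                  [ 5, 67]])   # adequate -> limited, adequate

n = table.sum()
observed = np.trace(table) / n                                  # percent agreement
expected = (table.sum(1) * table.sum(0)).sum() / n**2           # agreement expected by chance
kappa = (observed - expected) / (1 - expected)                  # Cohen's kappa

print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```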

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 162
3441 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 122
3440 Linearization and Process Standardization of Construction Design Engineering Workflows

Authors: T. R. Sreeram, S. Natarajan, C. Jena

Abstract:

Civil engineering construction is a network of tasks involving varying degrees of complexity, and streamlining and standardization are the only way to establish a systematic approach to design. While there are off-the-shelf tools such as AutoCAD that play a role in the realization of design, the repeatable process in which these tools are deployed is often ignored. The present paper addresses this challenge through a sustainable design process and effective standardization at all stages of the design workflow. This is demonstrated through a case study in the context of construction, and further improvement points are highlighted.

Keywords: system, lean, value stream, process improvement

Procedia PDF Downloads 105
3439 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions

Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri

Abstract:

Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator’s emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. The simulations were particularly accurate: even the generator’s internal components were reproduced on the basis of ad hoc X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, comparing simulated and experimental data. In order to estimate the dose to the operator as a function of the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible - and in any case conservative - to convert the measured count rate by means of the conversion factor corresponding to 14 MeV energy. This outcome has general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
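
As a minimal sketch of the conversion discussed above, the code below turns a detector count rate into an ambient dose equivalent rate through a fluence-to-dose coefficient; the sensitivity and the coefficient are illustrative placeholders, not values from the paper or from ICRP tables.

```python
# Minimal sketch of converting a neutron detector count rate to an ambient dose
# equivalent rate. The sensitivity and the fluence-to-dose coefficient below are
# illustrative placeholders, not values from the paper or from ICRP tables.
count_rate = 12.0          # counts per second from the LB 6411 (hypothetical)
sensitivity = 3.0          # counts per (neutron/cm^2), hypothetical calibration factor
h_star_10 = 520e-12        # Sv*cm^2 per neutron, assumed coefficient near 14 MeV

fluence_rate = count_rate / sensitivity             # neutrons / (cm^2 * s)
dose_rate_sv_h = fluence_rate * h_star_10 * 3600    # Sv/h

print(f"fluence rate: {fluence_rate:.2f} n/cm2/s, "
      f"ambient dose equivalent rate: {dose_rate_sv_h * 1e6:.2f} uSv/h")
```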

Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers

Procedia PDF Downloads 116
3438 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operation schedule of decentralized power and heat generators and storage systems, including linear and mixed-integer linear programming. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar plants of one installation type into a single aggregated unit, which is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and hence the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization in such a way that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a higher or lower temporal resolution of the optimization. Both approaches are examined with respect to the resulting calculation time as well as optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both procedures with regard to calculation duration and optimality.
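
A minimal sketch of the aggregation idea, using the open-source PuLP modeller, is given below: n identical CHP units are replaced by one aggregated unit with n times the single-unit power bound, so each time step needs one dispatch variable instead of n. Demand, costs and plant data are invented and not taken from the paper's models.

```python
import pulp

# Minimal sketch of the aggregation idea: n identical CHP units are replaced by one
# aggregated unit whose power bound is n times the single-unit bound. Demand values
# and plant parameters are invented for illustration.
n_units, p_max, cost = 5, 2.0, 30.0          # 5 CHP units, 2 MW each, 30 EUR/MWh
demand = [4.0, 7.5, 9.0, 6.0]                # MW per time step

prob = pulp.LpProblem("aggregated_vpp_dispatch", pulp.LpMinimize)

# One decision variable per time step for the aggregated unit instead of n per step.
p_agg = [pulp.LpVariable(f"p_agg_{t}", lowBound=0, upBound=n_units * p_max)
         for t in range(len(demand))]
grid = [pulp.LpVariable(f"grid_{t}", lowBound=0) for t in range(len(demand))]

prob += pulp.lpSum(cost * p for p in p_agg) + pulp.lpSum(80.0 * g for g in grid)
for t, d in enumerate(demand):
    prob += p_agg[t] + grid[t] == d          # cover demand with CHP plus grid import

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(p.value(), 2) for p in p_agg])
# A second, smaller optimization would then split each p_agg value across the n units.
```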

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 153
3437 Manipulator Development for Telediagnostics

Authors: Adam Kurnicki, Bartłomiej Stanczyk, Bartosz Kania

Abstract:

This paper presents the development of a lightweight manipulator with series elastic actuation for medical telediagnostics (USG examination). The general structure of the implemented impedance control algorithm is shown, and it is described how force measurements can be performed based mainly on the elasticity of the manipulator links.
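
The core of force measurement through link elasticity can be stated in two lines: the torque transmitted by the compliant element is its stiffness times the measured deflection, and the contact force follows from the lever arm. The sketch below uses assumed values only, not the manipulator's actual parameters.

```python
# Minimal sketch of force estimation from link elasticity, as used in series elastic
# actuation: the interaction force is inferred from the measured deflection of a
# compliant element of known stiffness. The numbers are illustrative only.
stiffness = 350.0            # N*m/rad, assumed joint spring stiffness
q_motor = 0.512              # rad, motor-side position (e.g. from the motor encoder)
q_link = 0.498               # rad, link-side position (e.g. from a joint encoder)
lever_arm = 0.35             # m, distance from joint axis to the contact point

joint_torque = stiffness * (q_motor - q_link)   # torque transmitted through the spring
contact_force = joint_torque / lever_arm        # approximate force at the probe tip

print(f"estimated joint torque: {joint_torque:.2f} N*m, contact force: {contact_force:.1f} N")
```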

Keywords: telediagnostics, elastic manipulator, impedance control, force measurement

Procedia PDF Downloads 454
3436 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. Better estimation of the SWCC is expected to be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for sandy soil. The Brooks and Corey predictions were also compatible with the samples evaluated in this study over the range from low to high soil water content.
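
As an illustration of the fitting step, the sketch below fits the Brooks and Corey SWCC to invented laboratory data for a sandy soil with a standard least-squares routine; the actual optimization technique and data of the study are not given in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    """Brooks-Corey SWCC: volumetric water content theta as a function of suction psi (kPa)."""
    se = np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Hypothetical laboratory SWCC data for a sandy soil (suction in kPa, volumetric water content).
psi_lab = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
theta_lab = np.array([0.38, 0.38, 0.36, 0.27, 0.17, 0.11, 0.08, 0.06, 0.05])

p0 = [0.04, 0.38, 2.0, 0.6]                                 # initial guesses
params, _ = curve_fit(brooks_corey, psi_lab, theta_lab, p0=p0, maxfev=10_000)
theta_fit = brooks_corey(psi_lab, *params)
rmse = np.sqrt(np.mean((theta_fit - theta_lab) ** 2))       # goodness-of-fit measure

print("theta_r, theta_s, psi_b, lambda =", np.round(params, 3), " RMSE =", round(rmse, 4))
```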

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 325
3435 Development of a Smart System for Measuring Strain Levels of Natural Gas and Petroleum Pipelines on Earthquake Fault Lines in Turkiye

Authors: Ahmet Yetik, Seyit Ali Kara, Cevat Özarpa

Abstract:

Load changes occur on natural gas and oil pipelines due to natural disasters. The source of these load changes is the displacement of the soil around the pipes in events that may cause erosion, such as earthquakes, landslides, and floods. The exposure of natural gas and oil pipes to variable loads causes deformation, cracks, and breaks in these pipes, and cracks and breaks can harm people and the environment, for example through explosions. Examinations made after natural disasters show which pipes have suffered more damage in the monitored regions. It has been determined that the earthquakes in Turkey caused permanent damage to the pipelines. This project was designed and realized because cracks and gas leaks were found in the insulation gaskets placed in the pipelines, especially at the junction points. In this study, a new SCADA (Supervisory Control and Data Acquisition) application has been developed to monitor load changes caused by natural disasters. The newly developed SCADA application monitors the changes in the stresses occurring in the pipes along the x, y, and z axes with the help of strain gauge sensors placed on the pipes. For the developed SCADA system, test setups in accordance with the standards were created during the fieldwork; these setups were integrated into the SCADA system, and the system was monitored. Thanks to the SCADA system developed with the field application, load changes on the natural gas and oil pipes are monitored instantly, accumulations that may load the pipes and their surroundings can be addressed immediately, and newly arising risks are prevented. The system contributes to energy supply security, asset management, holistic pipeline management, and sustainability.
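
The abstract does not give the signal chain from the strain gauges to the monitored stress values; the sketch below shows one plausible reduction, using the common small-signal quarter-bridge approximation and assumed material properties, and flags the result against an assumed yield strength.

```python
# Minimal sketch of how a strain-gauge reading could be turned into a stress value in
# the SCADA application. The bridge formula is the common small-signal quarter-bridge
# approximation; excitation, gauge factor and readings are illustrative values.
gauge_factor = 2.1          # typical foil gauge factor (assumed)
v_excitation = 5.0          # bridge excitation voltage, V
v_out = 0.65e-3             # measured bridge output, V
E_steel = 207e9             # Young's modulus of pipeline steel, Pa (assumed)
yield_strength = 415e6      # Pa, assumed alarm threshold for the pipe material

strain = 4 * (v_out / v_excitation) / gauge_factor     # dimensionless strain
stress = E_steel * strain                              # axial stress, Pa

utilization = stress / yield_strength
print(f"strain = {strain*1e6:.0f} microstrain, stress = {stress/1e6:.1f} MPa, "
      f"utilization = {utilization:.1%}")
# The SCADA system would raise an alarm when utilization exceeds a configured limit.
```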

Keywords: earthquake, natural gas pipes, oil pipes, strain measurement, stress measurement, landslide

Procedia PDF Downloads 58
3434 A Methodology for the Identification of Technological Gaps and the Measurement of the Level of Technological Acceptance in the Rural Sector in Colombia

Authors: Anyi Katherine Garzon Robles, Luis Carlos Gomez Florez

Abstract:

Since the advent of the Internet, the use of Information Technologies (IT) has increased exponentially. The field of informatics and telecommunications has opened up countless possibilities for the development of different socio-economic activities, promoting a change of social paradigm and the emergence of the so-called information and knowledge society. For more than a decade, the Colombian government has been working on the incorporation of IT into the public sector through an e-government strategy. However, to our knowledge, many technological gaps in the country have not yet been identified, especially in rural areas far from large cities, where factors such as low investment and the expansion of the armed conflict have led to economic and technological stagnation. This paper presents the results of a research project approached from a qualitative perspective with a participatory action research design. The design consists of nine fundamental stages divided into four work cycles, for which different strategies for data collection and analysis were established. From this, a methodology was obtained for identifying technological gaps and measuring the level of technological acceptance in the rural sector, based on the Technology Acceptance Model (TAM), as a preliminary activity to the development of IT solutions framed within the e-government strategy in Colombia. The result of this research represents a contribution from academia to the improvement of the country's technological development and a guide for the proper planning of IT solutions aimed at promoting a close relationship between government and citizens.

Keywords: E-government, knowledge society, level of technological acceptance, technological gaps, technology acceptance model

Procedia PDF Downloads 221
3433 Technology of Gyro Orientation Measurement Unit (Gyro Omu) for Underground Utility Mapping Practice

Authors: Mohd Ruzlin Mohd Mokhtar

Abstract:

At present, most operators working on projects for utilities such as power, water, oil, gas, telecommunication and sewerage are using technologies such as total station, Global Positioning System (GPS), Electromagnetic Locator (EML) and Ground Penetrating Radar (GPR) to perform underground utility mapping. With the increasing popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, most newly installed underground utilities use the HDD method. HDD is seen as simple and causes little disturbance to the public and traffic; thus, it has become the preferred utility installation method in most areas, especially urban areas. HDD utilities are installed much deeper than existing utilities (some reports say that HDD installations average 5 meters in depth). However, this affects the accuracy and capability of existing underground utility mapping technologies: in most Malaysian soil conditions, those technologies are limited to a maximum depth of about 3 meters. Thus, utilities installed deeper than 3 meters cannot be detected using existing detection tools. The accuracy and reliability of existing underground utility mapping technologies and work procedures are therefore in doubt, and a mitigation action plan is required. When installing a new utility using the HDD method, more accurate underground utility mapping can be achieved by using Gyro OMU than with existing practice based on, e.g., EML and GPR. Gyro OMU accurately identifies the location of the HDD bore, so the resulting mapping can be used to avoid the cost of breakdowns caused by future HDD works carried out with inaccurate underground utility maps.

Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)

Procedia PDF Downloads 120
3432 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level); the variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically, and computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents’ interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents’ learning and adaptive abilities are summarized, including approaches such as Genetic Algorithms, Genetic Programming, Artificial Neural Networks and Reinforcement Learning. The most common statistical properties (stylized facts) of stock returns used for calibration and validation of ASMs are also discussed. We review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed: research on ASMs may focus on the macro level by analyzing market dynamics, or on the micro level by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 335
3431 Preliminary Studies on the Potentials of Bambara nut (Voandzeia substerranea) and Pigeon pea (Cajanus cajan) as Imitation Milk

Authors: Onuoha Gideon

Abstract:

Preliminary studies on the potential of Bambara nut and pigeon pea as imitation milk were carried out. Bambara nut and pigeon pea milk were produced by two separate unit operations: Bambara nut seed was cooked, dehulled, milled and strained to milk (BCM), and another batch was toasted at moderate temperature, dehulled, milled and strained to milk (BTM); pigeon pea seed was cooked, dehulled, milled and strained to milk (PCM), and another batch was toasted at moderate temperature, dehulled, milled and strained to milk (PTM). The proximate analysis of the milk samples on a wet basis showed that the protein content ranged from 28.56 – 26.77, the crude fibre from 6.28 – 1.85, the ash content from 5.22 – 1.17, the fat content from 2.71 – 1.12, the moisture content from 95.93 – 93.83, and the carbohydrate content from 67.62 – 58.83. The functional analysis of the milk samples showed that the emulsification capacity ranged from 43.21 – 38.66, the emulsion stability from 34.10 – 25.00, the specific gravity from 997.50 – 945.00, the foaming capacity from 3,500 to 2,250, the viscosity from 0.017 – 0.007, the pH from 5.55 – 5.25, the dispersibility from 11.00 – 7.00, the total soluble solids from 4.00 to 1.75, and the total titratable acidity from 0.314 – 0.328. The sensory evaluation showed that, in terms of flavor, samples BCM and PCM were significantly different from samples BTM and PTM. In terms of colour, sample BCM showed a significant difference from samples BTM, PCM and PTM. In terms of texture, sample BCM was significantly different from samples BTM, PCM and PTM. The general acceptability showed that sample BCM was significantly different from the other samples and was the most accepted. The microbial analysis indicated that the microbial load increases with time: the bacterial count ranged from 1.3 × 10⁵ – 1.20 × 10⁶ to 1.6 × 10⁵ – 1.06 × 10⁶, and the fungal count ranged from 4.0 × 10⁵ – 8.0 × 10⁵ to 4.0 × 10⁵ – 7.0 × 10⁵. The studies showed that BCM was the most preferred.

Keywords: imitation milk, Bambara nut, Pigeon pea, proximate composition

Procedia PDF Downloads 328