Search results for: two stream interfaces
437 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure
Authors: Y. L. Hor, H. S. Chu, V. P. Bui
Abstract:
Composite materials are widely used to replace conventional materials, especially in the aerospace industry, to reduce the weight of devices. They are formed by combining reinforced materials via adhesive bonding to produce a bulk material with altered macroscopic properties. In bulk composites, degradation may occur at the microscopic scale, within each individual reinforced fiber layer or, especially, in its matrix layer, in the form of delamination, inclusions, disbonds, voids, cracks, and porosity. In this paper, we focus on the detection of defects in the matrix layer, in which the adhesion between the composite plies is in contact but coupled only through a weak bond. Adhesive defects are tested through various nondestructive methods. Among them, pulsed phase thermography (PPT) has shown advantages in providing improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the nondestructive inspection of weak bonding in composite materials. The resulting thermal evolution field comprises internal reflections between the interfaces of the defects and the specimen, and the key features of the defects present in the material can be obtained by investigating the thermal evolution of the field distribution. Computational simulation of such inspections has allowed the techniques to be improved and applied to various inspections, such as materials with high thermal conductivity and more complex structures.
Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization
Procedia PDF Downloads 176
436 Settlement Analysis of Back-To-Back Mechanically Stabilized Earth Walls
Authors: Akhila Palat, B. Umashankar
Abstract:
Back-to-back Mechanically Stabilized Earth (MSE) walls are cost-effective soil-retaining structures that can tolerate large settlements compared to conventional gravity retaining walls. They are also an economical way to meet everyday earth-retention needs for highway and bridge grade separations, railroads, and commercial and residential developments. However, existing design guidelines (FHWA/BS/IS codes) do not provide a mechanistic approach for the design of back-to-back reinforced retaining walls, and the settlement analysis of such structures is limited in the literature. A better understanding of the deformations of this wall system requires an analytical tool that incorporates the properties of the backfill material, foundation soil, and geosynthetic reinforcement, and accounts for soil–structure interactions in a realistic manner. This study was conducted to investigate the effect of reinforced back-to-back MSE walls on wall settlements and facing deformations. Back-to-back reinforced retaining walls were modeled and compared using the commercially available finite difference package FLAC 2D. Parametric studies were carried out for various angles of shearing resistance of the backfill material and foundation soil, and for the axial stiffness of the reinforcement. A 6 m-high wall was modeled, and the facing panels were taken as full-length panels with nominal thickness. Reinforcement was modeled using cable elements (two-dimensional structural elements). Interfaces were considered between soil and wall, and between soil and reinforcement.
Keywords: back-to-back walls, numerical modeling, reinforced wall, settlement
Procedia PDF Downloads 303
435 Experimental Investigation of S822 and S823 Wind Turbine Airfoils Wake
Authors: Amir B. Khoshnevis, Morteza Mirhosseini
Abstract:
The paper deals with a sub-part of an extensive research program on the wake survey method at various Reynolds numbers and angles of attack. This research experimentally investigates the wake flow characteristics behind the S823 and S822 airfoils, which are designed for small wind turbines. Velocity measurements were made using a hot-wire anemometer. Data were acquired in the wake of the airfoil at locations from 0.01c to 3c (where c is the chord length). The Reynolds number was increased by increasing the free-stream velocity. Results showed that the mean velocity profiles depend on the angle of attack and the location of data collection. Data were acquired at low Reynolds numbers (smaller than 10^5). The effects of Reynolds number on the mean velocity profiles are more significant at locations near the trailing edge, and these effects decrease with distance downstream of the trailing edge. The wake region of the mean velocity profiles widened with increasing angle of attack, except for 7°, and the maximum velocity deficit (velocity defect) also increased. The difference between the mean velocity inside and outside the wake decreased with distance from the trailing edge, and the mean velocity profile became wider and more uniform.
Keywords: angle of attack, Reynolds number, velocity deficit, separation
Procedia PDF Downloads 377
434 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads
Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan
Abstract:
Traffic management is one of the biggest issues on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it decides the arrival rate of vehicles at an intersection, which is the major point of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Trees (ET), and Extreme Gradient Boosting (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
Keywords: stream speed, urban roads, machine learning, traffic flow
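As a rough illustration of the comparison described above, the sketch below fits four tree-based regressors to synthetic volume/width data and compares their mean absolute errors. The feature ranges, the synthetic speed relation, and the use of scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: comparing tree-based regressors for link-speed prediction.
# Feature names and data are illustrative; the study's dataset is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(200, 4000, n),   # categorized traffic volume (veh/h), assumed range
    rng.uniform(5.5, 14.0, n),   # road width (m), assumed range
])
y = 60 - 0.008 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 3, n)  # synthetic link speed (km/h)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "DT": DecisionTreeRegressor(random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ET": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "GB": GradientBoostingRegressor(random_state=0),  # stand-in for XGBoost
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```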
Procedia PDF Downloads 71
433 Clean Gold Solution from Printed Circuit Board Physical Processing Dust by Selective Complexation
Authors: Iyiola O. Otunniyi, Oluwayimika O. Oluokun
Abstract:
The two-step leaching process of PCB dust produces a first leaching stream containing assorted metals that still requires more demanding multistage processing afterward to recover base metals and precious metals. In this work, a three-step selective complexation scheme produces a clean gold solution from printed circuit board dust. After optimizing for temperature and concentrations, the first step, under oxidative ammonia leaching, recovered no gold, 90% Cu, and 50% Zn. The second step, acid leaching, recovered no gold, 89% Fe, 48% Zn, and 94% Ni. The recoveries generally increased with decreasing dust particle size, except for zinc under oxidative ammonia, and it was noted that its various alloy forms in PCB could be responsible for this. At the third leaching step, using acidified thiourea with 0.1 M H₂O₂ at 25 °C, gold recovery was 99%. The leaching rate was shown to be chemically controlled, implying that reagent dosage control will compensate for feed assay shifts in an operation design. Copper, zinc, and nickel will be easily recoverable from the leach solutions of the first two steps in this leaching scheme. The third step produced a clean gold solution for easy processing downstream.
Keywords: gold thiourea complexation, printed circuit board, step leaching, selective recovery
Procedia PDF Downloads 11
432 Development of Automatic Laser Scanning Measurement Instrument
Authors: Chien-Hung Liu, Yu-Fen Chen
Abstract:
This study used a triangular laser probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time statistical analysis of different measured data. This structure was used to design a system integration program: the triangular laser probe performs scattering or reflection non-contact measurement, the captured signals are transferred to the computer through RS-232, and RS-485 is used to control the three-axis platform for wide-range measurement. The data captured by the laser probe are formed into a 3D surface. This study constructed an optical measurement application program based on the concept of a visual programming language. First, the signals are transmitted to the computer through RS-232/RS-485, and then the signals are stored and recorded in a graphic interface in real time. This programming concept analyzes various messages and produces appropriate presentation graphs and data processing to provide users with friendly graphic interfaces and data-processing state monitoring, and identifies graphically whether the present data are normal. The major functions of the measurement system developed in this study are thickness measurement, SPC, surface smoothness analysis, and analytical calculation of trend lines. A result report can be generated and printed promptly. This study measured different heights and surfaces successfully, performed online data analysis and processing effectively, and developed a man-machine interface for users to operate.
Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW
Procedia PDF Downloads 360
431 Towards Reliable Mobile Cloud Computing
Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy
Abstract:
Cloud computing has been one of the fastest growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication, and storage are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining momentum and can be used to extend cloud computing functions, services, and results to the world of future mobile applications, enabling delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper describes reliability for MCC by determining the ability of a system or component to function correctly under stated conditions for a specified period of time, in order to deal with the estimation and management of high levels of lifetime engineering uncertainty and risks of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and availability percentages for the main components in both cloud computing and MCC structures, applied to a single-node OpenStack installation, to analyze its performance with different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on overall cloud system reliability and should be taken into account in order to deliver highly available cloud computing services to mobile consumers.
Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack
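The reliability figures named above (MTTF, MTBF, availability) can be computed directly from uptime and repair logs. The sketch below uses hypothetical numbers and one common convention for repairable systems; it is not the authors' assessment procedure.

```python
# Minimal sketch (not the paper's actual code): steady-state reliability figures
# from hypothetical uptime/repair logs of a single-node OpenStack service.
uptimes_h = [310.0, 295.5, 402.2, 188.3]     # assumed hours of operation between failures
repair_times_h = [2.5, 4.0, 1.5, 3.0]        # assumed hours to restore service after each failure

mttf = sum(uptimes_h) / len(uptimes_h)             # Mean Time To Failure
mttr = sum(repair_times_h) / len(repair_times_h)   # Mean Time To Repair
mtbf = mttf + mttr                                 # one common convention for repairable systems
availability = mttf / (mttf + mttr) * 100          # steady-state availability (%)

print(f"MTTF = {mttf:.1f} h, MTBF = {mtbf:.1f} h, availability = {availability:.2f}%")
```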
Procedia PDF Downloads 398
430 Waste Identification Diagrams Effectiveness: A Case Study in the Manaus Industrial Pole
Authors: José Dinis-Carvalho, Levi Guimarães, Celina Leão, Rui Sousa, Rosa Eliza Vieira, Larissa Thomaz, Kelliane Guerreiro
Abstract:
This research paper investigates the efficacy of waste identification diagrams (WIDs) as a tool for waste reduction and management within the Manaus Industrial Pole. The study focuses on assessing the practical application and effectiveness of WIDs in identifying, categorizing, and mitigating various forms of waste generated across industrial processes. Employing a mixed-methods approach, including a qualitative questionnaire applied to 5 companies and quantitative data analysis with SPSS statistical software, the research evaluates the implementation and impact of WIDs on waste reduction practices in select industries within the Manaus Industrial Pole. The findings contribute to understanding the utility of WIDs as a proactive strategy for waste management, offering insights into their potential for fostering sustainable practices and promoting environmental stewardship in industrial settings. The study also discusses challenges, best practices, and recommendations for optimizing the utilization of WIDs in industrial waste management, thereby addressing the broader implications for sustainable industrial development.
Keywords: waste identification diagram, value stream mapping, overall equipment effectiveness, lean manufacturing
Procedia PDF Downloads 56
429 Crisis Communication at Destinations: A Study for Tourism Managers
Authors: Volkan Altintas, Burcu Oksuz
Abstract:
The tourism industry essentially requires effective crisis management and crisis communication skills, as it is extremely vulnerable to crises. In terms of destinations, tourism crises cause dramatic decreases in the number of inbound tourists, impairment of the destination's image, and a decline in the preferability of the destination, not only in the short term but also in the long term. Therefore, any destination should be well prepared for crisis situations that may arise for various reasons. Currently, advances in communication technologies enable information and experiences to spread rapidly, and negative information and experiences tend to be shared to an even greater extent. Destinations are broadly exposed to the impacts of such communication streams. Turkey, as a tourism destination, is almost continuously exposed to crises and their adverse impacts, and thus requires effective crisis communication activities to be maintained. Hence, the approaches of tourism managers toward crisis communication and their proposals for addressing the issues in question are important. This study intends to set forth the views of managers serving in the tourism industry on crisis communication at destinations. The theoretical part of the study describes and explains crisis management and crisis communication at destinations, after which the outcomes of the thorough in-depth interviews and discussions conducted to establish the views of tourism managers are provided. Managers indicated the role and importance of crisis communication at destinations.
Keywords: crisis communication, crisis management, destination, tourism managers
Procedia PDF Downloads 314
428 Evaluating Portfolio Performance by Highlighting Network Property and the Sharpe Ratio in the Stock Market
Authors: Zahra Hatami, Hesham Ali, David Volkman
Abstract:
Selecting a portfolio for investing is a crucial decision for individuals and legal entities. In the last two decades, with economic globalization, a stream of financial innovations has rushed to the aid of financial institutions. Selecting stocks for a portfolio is always a challenging task for investors. This study aims to create a financial network to identify optimal portfolios using network centrality metrics. This research presents a community detection technique for superior stocks that can be described as an optimal stock portfolio to be used by investors. By using the advantages of a network and its properties in the extracted communities, a group of stocks was selected for each of various time periods. The performance of the optimal portfolios was compared to a well-known index, and their Sharpe ratios were calculated over time to evaluate their profitability for decision making. The analysis shows that the selected potential portfolio of stocks with low centrality measurements can outperform the market; however, it has a lower Sharpe ratio than stocks with high centrality scores. In other words, stocks with low centralities could outperform the S&P 500 yet have a lower Sharpe ratio than highly central stocks.
Keywords: portfolio management performance, network analysis, centrality measurements, Sharpe ratio
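A minimal sketch of the kind of pipeline described above is given below: a correlation network over synthetic returns, selection of low-centrality stocks, and an annualized Sharpe ratio. The tickers, returns, edge threshold, and risk-free rate are all assumptions for illustration, not the authors' data or parameters.

```python
# Illustrative sketch, not the authors' pipeline: correlation network, low-centrality
# stock selection, and Sharpe ratio of the resulting equal-weighted portfolio.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
tickers = [f"S{i}" for i in range(20)]               # hypothetical tickers
returns = rng.normal(0.0004, 0.01, size=(252, 20))   # synthetic daily returns (1 year)

corr = np.corrcoef(returns.T)
G = nx.Graph()
G.add_nodes_from(tickers)
for i in range(20):
    for j in range(i + 1, 20):
        if abs(corr[i, j]) > 0.1:                    # assumed edge threshold
            G.add_edge(tickers[i], tickers[j], weight=abs(corr[i, j]))

centrality = nx.degree_centrality(G)
low_central = sorted(centrality, key=centrality.get)[:5]   # five least-central stocks
idx = [tickers.index(t) for t in low_central]
port_ret = returns[:, idx].mean(axis=1)                    # equal-weighted daily portfolio return

rf_daily = 0.02 / 252                                      # assumed risk-free rate
sharpe = (port_ret.mean() - rf_daily) / port_ret.std(ddof=1) * np.sqrt(252)
print("Low-centrality portfolio:", low_central, "annualized Sharpe:", round(sharpe, 2))
```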
Procedia PDF Downloads 155
427 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider
Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón
Abstract:
The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks. They are realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.
Keywords: AD0, ALICE, DCS, LHC
Procedia PDF Downloads 306
426 Riparian Buffer Strips’ Capability of E. coli Removal in New York Streams
Authors: Helen Sanders, Joshua Cousins
Abstract:
The purpose of this study is to ascertain whether riparian buffer strips can be used to reduce Escherichia coli (E. coli) runoff into streams in Central New York. Mainstream methods currently utilized to reduce E. coli runoff include fencing and staggered fertilizing plans for agriculture. These methods still do not significantly limit E. coli and thus pose a serious health risk to individuals who swim in contaminated waters or consume contaminated produce. One additional method, still in research development, involves the planting of vegetated riparian buffers along waterways. Currently, riparian buffer strips are primarily used for filtration of nitrate and phosphate runoff to slow erosion, regulate pH, and improve biodiversity within waterways. For this research, four different stream sites were selected, at which rainwater runoff was collected both at the riparian buffer and at the E. coli-sourced runoff upstream. Preliminary results indicate an average 70% decrease in E. coli content in streams at the riparian buffer strips compared to upstream runoff. This research could be used to support vegetated buffer planting as a method to decrease manure runoff into essential waterways.
Keywords: Escherichia coli, riparian buffer strips, vegetated riparian buffers, runoff, filtration
Procedia PDF Downloads 179
425 Powder Flow with Normalized Powder Particles Size Distribution and Temperature Analyses in Laser Melting Deposition: Analytical Modelling and Experimental Validation
Authors: Muhammad Arif Mahmood, Andrei C. Popescu, Mihai Oane, Diana Chioibascu, Carmen Ristoscu, Ion N. Mihailescu
Abstract:
Powder flow and temperature distributions are recognized as influencing factors during the laser melting deposition (LMD) process that affect not only the consolidation rate but also the characteristics of the deposited layers. Herewith, two simplified analytical models will be presented to simulate the powder flow, with the inclusion of the powder particle size distribution in Gaussian form, under three powder jet nozzles, and the temperature analyses during the LMD process. The output of the first model will serve as the input to the second model. The models will be validated with experimental data, i.e., the weight measurement method for the powder particle distribution and infrared imaging for the temperature analyses. This study will increase the cost-efficiency of the LMD process by adjustment of the operating parameters for reaching optimal powder debit and energy. This research has received funds under the Marie Sklodowska-Curie grant agreement No. 764935, from the European Union's Horizon 2020 research and innovation program.
Keywords: laser additive manufacturing, powder particles size distribution in Gaussian form, powder stream distribution, temperature analyses
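As a hedged illustration of the distributions named above, the sketch below builds a normalized Gaussian particle-size distribution and a combined radial concentration profile for three nozzle streams. All numerical values (size range, stream width, nozzle offsets) are assumed for illustration and are not taken from the study.

```python
# Minimal illustrative sketch with assumed parameter values, not those of the study.
import numpy as np

# Gaussian particle-size distribution, normalized to unit area
d = np.linspace(20e-6, 120e-6, 200)          # particle diameters (m), assumed range
d_mean, d_std = 70e-6, 15e-6                 # assumed mean and spread
psd = np.exp(-0.5 * ((d - d_mean) / d_std) ** 2)
psd /= psd.sum() * (d[1] - d[0])             # normalize so the distribution integrates to 1

# Radial powder concentration at the deposition plane from three nozzle streams
r = np.linspace(-3e-3, 3e-3, 301)            # radial coordinate (m)
nozzle_offsets = [-1e-3, 0.0, 1e-3]          # assumed stream centres (m)
sigma_jet = 0.6e-3                           # assumed stream width (m)
conc = sum(np.exp(-0.5 * ((r - r0) / sigma_jet) ** 2) for r0 in nozzle_offsets)
conc /= conc.sum() * (r[1] - r[0])           # normalized combined powder-stream profile

print("PSD integral:", round(psd.sum() * (d[1] - d[0]), 3))
print("Peak of combined stream profile at r =", r[np.argmax(conc)], "m")
```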
Procedia PDF Downloads 136
424 Challenge of Baseline Hydrology Estimation at Large-Scale Watersheds
Authors: Can Liu, Graham Markowitz, John Balay, Ben Pratt
Abstract:
Baseline or natural hydrology is commonly employed for hydrologic modeling and for quantification of hydrologic alteration due to manmade activities. It can inform planning- and policy-related efforts by various state and federal water resource agencies to restore natural streamflow regimes. A common challenge faced by hydrologists is how to replicate unaltered streamflow conditions, particularly in large watershed settings prone to development and regulation. Three different methods were employed to estimate baseline streamflow conditions for six major subbasins of the Susquehanna River Basin, those being: 1) incorporating consumptive water use and reservoir operations back into regulated gaged records; 2) using a map correlation method and flow duration (exceedance probability) regression equations; 3) extending the pre-regulation streamflow records based on the relationship between concurrent streamflows at unregulated and regulated gage locations. Parallel analyses were performed among the three methods, and the limitations associated with each are presented. Results from these analyses indicate that generating baseline streamflow records for large-scale watersheds remains challenging, even with long-term continuous stream gage records available.
Keywords: baseline hydrology, streamflow gage, subbasin, regression
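The second method relies on flow-duration (exceedance-probability) statistics. The sketch below shows how such a curve is commonly built from a daily gage record using the Weibull plotting position; the synthetic flows and the chosen percentiles are assumptions standing in for real gage data, not the study's records.

```python
# Minimal sketch: flow-duration (exceedance-probability) curve from a daily flow record,
# using the Weibull plotting position P = m / (n + 1). Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
daily_flow = rng.lognormal(mean=3.0, sigma=0.8, size=3650)   # hypothetical daily flows (cfs)

sorted_flow = np.sort(daily_flow)[::-1]          # rank flows from largest to smallest
n = sorted_flow.size
exceedance = np.arange(1, n + 1) / (n + 1)       # probability each flow is equaled or exceeded

for p in (0.10, 0.50, 0.90):                     # common exceedance percentiles (Q10, Q50, Q90)
    q = np.interp(p, exceedance, sorted_flow)
    print(f"Q{int(p * 100)} is approximately {q:.1f} cfs")
```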
Procedia PDF Downloads 324
423 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone. It enables end users to program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group, and select) using the system and also Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. By using natural language for end-user software engineering, such interfaces aim to overcome the present bottleneck of professional developers.
Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet
Procedia PDF Downloads 313
422 Catalytic Combustion of Methane over Pd-MeOₓ-CeO₂/Al₂O₃ (Me = Co or Ni) Catalysts
Authors: Silviya Todorova, Anton Naydenov, Ralitsa Velinova, Alexander Larin
Abstract:
Catalytic combustion of methane has been extensively investigated for emission control and power generation during the last decades. The alumina-supported palladium catalyst is widely accepted as the most active catalyst for catalytic combustion of methane. The activity of Pd/Al₂O₃ decreases during time on stream, especially under water vapor. The following order of activity in the reaction of complete oxidation of methane was established: Co₃O₄ > CuO > NiO > Mn₂O₃ > Cr₂O₃. It may be expected that the combination of Pd with these oxides could lead to promising catalysts for complete methane oxidation. In the present work, we investigate the activity of Pd/Al₂O₃ catalysts promoted with other metal oxides (MOx; M = Ni, Co, Ce). The Pd-based catalysts modified by a metal oxide were prepared by sequential impregnation of Al₂O₃ with aqueous solutions of Me(NO₃)₂.6H₂O and Pd(NO₃)₂H₂O. All samples were characterized by X-ray diffraction (XRD), temperature-programmed reduction (TPR), and X-ray photoelectron spectroscopy (XPS). An improvement of activity was observed after modification with the different oxides. The results demonstrate that the Pd/Al₂O₃ catalysts modified with Co and Ce by impregnation with a common solution of the respective salts exhibit the most promising catalytic activity for methane oxidation. Most probably, the presence of Co₃O₄ and CeO₂ on the catalytic surface increases surface oxygen and therefore leads to better reactivity in methane combustion.
Keywords: methane combustion, palladium, Co-Ce, Ni-Ce
Procedia PDF Downloads 186
421 Characterization of Laminar Flow and Power Consumption in Agitated Vessel with Curved Blade Agitator
Authors: Amine Benmoussa, Mohamed Bouanini, Mebrouk Rebhi
Abstract:
Stirring is one of the unifying processes which form part of the mechanical unit operations in process technologies such as chemical, biotechnological, pharmaceutical, petrochemical, cosmetic, and food processing. Therefore, determining the level of mixing and the overall behavior and performance of mixing tanks is crucial from the product quality and process economics points of view. The most fundamental needs for the analysis of these processes, from both a theoretical and an industrial perspective, are knowledge of the hydrodynamic behavior and the flow structure in such tanks. Depending on the purpose of the operation carried out in the mixer, the best choice of tank geometry and agitator type can vary widely. Initially, a local and global study, namely of the velocity field and power number, of a typical agitation system stirred by a straight two-blade impeller (d/D = 0.5) allowed us to test the reliability of the CFD; the results were compared with experimental data from the literature, and very good agreement was observed. The stream function, the velocity profile, the velocity fields, and the power number are analyzed. It was shown that the hydrodynamics is modified by the curvature of the impeller, which plays a key role.
Keywords: agitated vessels, curved blade agitator, laminar flow, finite volume method
Procedia PDF Downloads 284
420 Progressive Loading Effect of Co Over SiO2/Al2O3 Catalyst for COx Free Hydrogen and Carbon Nanotubes Production via Catalytic Decomposition of Methane
Authors: Sushil Kumar Saraswat, K. K. Pant
Abstract:
Co metal supported on SiO2 and Al2O3 catalysts, with a metal loading varied from 30 to 70 wt.%, was evaluated for the decomposition of methane to CO/CO2-free hydrogen and carbon nanomaterials. The catalytic runs were carried out at 550-800 °C under atmospheric pressure using a fixed-bed vertical flow reactor. The fresh and spent catalysts were characterized by BET surface area analysis, TPR, XRD, SEM, TEM, and TG analysis. The data showed that the 50% Co/Al2O3 catalyst exhibited remarkably higher activity and stability, up to 10 h time-on-stream at 750 °C, with respect to H2 production compared to the rest of the catalysts. However, the catalytic activity and durability greatly declined at higher temperatures. The main reason for the catalytic inhibition of the Co-containing SiO2 catalysts is the higher reduction temperature of Co2SiO4. TEM images illustrate that carbon materials with various morphologies (carbon nanofibers (CNFs), helical-shaped CNFs, and branched CNFs) were obtained, depending on the catalyst composition and reaction temperature. The TG data showed that a higher yield of MWCNTs was achieved over the 50% Co/Al2O3 catalyst compared to the other catalysts.
Keywords: carbon nanotubes, cobalt, hydrogen production, methane decomposition
Procedia PDF Downloads 323
419 An Analysis of Digital Forensic Laboratory Development among Malaysia’s Law Enforcement Agencies
Authors: Sarah K. Taylor, Miratun M. Saharuddin, Zabri A. Talib
Abstract:
Cybercrime is on the rise, and yet many Law Enforcement Agencies (LEAs) in Malaysia have no Digital Forensics Laboratory (DFL) to assist them in the acquisition and analysis of digital evidence. Of the estimated 30 LEAs in Malaysia, sadly, only eight own a DFL. All of the DFLs are concentrated in the capital of Malaysia, and none are at the state level. LEAs are still depending on the national DFL (CyberSecurity Malaysia) even for simple and straightforward cases. A survey was conducted among LEAs in Malaysia owning a DFL to understand their history of establishing the DFL, the challenges that they faced, and the significance of the DFL to their case investigations. The results showed that while some LEAs faced no challenge in establishing a DFL, some of them took seven to 10 years to do so. The reason was the difficulty in convincing their management because of the high costs involved. The results also revealed that with the establishment of a DFL, LEAs were better able to get faster forensic results and to meet agency timeline expectations. It was also found that LEAs were able to get more meaningful forensic results on cases that require niche expertise, compared to sending cases off to the national DFL. Other than that, cases are getting more complex, and hence, a continuous stream of budget for equipment and training is inevitable. The results derived from the study are hoped to be used by other LEAs in justifying to their management the benefits of establishing an in-house DFL.
Keywords: digital evidence, digital forensics, digital forensics laboratory, law enforcement agency
Procedia PDF Downloads 176
418 Semantics of the Word “Nas” in Verse 24 of Surah Al-Baqarah Based on Izutsu’s Semantic Field Theory
Authors: Seyedeh Khadijeh. Mirbazel, Masoumeh Arjmandi
Abstract:
Semantics is a linguistic approach and a scientific stream, and like all scientific streams, it is dynamic. The study of meaning is carried out over the broad semantic collections of words that form a discourse. In other words, meaning is not something that can be found in a single word; rather, the formation of meaning is a process that takes place in a discourse as a whole. One of the contemporary semantic theories is Izutsu's Semantic Field Theory. According to this theory, the discovery of meaning depends on the function of words and takes place within the context of language. The purpose of this research is to identify the meaning of the word "Nas" in the discourse of verse 24 of Surah Al-Baqarah, which introduces "Nas" as the firewood of hell but which translators have rendered as "people". The present research investigated the semantic structure of the word "Nas" using the aforementioned theory through a descriptive-analytical method. In the process of investigation, by matching the semantic fields of the Quranic word "Nas", this research came to the conclusion that "Nas" refers to those persons who have forgotten God and His covenant in believing in His Oneness. For this reason, God called them "Nas (the forgetful)", the imperfect participle of the noun /næsiwoɔn/ in the single trinity of the Arabic language, which means "to forget". Therefore, the intended meaning of "Nas" in the verses that contain the word "Nas" is not equivalent to "people", which is a general noun.
Keywords: Nas, people, semantics, semantic field theory
Procedia PDF Downloads 190
417 Lean Environmental Management Integration System (LEMIS) Framework Development
Authors: A. P. Puvanasvaran, Suresh A. L. Vasu, N. Norazlin
Abstract:
The Lean Environmental Management Integration System (LEMIS) framework development is an integration between lean core elements and ISO 14001. Curiosity about the relationship between continuous improvement and the sustainability of lean implementation has driven this study toward LEMIS. The characteristics of the ISO 14001 standard clauses and the core elements of lean principles are explored from past studies and literature reviews. A survey was carried out on ISO 14001-certified companies to examine continual improvement achieved by implementing the ISO 14001 standard. The study found that there is a significant and positive relationship between the lean principles of value, value stream, flow, pull, and perfection and the ISO 14001 requirements. LEMIS is significant in supporting continuous improvement and sustainability. The integration system can be implemented in any manufacturing company. It raises awareness of why organizations need to sustain their environmental management system. Meanwhile, the lean principles can be adopted in order to streamline the daily activities of the company. Throughout the study, it was proven that there is no sacrifice or trade-off between the lean principles and the ISO 14001 requirements. The framework developed in the study can be further simplified in the future, especially the method of crossing each sub-requirement of the ISO 14001 standard with the core elements of the lean principles in this study.
Keywords: LEMIS, ISO 14001, integration, framework
Procedia PDF Downloads 406
416 Influence of Vibration Amplitude on Reaction Time and Drowsiness Level
Authors: Mohd A. Azizan, Mohd Z. Zali
Abstract:
It is well established that exposure to vibration has an adverse effect on human health, comfort, and performance. However, there is little quantitative knowledge on performance combined with drowsiness level during vibration exposure. This paper reports a study investigating the influence of vibration amplitude on seated occupants' reaction time and drowsiness level. Eighteen male volunteers were recruited for this experiment. Before commencing the experiment, the total transmitted acceleration, measured at the interfaces between the seat pan and seatback and the human body, was adjusted to 0.2 ms⁻² r.m.s. and 0.4 ms⁻² r.m.s. for each volunteer. Seated volunteers were exposed to Gaussian random vibration with a frequency band of 1-15 Hz at two amplitude levels (low vibration amplitude and medium vibration amplitude) for 20 minutes on separate days. For the purpose of drowsiness measurement, volunteers were asked to complete a 10-minute PVT test before and after vibration exposure and to rate their subjective drowsiness with a Karolinska Sleepiness Scale (KSS) score before vibration, at every 5-minute interval, and following 20 minutes of vibration exposure. Strong evidence of drowsiness was found, as there was a significant increase in reaction time and number of lapses following exposure to vibration in both conditions. However, the effect is more apparent at the medium vibration amplitude. A steady increase in drowsiness level can also be observed in the KSS in all volunteers. However, no significant differences were found in KSS between the low and medium vibration amplitudes. It is concluded that exposure to vibration has an adverse effect on human alertness level, more pronounced at the higher vibration amplitude. Taken together, these findings suggest a role of vibration in promoting drowsiness, especially at higher vibration amplitudes.
Keywords: drowsiness, human vibration, Karolinska Sleepiness Scale, psychomotor vigilance test
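For readers unfamiliar with the r.m.s. targets quoted above, the sketch below computes the r.m.s. of synthetic seat-pan and seatback acceleration signals and combines them as a root-sum-of-squares. The sampling rate, signal levels, and combination rule are illustrative assumptions, not the study's measurement procedure (which would also apply the appropriate frequency weightings).

```python
# Minimal sketch (assumptions noted): r.m.s. acceleration of two measured interfaces and a
# root-sum-of-squares combination, as one common way to express a total transmitted value.
import numpy as np

fs = 512                                        # assumed sampling rate (Hz)
t = np.arange(0, 20 * 60, 1 / fs)               # 20-minute exposure
rng = np.random.default_rng(3)
seat_pan = rng.normal(0.0, 0.15, t.size)        # synthetic signal (1-15 Hz band assumed pre-filtered)
seatback = rng.normal(0.0, 0.10, t.size)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

total = np.sqrt(rms(seat_pan) ** 2 + rms(seatback) ** 2)   # root-sum-of-squares combination
print(f"seat pan {rms(seat_pan):.2f}, seatback {rms(seatback):.2f}, total {total:.2f} m/s^2 r.m.s.")
```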
Procedia PDF Downloads 284
415 Cryogenic Separation of CO2 from Molten Carbonate Fuel Cell Anode Outlet—Experimental Guidelines
Authors: Jarosław Milewski, Rafał Bernat
Abstract:
This paper presents an analysis of using a cryogenic separation unit (CSU) for recovering fuel from the anode off-gas of molten carbonate fuel cells (MCFCs) in order to upgrade the efficiency of the unit. In the proposed solution, the CSU is used for condensing water and carbon dioxide out of the anode off-gas and re-cycling the rest of the stream to the anode, saving a certain amount of fuel (at least 30%). The resulting system efficiency is increased considerably. The CSU consumes power, so this solution carries an energy penalty as well; on the other hand, the MCFC generates a large amount of heat at elevated temperature, so part of the CSU can be based on an absorption chiller. In all cases, a high amount of fuel is obtained after condensation of water and carbon dioxide and is re-cycled to the anode inlet. Based on mathematical modeling done previously, the concept and guidelines for the forthcoming experimental investigations are presented in this paper. During the planned experiments, an existing single-cell laboratory stand will be equipped with a re-cycle device (a fan, a peristaltic pump, etc.). In parallel, a mixture of anode off-gas will be cooled down to determine the proper temperature for the separation of water and carbon dioxide.
Keywords: cryogenic separation, experiments, fuel cells, molten carbonate fuel cells
Procedia PDF Downloads 247
414 Forecasting Performance Comparison of Autoregressive Fractional Integrated Moving Average and Jordan Recurrent Neural Network Models on the Turbidity of Stream Flows
Authors: Daniel Fulus Fom, Gau Patrick Damulak
Abstract:
In this study, the Autoregressive Fractional Integrated Moving Average (ARFIMA) and Jordan Recurrent Neural Network (JRNN) models were employed to model the forecasting performance of the daily turbidity flow of White Clay Creek (WCC). The two methods were applied to the log-difference series of the daily turbidity flow series of WCC. The measures of error employed to investigate the forecasting performance of the ARFIMA and JRNN models are the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The outcome of the investigation revealed that the forecasting performance of the JRNN technique is better than that of the ARFIMA technique in the mean square error sense. The results of the ARFIMA and JRNN models were obtained by simulation of the models using MATLAB version 8.03. The significance of using the log-difference series rather than the difference series is that the log-difference series stabilizes the turbidity flow series better than the difference series for the ARFIMA and JRNN models.
Keywords: auto regressive, mean absolute error, neural network, root square mean error
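The transform and error measures named above are easy to state concretely. The sketch below (in Python rather than the MATLAB used in the study) applies a log-difference transform to a synthetic turbidity series and computes RMSE and MAE for two stand-in forecast series; the data and forecast errors are assumptions, not the study's results.

```python
# Illustrative sketch: log-difference transform and the RMSE/MAE error measures.
import numpy as np

rng = np.random.default_rng(5)
turbidity = np.exp(np.cumsum(rng.normal(0, 0.05, 400)) + 2.0)   # synthetic daily turbidity

log_diff = np.diff(np.log(turbidity))        # log-difference series used for modelling

# Hypothetical one-step-ahead forecasts from two competing models
actual = log_diff[300:]
forecast_a = actual + rng.normal(0, 0.020, actual.size)   # stand-in for ARFIMA errors
forecast_b = actual + rng.normal(0, 0.012, actual.size)   # stand-in for JRNN errors

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

for name, f in [("ARFIMA-like", forecast_a), ("JRNN-like", forecast_b)]:
    print(f"{name}: RMSE={rmse(actual, f):.4f}, MAE={mae(actual, f):.4f}")
```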
Procedia PDF Downloads 268
413 Synergistic Effect of Zr-Modified Cu-ZnO-Al₂O₃ and Bio-Templated HZSM-5 Catalysts in CO₂ Hydrogenation to Methanol and DME
Authors: Abrar Hussain, Kuen-Song Lin, Sayed Maeen Badshah, Jamshid Hussain
Abstract:
The conversion of CO₂ into versatile, useful compounds such as fuels and other chemicals remains a challenging frontier in research, demanding the innovation of increasingly effective catalysts. In the present work, a catalyst incorporating a zirconium (Zr) modification within CuO–ZnO–Al₂O₃ (CZA) was synthesized via a co-precipitation method to convert CO₂ into methanol. Furthermore, bio-HZSM-5 was used to promote methanol dehydration to produce dimethyl ether (DME). We prepared the hierarchically porous bio-HZSM-5 with remarkable pore connectivity by utilizing an economical loofah sponge and rice husks as biotemplates. The synthesized catalysts were characterized using field-emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), N₂ adsorption (BET), temperature-programmed desorption (NH₃-TPD), and thermogravimetric analysis (TGA). The Zr addition improved the performance of the CZZA catalyst as a structural promoter, leading to increased DME selectivity and total carbon conversion by enhancing the active sites, the surface area, and the synergistic interfaces between CuO and ZnO. The presence of silicon in the biomass, notably from the loofah sponge (0.016 wt%) and rice husks (8.3 wt%), also played a pivotal role in the preparation of the bio-HZSM-5. Furthermore, compared to the CZZA/com-ZSM-5 catalyst, the integration of CZZA with the bio-HZSM-5-L bifunctional catalyst achieved the highest DME yield (12.1%), DME selectivity (58.6%), and CO₂ conversion (22.5%) at 280 °C and 30 bar. The payback time for 5 and 10 tons-per-day (5- and 10-TPD) DME production using the catalytic process, with CO₂ taken from petrochemical refinery plant waste-gas emissions, was 2.98 and 2.44 years, respectively.
Keywords: cost assessment, dimethyl ether, low-cost bio-HZSM-5, CZZA catalyst, CO₂ hydrogenation
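The payback times quoted above follow from simple (undiscounted) payback arithmetic. In the sketch below, the capital and cash-flow figures are placeholders chosen only to illustrate the calculation, since the abstract reports only the resulting 2.98- and 2.44-year values.

```python
# Simple payback-time sketch; all monetary figures are assumed placeholders,
# not values reported by the authors.
capex_5tpd = 14.9e6        # assumed total capital investment for a 5-TPD plant (USD)
net_cash_5tpd = 5.0e6      # assumed annual net cash flow (USD/year)

capex_10tpd = 24.4e6       # assumed capital investment for a 10-TPD plant (USD)
net_cash_10tpd = 10.0e6    # assumed annual net cash flow (USD/year)

def payback(capex, annual_net):
    """Simple (undiscounted) payback time in years."""
    return capex / annual_net

print(f"5-TPD payback: {payback(capex_5tpd, net_cash_5tpd):.2f} years")
print(f"10-TPD payback: {payback(capex_10tpd, net_cash_10tpd):.2f} years")
```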
Procedia PDF Downloads 13
412 A Cost Effective Approach to Develop Mid-Size Enterprise Software Adopted the Waterfall Model
Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman
Abstract:
Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to design mid-size enterprise software using the Waterfall model, one of the SDLCs (Software Development Life Cycles), in a cost-effective way. To fulfill the research objectives, in this study, we developed mid-sized enterprise software named "BSK Management System" that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, Rich Pictures, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Keywords: end-user application development, enterprise software design, information resource management, usability
Procedia PDF Downloads 439
411 The Trigger-DAQ System in the Mu2e Experiment
Authors: Antonio Gioiosa, Simone Doanti, Eric Flumerfelt, Luca Morescalchi, Elena Pedreschi, Gianantonio Pezzullo, Ryan A. Rivera, Franco Spinella
Abstract:
The Mu2e experiment at Fermilab aims to measure the charged-lepton flavour-violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. With the expected experimental sensitivity, Mu2e will improve the previous limit by four orders of magnitude. The Mu2e data acquisition (DAQ) system provides hardware and software to collect digitized data from the tracker, calorimeter, cosmic ray veto, and beam monitoring systems. Mu2e's trigger and data acquisition system (TDAQ) uses otsdaq as its solution. Developed at Fermilab, otsdaq uses the artdaq DAQ framework and the art analysis framework, under the hood, for event transfer, filtering, and processing. Otsdaq is an online DAQ software suite with a focus on flexibility and scalability, while providing a multi-user, web-based interface accessible through the Chrome or Firefox web browser. The detector readout controllers (ROCs) of the tracker and calorimeter stream out zero-suppressed data continuously to the data transfer controller (DTC). Data are then read over the PCIe bus to a software filter algorithm that selects events, which are finally combined with the data flux that comes from the cosmic ray veto system (CRV).
Keywords: trigger, DAQ, Mu2e, Fermilab
Procedia PDF Downloads 155
410 Molecular-Dynamics Study of H₂-C₃H₈-Hydrate Dissociation: Non-Equilibrium Analysis
Authors: Mohammad Reza Ghaani, Niall English
Abstract:
Hydrogen is looked upon as the next-generation clean-energy carrier; the search for an efficient material and method for storing hydrogen has been, and is, pursued relentlessly. Clathrate hydrates are inclusion compounds wherein guest gas molecules like hydrogen are trapped in a host water-lattice framework. These types of materials can be categorised as potentially attractive hosting environments for physical hydrogen storage (i.e., no chemical reaction upon storage). Non-equilibrium molecular dynamics (NEMD) simulations have been performed to investigate the thermally driven break-up of propane-hydrate interfaces with liquid water at 270-300 K, with the propane hydrate containing either one or no hydrogen molecule in each of its small cavities. In addition, two types of hydrate-surface water-lattice molecular termination were adopted at the hydrate edge with water: a 001-direction surface cleavage and one with completed cages. The geometric hydrate-ice-liquid distinction criteria of Báez and Clancy were employed to distinguish between the hydrate and ice lattices and the liquid phase. Consequently, the melting temperatures of the interface were estimated, and dissociation rates were observed to be strongly dependent on temperature, with higher dissociation rates at larger over-temperatures vis-à-vis melting. The different hydrate-edge terminations of the hydrate-water interface led to statistically significant differences in the observed melting point and dissociation profile: it was found that the clathrate with the planar interface melts at around 280 K, whilst the melting temperature of the cage-completed interface was determined to be circa 270 K.
Keywords: hydrogen storage, clathrate hydrate, molecular dynamics, thermal dissociation
Procedia PDF Downloads 277
409 Lean Manufacturing: Systematic Layout Planning Application to an Assembly Line Layout of a Welding Industry
Authors: Fernando Augusto Ullmann Tobe, Moacyr Amaral Domingues Figueiredo, Stephany Rie Yamamoto Gushiken
Abstract:
The purpose of this paper is to present the process of elaborating the layout of an assembly line in a welding industry using the principles of lean manufacturing as the main driver. The objective of this paper is relevant since the current layout of the assembly line causes non-productive times for operators, related to the lean waste of unnecessary movements. The methodology used for the project development was Project-Based Learning (PBL), which is an active way of learning focused on real problems. The process of selecting the methodology for layout planning considered three criteria to evaluate the most relevant one for this paper's goal. As a result of this evaluation, Systematic Layout Planning was selected, and three steps were added to it: Value Stream Mapping of the current situation and after the layout change, and the definition of lean tools and layout type. This inclusion was made to consider lean manufacturing in the layout redesign of the industry. The layout change resulted in an increase in the value-adding time of operations carried out in the sector, a reduction in movement times between previous and final assemblies, and cost savings regarding the man-hour value of the employees, which can be invested in productive hours instead of movement times.
Keywords: assembly line, layout, lean manufacturing, systematic layout planning
Procedia PDF Downloads 228
408 Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis
Authors: K. Rana, H.A.Saeed, S. Zahir
Abstract:
Inter-Laminar Shear Stress (ILSS) is a key parameter which quantifies the properties of composite materials. These properties can determine the use of a material for a specific purpose, such as aerospace or automotive applications. A modelling approach for the determination of ILSS is presented in this paper. Geometric modelling of the composite material is performed in the TexGen software, where the reinforcement, the cured matrix, and their interfaces are modelled separately as per the actual geometry. The mechanical properties of the matrix and reinforcements are modelled separately, which incorporates the anisotropy of the real-world composite material. The ASTM D2344 test is modelled in ANSYS for ILSS. In macroscopic analysis, the model approximates the anisotropy of the material and uses orthotropic properties by applying homogenization techniques. Shear stress analysis in that case does not show the actual real-world scenario but rather approximates it. In this paper, the actual geometry and properties of the reinforcement and matrix are modelled to capture the actual stress state during the testing of samples as per ASTM standards. Testing of samples is also performed in order to validate the results. The fibre volume fraction of the yarn is determined by image analysis of the manufactured samples. The fibre volume fraction data is incorporated into the numerical model for correction of the transversely isotropic properties of the yarn. A comparison between experimental and simulated results is presented.
Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis
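One common way to estimate fibre volume fraction from a cross-section micrograph is by thresholding the image into fibre and matrix pixels. The sketch below uses Otsu thresholding from scikit-image as an assumed stand-in for the (unspecified) image-analysis procedure used by the authors; the file name and the brightness convention are hypothetical.

```python
# Hedged sketch: estimate fibre volume fraction from a micrograph by Otsu thresholding.
# The image path and the assumption that fibres appear brighter than matrix are illustrative.
from skimage import io, color, filters

img = io.imread("cross_section.png")                      # hypothetical micrograph path
gray = color.rgb2gray(img) if img.ndim == 3 else img      # work on a grayscale image

threshold = filters.threshold_otsu(gray)
fibre_mask = gray > threshold                             # assumed: fibres are the brighter phase
fibre_volume_fraction = fibre_mask.mean()                 # fraction of pixels classified as fibre

print(f"Estimated fibre volume fraction: {fibre_volume_fraction:.3f}")
```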
Procedia PDF Downloads 374