Search results for: time driven activity based costing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 43113

31713 Unreliable Production Lines with Simultaneously Unbalanced Operation Time Means, Breakdown, and Repair Rates

Authors: Sabry Shaaban, Tom McNamara, Sarah Hudson

Abstract:

This paper investigates the benefits of deliberately unbalancing both operation time means (MTs) and unreliability (failure and repair rates) for non-automated production lines. The lines were simulated with various line lengths, buffer capacities, degrees of imbalance, and patterns of MT and unreliability imbalance. Data on two performance measures, namely throughput (TR) and average buffer level (ABL), were gathered, analyzed, and compared to a balanced-line counterpart. A number of conclusions were drawn with respect to the ranking of configurations, as well as to the relationships among the independent design parameters and the dependent variables. It was found that the best configurations are a balanced line arrangement and a monotone decreasing MT order coupled with either a decreasing or a bowl unreliability configuration, with the first generally resulting in a reduced TR and the second leading to a lower ABL than those of a balanced line.
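The TR/ABL trade-off can be illustrated with a toy discrete-time simulation of an unreliable serial line. This is a sketch under simplifying assumptions (geometric service and repair times, one piece per station, station 0 never starved), not the authors' simulation model; the MT values, failure/repair probabilities, and buffer capacity below are arbitrary illustrative numbers.

```python
import random

def simulate_line(mt, fail_p, repair_p, buf_cap, ticks=20000, seed=1):
    """Toy discrete-time unreliable serial line.

    Each station finishes its current piece with probability 1/mt[i] per
    tick (geometric service, mean operation time mt[i]), breaks down with
    probability fail_p and is repaired with probability repair_p. Returns
    throughput TR (pieces/tick) and average buffer level ABL."""
    rng = random.Random(seed)
    n = len(mt)
    buffers = [0] * (n - 1)          # buffer i sits between stations i, i+1
    has_piece = [False] * n
    up = [True] * n
    output, abl_sum = 0, 0.0
    for _ in range(ticks):
        for i in range(n):           # breakdown / repair transitions
            up[i] = rng.random() >= fail_p if up[i] else rng.random() < repair_p
        for i in reversed(range(n)):  # downstream first, to free buffer space
            if up[i] and has_piece[i] and rng.random() < 1.0 / mt[i]:
                if i == n - 1:
                    output += 1
                    has_piece[i] = False
                elif buffers[i] < buf_cap:   # otherwise the station is blocked
                    buffers[i] += 1
                    has_piece[i] = False
        for i in range(n):           # pull work (station 0 is never starved)
            if not has_piece[i]:
                if i == 0:
                    has_piece[0] = True
                elif buffers[i - 1] > 0:
                    buffers[i - 1] -= 1
                    has_piece[i] = True
        abl_sum += sum(buffers) / (n - 1)
    return output / ticks, abl_sum / ticks

# Balanced line vs. a monotone decreasing MT order (illustrative values)
tr_bal, abl_bal = simulate_line([2.0, 2.0, 2.0], 0.01, 0.10, buf_cap=4)
tr_dec, abl_dec = simulate_line([2.4, 2.0, 1.6], 0.01, 0.10, buf_cap=4)
```

Sweeping line length, buffer capacity, and the imbalance pattern in such a model is the kind of experiment the study reports, albeit with a far more detailed simulator.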

Keywords: unreliable production lines, unequal mean operation times, unbalanced failure and repair rates, throughput, average buffer level

Procedia PDF Downloads 470
31712 Forecasting Future Society to Explore Promising Security Technologies

Authors: Jeonghwan Jeon, Mintak Han, Youngjun Kim

Abstract:

Due to the rapid development of information and communication technology (ICT), a substantial transformation is currently underway in society. As the range of intelligent technologies and services continuously expands, ‘things’ are becoming capable of communicating with one another and even with people. However, such an “Internet of Things” has a technical weakness: a great amount of the information transferred in real time may be widely exposed to security threats. Users’ personal data are a typical example facing a serious security threat. Security threats will diversify and arise more frequently as unfamiliar next-generation technologies develop. Moreover, as society becomes increasingly complex, security vulnerability will increase as well. In the existing literature, a considerable number of private and public reports that forecast future society have been published as a precedent step toward the selection of future technologies and the establishment of strategies for competitiveness. Although there are previous studies that forecast security technology, they have focused only on technical issues and overlooked the interrelationships between security technology and social factors. Therefore, investigations of future security threats and of the security technologies able to protect people from various threats are required. In response, this study aims to derive potential security threats associated with the development of technology and to explore security technologies that can protect against them. To do this, first, private and public reports that forecast the future, as well as online documents from technology-related communities, are collected. By analyzing the data, future issues are extracted and categorized in terms of STEEP (Society, Technology, Economy, Environment, and Politics), as well as security.
Second, the components of potential security threats are developed based on the classified future issues. Then, the points at which security threats may occur (for example, a mobile payment system based on fingerprint-scan technology) are identified. Lastly, alternatives that prevent potential security threats are proposed by matching security threats with these points and investigating related security technologies in patent data. The proposed approach can identify latent ICT-related security threats and provide guidelines in a ‘problem-alternative’ form by linking threat points with security technologies.

Keywords: future society, information and communication technology, security technology, technology forecasting

Procedia PDF Downloads 452
31711 Reinforcement Learning for Self Driving Racing Car Games

Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh

Abstract:

This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. Using open-source code on GitHub, the environment used in this research is based on the 1995 racing game WipeOut. The proposed agent navigates challenging tracks rapidly while maintaining a low race-completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps more quickly than the single-car model but has a higher collision rate with the track wall.
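The full DQN and SAC agents operate on a rich WipeOut-style simulator. As a minimal, self-contained illustration of the value-learning loop underlying DQN, here is tabular Q-learning (DQN's tabular ancestor) on a hypothetical toy track where the agent steers laterally to stay on a road whose centre shifts each step; every environment detail below is invented for the sketch.

```python
import random

# Toy track: road centre per forward step; the car must keep |pos - centre| <= 1
CENTERS = [2, 3, 4, 4, 3, 2, 1, 0, 0, 1, 2, 2]
ACTIONS = (-1, 0, 1)          # steer left, straight, right

def step(t, p, a):
    """Deterministic toy dynamics: move laterally; +1 on road, -5 off (crash)."""
    p2 = min(4, max(0, p + a))
    return p2, (1.0 if abs(p2 - CENTERS[t]) <= 1 else -5.0)

def train(episodes=5000, alpha=0.5, gamma=0.9, eps=0.3, seed=7):
    """Epsilon-greedy tabular Q-learning over (step, lateral position) states."""
    rng = random.Random(seed)
    Q = {(t, p, a): 0.0 for t in range(len(CENTERS))
         for p in range(5) for a in ACTIONS}
    for _ in range(episodes):
        p = 2
        for t in range(len(CENTERS)):
            if rng.random() < eps:                       # explore
                a = rng.choice(ACTIONS)
            else:                                        # exploit
                a = max(ACTIONS, key=lambda x: Q[(t, p, x)])
            p2, r = step(t, p, a)
            nxt = 0.0 if t == len(CENTERS) - 1 else max(
                Q[(t + 1, p2, x)] for x in ACTIONS)
            Q[(t, p, a)] += alpha * (r + gamma * nxt - Q[(t, p, a)])
            p = p2
    return Q

def rollout(Q):
    """Greedy lap; returns total reward and number of off-road steps."""
    p, total, collisions = 2, 0.0, 0
    for t in range(len(CENTERS)):
        a = max(ACTIONS, key=lambda x: Q[(t, p, x)])
        p, r = step(t, p, a)
        total += r
        collisions += r < 0
    return total, collisions

Q = train()
total, collisions = rollout(Q)
```

A DQN replaces the Q table with a neural network trained from a replay buffer, and SAC replaces the max-operator policy with a stochastic actor, but the temporal-difference update at the core is the same.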

Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming

Procedia PDF Downloads 27
31710 Adsorptive Desulfurization Using Cu(I)-Y Zeolite via π-Complexation

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng, Itumeleng Kohitlhetse

Abstract:

The accelerating requirement to reach 0% sulfur content in liquid fuels demands that researchers seek efficient alternative technologies to address the problem. In this study, the adsorption capabilities of modified Cu(I)-Y zeolite were tested for the removal of organosulfur compounds (OSC) present in tire pyrolytic oil (TPO). The π-complexation-based adsorbent was obtained by ion exchanging Y-zeolite with the Cu⁺ cation using liquid-phase ion exchange (LPIE). Preparation of the adsorbent involved, first, ion exchange of Na-Y zeolite with a 0.5 M aqueous Cu(NO₃)₂ solution for 48 hours, followed by reduction of Cu²⁺ to Cu⁺. Fixed-bed breakthrough studies of TPO, compared with a model diesel containing sulfur compounds such as thiophene, benzothiophene (BT), and dibenzothiophene (DBT), showed that modified Cu(I)-Y zeolite is an effective adsorbent for the removal of OSC from liquid fuels. The effects of operating conditions such as adsorbent dosage and reaction time were studied to optimize the adsorptive desulfurization process. For the model diesel fuel, the selectivity for adsorption of sulfur compounds followed the order DBT > BT > thiophene. The Cu(I)-Y zeolite is fully regenerable; this is achieved by a simple procedure of blowing air through the adsorbent at 350 °C, followed by reactivation at 450 °C in a helium-rich atmosphere.

Keywords: adsorption, desulfurization, TPO, zeolite

Procedia PDF Downloads 97
31709 The Use of Coronary Calcium Scanning for Cholesterol Assessment and Management

Authors: Eva Kirzner

Abstract:

Based on outcome studies published over the past two decades, the ACC/AHA published new guidelines in 2018 for the management of hypercholesterolemia that incorporate coronary artery calcium (CAC) scanning as a decision tool for ascertaining which patients may benefit from statin therapy. This use rests on the recognition that the absence of calcium on CAC scanning (i.e., a CAC score of zero) usually signifies the absence of significant atherosclerotic deposits in the coronary arteries. Specifically, in patients at high risk for atherosclerotic cardiovascular disease (ASCVD), initiation of statin therapy is generally recommended to decrease ASCVD risk; among patients at intermediate ASCVD risk, however, the need for statin therapy is less certain. There is therefore a need for new outcome studies providing evidence that management of hypercholesterolemia based on these new ACC/AHA recommendations is safe for patients. Based on a PubMed and Google Scholar literature search, four relevant population- or patient-based cohort studies published between 2017 and 2021 were identified that examined the relationship between CAC scanning, risk assessment or mortality, and statin therapy. In each of these studies, patients were assessed for their baseline ASCVD risk using the Pooled Cohort Equations (PCE), an ACC/AHA calculator that estimates patient risk from age, gender, ethnicity, and coronary artery disease risk factors. The combined findings of these four studies provide concordant evidence that a zero CAC score defines patients who remain at low clinical risk despite the non-use of statin therapy. Thus, these new studies support the use of CAC scanning as a safe tool for reducing the potential overuse of statin therapy among patients with zero CAC scores.
Incorporating these new data suggests the following best practice: (1) ascertain ASCVD risk according to the PCE in all patients; (2) among patients with elevated ASCVD risk, after an initial trial of lowering risk with an optimal diet, initiate statin therapy for those with a high ASCVD risk score; (3) if the ASCVD score is intermediate, refer the patient for CAC scanning; and (4) if the CAC score is zero in an intermediate-risk patient, statin therapy can be safely withheld despite the presence of an elevated serum cholesterol level.
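The four-step practice pattern can be sketched as a small decision function. The risk bands used below (low < 5%, high ≥ 20%, everything between treated as borderline/intermediate) are the conventional ACC/AHA cut-points, adopted here as assumptions of the sketch rather than values stated in the abstract, and the dietary-trial step is deliberately omitted for brevity.

```python
def statin_recommendation(ascvd_risk, cac_score=None):
    """ascvd_risk: 10-year ASCVD risk from the PCE as a fraction (0.10 = 10%).
    cac_score: Agatston score if a CAC scan has been done, else None."""
    if ascvd_risk >= 0.20:            # high risk
        return "initiate statin"
    if ascvd_risk < 0.05:             # low risk
        return "no statin indicated"
    if cac_score is None:             # borderline/intermediate risk, no scan yet
        return "refer for CAC scan"
    if cac_score == 0:                # zero CAC: low clinical risk
        return "statin can be safely withheld"
    return "initiate statin"
```

For example, an intermediate-risk patient (10-year risk 10%) is referred for a scan, and a zero score then withholds the statin.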

Keywords: cholesterol, cardiovascular disease, statin therapy, coronary calcium

Procedia PDF Downloads 100
31708 Housing First, Not Housing Only: The Life Skills Project

Authors: Sara Cumming, Julianne DiSanto, Leah Burton

Abstract:

Homelessness in Canada is a persistent problem. It has been widely argued that the best tactic for eradicating homelessness is to approach social issues from a Housing First perspective, an approach that centers on quickly moving people into permanent and independent housing and then providing additional support and services as needed. It is recognized that life skills training is both necessary and an effective way to reduce cyclical homelessness; however, there is a scarcity of research on effective ways to teach life skills. This problem was exacerbated in a pandemic context, where in-person delivery was severely restricted or no longer possible. Very little attention has been paid to the diverse cultural needs of clients in a multicultural context and to the need to foster cultural knowledge and awareness in individuals to successfully contribute to the cultural safety of communities. This research attempts to fill these gaps in the literature and in practice by employing a community-engaged research (CER) approach. Academics, government, funders, front-line staff, and clients at 15 not-for-profits from across the Greater Toronto Area in Ontario, Canada, collaborated to co-create a virtual, client-centric, equity, diversity, and inclusion (EDI)-informed life-skills learning management system. We employed a triangulation methodology for this research. An environmental scan was conducted for best practices. Two separate creative problem-solving sessions were held with over 100 front-line workers, managers, and executive directors who work with homeless populations. Quantitative and open-ended surveys were completed by over 200 individuals with experience of homelessness. All sections of this research aimed to discover the skill areas that individuals need to maintain housing and to ascertain what a more client-driven, EDI-informed approach to life skills training should include.
This research will showcase which life skills are deemed essential for homeless and precariously housed individuals.

Keywords: homelessness, Housing First, life skills, community engaged research

Procedia PDF Downloads 54
31707 Formation of Chemical Compound Layer at the Interface of Initial Substances A and B with Dominance of Diffusion of the A Atoms

Authors: Pavlo Selyshchev, Samuel Akintunde

Abstract:

A theoretical approach is developed to consider the formation of a chemical compound layer at the interface between initial substances A and B due to interfacial interaction and diffusion. We consider the situation in which the rate of interfacial interaction is sufficiently high and the diffusion of A-atoms through the AB-layer is much greater than that of B-atoms. Atoms from the A-layer diffuse toward the B-atoms and form AB-atoms on the surface of the B-layer. B-atoms are assumed to be immobile. The growth kinetics of the AB-layer is described by two differential equations with non-linear coupling, producing a good fit to the experimental data. It is shown that the growth of the AB-layer thickness is determined by the dependence of the chemical reaction rate on the reactant concentrations. In special cases, the thickness of the AB-layer can grow linearly or parabolically, depending on which process (interaction or diffusion) controls the growth. The thickness of the AB-layer as a function of time is obtained. The moment in time (transition point) at which linear growth gives way to parabolic growth is found.
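The two limiting regimes follow from a standard rate balance. Writing x(t) for the AB-layer thickness, with k_r a reaction-rate constant and k_d an effective diffusion constant (symbols introduced here for illustration, not taken from the paper):

```latex
% Reaction-controlled (interfacial) regime: supply of A is not limiting
\frac{dx}{dt} = k_r \quad\Longrightarrow\quad x(t) = k_r t \quad \text{(linear growth)}

% Diffusion-controlled regime: the flux of A through the layer falls as 1/x
\frac{dx}{dt} = \frac{k_d}{x} \quad\Longrightarrow\quad x(t) = \sqrt{2 k_d t} \quad \text{(parabolic growth)}

% Transition point: the two rates coincide when
k_r = \frac{k_d}{x^{*}} \quad\Longrightarrow\quad x^{*} = \frac{k_d}{k_r}, \qquad t^{*} = \frac{k_d}{k_r^{2}}
```

Early on the thin layer transmits A-atoms faster than the interface can consume them, so the reaction controls growth; once the layer thickens past x*, diffusion through it becomes the bottleneck and growth turns parabolic.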

Keywords: phase formation, binary systems, interfacial reaction, diffusion, compound layers, growth kinetics

Procedia PDF Downloads 559
31706 Microstructure Characterization on Silicon Carbide Formation from Natural Wood

Authors: Noor Leha Abdul Rahman, Koay Mei Hyie, Anizah Kalam, Husna Elias, Teng Wang Dung

Abstract:

Dark Red Meranti and Kapur, two important types of wood in Malaysia, were used as precursors to fabricate porous silicon carbide. A carbon template was produced by pyrolysis at 850°C in an oxygen-free atmosphere. The carbon template was then subjected to infiltration with silicon by the silicon melt infiltration method. The infiltration process was carried out in a tube furnace under flowing argon at 1500°C at two different holding times, 2 hours and 3 hours. Thermogravimetric analysis was performed to investigate the decomposition behavior of the two wood species. The resulting silicon carbide was characterized by XRD, which confirmed the formation of silicon carbide as well as excess silicon. The microstructure was characterized by scanning electron microscopy (SEM), and the density was determined by the Archimedes method. An increase in holding time during infiltration increased the density as well as the formation of silicon carbide. The Dark Red Meranti precursor is more suitable for the production of silicon carbide than Kapur.

Keywords: density, SEM, silicon carbide, XRD

Procedia PDF Downloads 409
31705 Research on Attractive Flowered Natural Woody Plants of Bursa Flora in Terms of Landscape Design

Authors: Elvan Ender, Murat Zencirkıran

Abstract:

One of the most important criteria that increase the success of a design in landscape architecture is visual effect. The characteristics that affect visual appearance in plant design vary depending on the phenological periods of the plants. Although different effects are observed in different periods of the year, this effect is felt most prominently during flowering periods. For this reason, knowing the flowering time, duration, and flower characteristics should be considered a factor that increases the success of plant design. In this study, the flower characteristics of natural woody plants with attractive flowers have been examined. Because these plant characteristics vary across the region, considering these criteria in planting design processes may increase the success of the design. At the same time, when species selection is made in light of the obtained data, the visual appeal and sustainability of natural species can be achieved in the city of Bursa through planting design.

Keywords: Bursa, flower characteristics, natural plants, planting design

Procedia PDF Downloads 254
31704 Pd Supported on Activated Carbon: Effect of Support Texture on the Dispersion of Pd

Authors: Ji Sun Kim, Jae Ho Baek, Kyeong Ho Kim, Ji Hae Ha, Seong Soo Hong, Jung-Wook Park, Man Sig Lee

Abstract:

Carbon-supported palladium catalysts have been used in many industrial reactions, especially for hydrogenation in the fine chemical industry. Porous carbons have been widely used as catalyst supports due to their higher surface area and larger pore volume. The specific surface area, pore structure, and surface chemical functional groups of porous carbon affect metal dispersion and particle size. In this paper, we confirm the effect of support texture on the dispersion of Pd. Pd catalysts supported on activated carbons with various specific surface areas were characterized by BET, XRD, and FE-TEM. The activity and dispersion of the prepared catalysts were evaluated on the basis of CO adsorption capacity by CO chemisorption. As a concluding remark to this part of our study, we note that the specific surface area of the carbon plays an important role in the synthesis of Pd/C catalysts.
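Metal dispersion is conventionally computed from the chemisorbed CO uptake and the metal loading. The sketch below assumes a 1:1 CO:Pd adsorption stoichiometry and uses invented uptake/loading numbers; both the stoichiometric factor and the example values are assumptions for illustration, not data from this study.

```python
PD_MOLAR_MASS = 106.42       # g/mol
STP_MOLAR_VOLUME = 22414.0   # mL/mol

def pd_dispersion(co_uptake_ml_per_g, pd_weight_fraction, stoichiometry=1.0):
    """Fraction of Pd atoms exposed, from chemisorbed CO uptake (mL STP per g)."""
    n_co = co_uptake_ml_per_g / STP_MOLAR_VOLUME    # mol CO per g catalyst
    n_pd = pd_weight_fraction / PD_MOLAR_MASS       # mol Pd per g catalyst
    return stoichiometry * n_co / n_pd

# e.g. a hypothetical 5 wt% Pd/C sample chemisorbing 0.5 mL STP CO per gram
d = pd_dispersion(0.5, 0.05)
print(f"{d:.1%}")   # → 4.7%
```

Higher-surface-area supports that disperse the same loading into smaller particles raise this fraction, which is why CO uptake serves as the dispersion proxy in the study.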

Keywords: carbon, dispersion, Pd/C, specific area, support

Procedia PDF Downloads 344
31703 Clinical Trial of VEUPLEXᵀᴹ TBI Assay to Help Diagnose Traumatic Brain Injury by Quantifying Glial Fibrillary Acidic Protein and Ubiquitin Carboxy-Terminal Hydrolase L1 in the Serum of Patients Suspected of Mild TBI by Fluorescence Immunoassay

Authors: Moon Jung Kim, Guil Rhim

Abstract:

The clinical sensitivity of the VEUPLEXᵀᴹ TBI assay, a clinical trial medical device, for mild traumatic brain injury was 28.6% (95% CI: 19.7%-37.5%), and the clinical specificity was 94.0% (95% CI: 89.3%-98.7%). In addition, when the results were analyzed by marker, sensitivity was higher when the two tests were interpreted together than when either marker, UCHL1 or GFAP, was used alone. When sensitivity and specificity were analyzed against CT results for the mild traumatic brain injury group, the clinical sensitivity for the 2 CT-positive cases was 50.0% (95% CI: 1.3%-98.7%), and the clinical specificity for the 19 CT-negative cases was 68.4% (95% CI: 43.5%-87.4%). Because the low clinical sensitivity for the two CT-positive cases was not statistically meaningful given the very small number of samples analyzed, it was judged necessary to secure and analyze more samples in the future. Regarding the clinical specificity results for the 19 CT-negative cases, many patients who were clinically diagnosed with mild traumatic brain injury nevertheless had negative CT results, and about 31.6% of them showed abnormal results on the VEUPLEXᵀᴹ TBI assay. Although traumatic brain injury could not be detected on CT in these cases, the possibility of an actual mild brain injury could not be ruled out, and it was judged that this could be confirmed through follow-up observation of the patients. In addition, among patients with mild traumatic brain injury, CT examinations were often not performed because symptoms were very mild; yet about 25% or more of these patients showed abnormal results on the VEUPLEXᵀᴹ TBI assay. In fact, no damage is observed with the naked eye immediately after traumatic brain injury, and traumatic brain injury is not always observed on CT.
In some cases, however, brain hemorrhage may occur after a delay (delayed cerebral hemorrhage), so patients who showed abnormal results on the VEUPLEXᵀᴹ TBI assay should be followed up for delayed cerebral hemorrhage. In conclusion, it was judged difficult to diagnose mild traumatic brain injury with the VEUPLEXᵀᴹ TBI assay based on clinical findings alone, that is, on the GCS value, without CT results. Even CT does not detect all mild traumatic brain injuries, so the absence of evidence of injury on CT does not necessarily mean there is no traumatic brain injury. In the long term, more patients should be included to evaluate the usefulness of the VEUPLEXᵀᴹ TBI assay in detecting microscopic traumatic brain injuries without using CT.
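The sensitivity and specificity figures above are binomial proportions with 95% confidence intervals; a minimal sketch of the calculation follows. The cell counts used here (28 of 98 mild-TBI patients testing positive) are illustrative values chosen to reproduce the reported 28.6% sensitivity, since the trial's actual counts are not given above, and the interval shown is the simple Wald (normal-approximation) form.

```python
import math

def prop_with_ci(successes, n, z=1.96):
    """Point estimate and Wald (normal-approximation) 95% CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative counts: 28 of 98 mild-TBI patients testing positive
sens, lo, hi = prop_with_ci(28, 98)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

For very small cells such as the 2 CT-positive cases, the Wald approximation breaks down; exact (Clopper-Pearson) intervals, of the kind that yield the very wide 1.3%-98.7% range quoted above, are appropriate instead.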

Keywords: brain injury, traumatic brain injury, GFAP, UCHL1

Procedia PDF Downloads 72
31702 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network

Authors: Yuntao Liu, Lei Wang, Haoran Xia

Abstract:

Machine learning has shown extensive application in the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fusion multi-modal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, and these build the node and edge embedding representations of the brain map. Then, we construct a dynamically updated brain map neural network and propose a method based on a dynamic brain map adjacency matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, we reached a prediction accuracy of 74% in distinguishing ASD from typically developing (TD) subjects. In addition, to study biomarkers that can help doctors analyze the disease and to improve interpretability, we extracted the ROIs with the five largest and five smallest weights. This work provides a meaningful way for brain disorder identification.
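At the core of such a model is the graph-convolution update H' = ReLU(D̂^(-1/2) Â D̂^(-1/2) H W) over the 116-node brain graph (the Kipf-Welling propagation rule). The sketch below substitutes random data for real sMRI/fMRI features, and the feature and hidden dimensions are arbitrary choices; it illustrates one propagation layer, not the paper's full dynamically updated network.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One Kipf-Welling graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)          # ReLU activation

rng = np.random.default_rng(0)
n_roi, n_feat, n_hidden = 116, 8, 16                # 116 AAL regions of interest
A = (rng.random((n_roi, n_roi)) > 0.9).astype(float)
A = np.maximum(A, A.T)                              # undirected brain graph
H = rng.normal(size=(n_roi, n_feat))                # stand-in node features
W = rng.normal(size=(n_feat, n_hidden))             # learnable weights
out = gcn_layer(A, H, W)
```

In the paper's setting, A would itself be learnable and updated dynamically rather than fixed, and stacked layers would feed a classification head over the pooled node embeddings.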

Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability

Procedia PDF Downloads 44
31701 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as water pipeline and hydraulic or petrochemical pipeline network leakages and bursts have significant implications for economic conditions and the environment. To ensure pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful tool in critical infrastructure monitoring systems for water, oil, and gas pipelines. The loss of water, oil, and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil, and gas supply networks. These methodologies include, among others, the use of acoustic sensors, measurements, and abrupt statistical analysis. The issue of leak quantification is to estimate, given some observations about the network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not very reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. The pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to locate a leak locally introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network.
After de-noising the signal from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
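A common way to implement the correlation idea is to cross-correlate the signals from two sensors that bracket the leak: the lag of the correlation peak gives the arrival-time difference, which converts to a position along the pipe. The wave speed, sensor spacing, and sampling rate below are hypothetical, and the "leak" is synthetic broadband noise; this illustrates the principle rather than the authors' PDM-based model.

```python
import numpy as np

# Hypothetical parameters: speed, spacing, and sampling rate are assumptions
fs = 5000.0      # sampling rate, Hz
c = 1000.0       # acoustic propagation speed along the pipe, m/s
L = 100.0        # distance between the two sensor nodes, m
d_true = 30.0    # leak position measured from sensor 1, m (ground truth)

# Synthetic broadband leak noise, reaching the two sensors with a
# time-of-flight difference of (L - 2*d_true)/c seconds
rng = np.random.default_rng(42)
n = 5000
src = rng.normal(size=n)
delay = int(round((L - 2.0 * d_true) / c * fs))      # 200 samples
x1 = src
x2 = np.concatenate([np.zeros(delay), src])[:n]      # sensor 2 hears it later

# Lag of the cross-correlation peak -> arrival-time difference -> position
xc = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
tau = lag / fs
d_est = (L - c * tau) / 2.0
print(d_est)   # → 30.0
```

De-noising before correlating, as described above, sharpens the peak; with clean synthetic signals the recovered position matches the ground truth exactly.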

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 80
31700 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in a clinical MRI setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates to the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, giving proper guidelines to MRI technologists is important so that they do not change parameters that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and may vary among operators, as well as for the same operator at different times. Overcoming these constraints is therefore essential for a more impartial evaluation of quality, which makes the quantitative estimation of image quality (IQ) metrics for MRI quality control very important. To solve this problem, we propose a robust, open-source, automated MRI image quality control tool. The designed and developed automatic analysis tool for measuring MRI IQ metrics (Signal-to-Noise Ratio (SNR), Signal-to-Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), gray-level co-occurrence matrix (GLCM) texture measures, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution) provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
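The simplest of these metrics, SNR, can be sketched from two regions of interest on a phantom image: mean signal in a uniform ROI divided by the noise standard deviation in a background (air) ROI, with the ~0.655 Rayleigh correction often applied when noise is measured in the background of a magnitude image. The synthetic image and ROI positions below are stand-ins for real ACR phantom data, not part of the tool described above.

```python
import numpy as np

def roi_snr(img, signal_roi, noise_roi, rayleigh=0.655):
    """SNR from a uniform signal ROI and a background (air) noise ROI.

    ROIs are tuples of slices. The ~0.655 factor corrects for the Rayleigh
    distribution of background noise in a magnitude image."""
    signal = img[signal_roi].mean()
    noise_sd = img[noise_roi].std(ddof=1)
    return rayleigh * signal / noise_sd

# Synthetic stand-in for an ACR phantom slice: flat signal plus air noise
rng = np.random.default_rng(3)
img = np.zeros((128, 128))
img[30:90, 30:90] = 200.0                            # uniform phantom region
img[0:20, :] = rng.normal(0.0, 2.0, size=(20, 128))  # background noise rows
snr = roi_snr(img, (slice(40, 80), slice(40, 80)), (slice(0, 20), slice(None)))
```

Here the expected value is roughly 0.655 × 200 / 2 ≈ 65.5; tracking such a number over repeated phantom scans is how parameter drift shows up in a QC report.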

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 146
31699 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies

Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar

Abstract:

Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimisation in microfluidic channels extends significantly into biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimised, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. It mitigates manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analysing diseases.
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.
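The RSM workflow described above amounts to fitting a second-order model y = β₀ + β₁w + β₂l + β₁₁w² + β₂₂l² + β₁₂wl to designed experiments and solving for the stationary point. The sketch below does this by ordinary least squares on synthetic "flow time" data with a known optimum at channel width 2 and length 3 (arbitrary units invented for illustration; a real study would use Minitab and a proper central composite design rather than random sampling).

```python
import numpy as np

# Synthetic "flow time" response with a known optimum at width=2, length=3
rng = np.random.default_rng(1)
w = rng.uniform(0.5, 3.5, 40)    # channel width (arbitrary units)
l = rng.uniform(1.0, 5.0, 40)    # channel length (arbitrary units)
y = 5.0 + (w - 2.0) ** 2 + 0.5 * (l - 3.0) ** 2 + rng.normal(0.0, 0.01, 40)

# Fit the full second-order RSM model by ordinary least squares
X = np.column_stack([np.ones_like(w), w, l, w * w, l * l, w * l])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set both partial derivatives of the fitted surface to zero
A = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])
w_opt, l_opt = np.linalg.solve(A, np.array([-b1, -b2]))
```

Because both parameters are fitted jointly, the cross-term b12 captures exactly the kind of interaction between width and length that one-factor-at-a-time tuning misses.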

Keywords: microfluidic device, minitab, statistical optimization, response surface methodology

Procedia PDF Downloads 42
31698 Optimisation of Intermodal Transport Chain of Supermarkets on Isle of Wight, UK

Authors: Jingya Liu, Yue Wu, Jiabin Luo

Abstract:

This work investigates an intermodal transportation system for delivering goods from a regional distribution centre to supermarkets on the Isle of Wight (IOW) via the port of Southampton or Portsmouth in the UK. We consider this integrated logistics chain as a 3-echelon transportation system. In such a system, two types of transport methods are used to deliver goods across the Solent Channel: one is accompanied transport, used by most supermarkets on the IOW, such as Spar, Lidl, and Co-operative Food; the other is unaccompanied transport, used by Aldi. Five transport scenarios are studied based on different transport modes and ferry routes. The aim is to determine an optimal delivery plan for supermarkets of different business scales on the IOW, in order to minimise the total running cost, fuel consumption, and carbon emissions. The problem is modelled as a vehicle routing problem with time windows and solved by a genetic algorithm. The computational results suggest that accompanied transport is more cost-efficient for small and medium business-scale supermarket chains on the IOW, while unaccompanied transport has the potential to improve the efficiency and effectiveness of large business-scale supermarket chains.
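The genetic-algorithm component can be illustrated on a stripped-down routing problem: permutations of delivery stops evolved by order crossover and swap mutation under a distance objective. A real VRPTW solver adds time windows, vehicle capacities, and the ferry-leg costs; the instance below is a toy with invented coordinates.

```python
import math
import random

def route_cost(route, pts, depot=(0.0, 0.0)):
    """Total distance of depot -> stops (in route order) -> depot."""
    path = [depot] + [pts[i] for i in route] + [depot]
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def order_crossover(p1, p2, rng):
    """OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in child)
    return [c if c is not None else next(fill) for c in child]

def ga_route(pts, pop_size=60, gens=150, mut=0.2, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda r: route_cost(r, pts))
        elite = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            c = order_crossover(*rng.sample(elite, 2), rng)
            if rng.random() < mut:                  # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = min(pop, key=lambda r: route_cost(r, pts))
    return best, route_cost(best, pts)

# Four stops on a straight line; the optimal out-and-back tour costs 8.0
pts = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
best, cost = ga_route(pts)
```

Permutation encodings need crossover operators like OX that preserve validity, which is why the paper's VRPTW formulation pairs a genetic algorithm with route-aware operators rather than plain bit-string recombination.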

Keywords: genetic algorithm, intermodal transport system, Isle of Wight, optimization, supermarket

Procedia PDF Downloads 356
31697 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and the assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 141
31696 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction

Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman

Abstract:

Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.

Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation

Procedia PDF Downloads 76
31695 Dyeing of Polyester/Cotton Blends with Reverse-Micelle Encapsulated High Energy Disperse/Reactive Dye Mixture

Authors: Chi-Wai Kan, Yanming Wang, Alan Yiu-Lun Tang, Cheng-Hao Lee

Abstract:

Dyeing of polyester/cotton blend fabrics with various polyester/cotton percentages (32/68, 40/60 and 65/35) was investigated using poly(ethylene glycol) (PEG)-based reverse micelles. High-energy disperse dyes and warm-type reactive dyes were encapsulated and applied to polyester/cotton blend fabrics in a one-bath, one-step dyeing process. Reverse-micellar and aqueous (water-based) dyeing were compared in terms of colour reflectance. Experimental findings revealed that the colour shade of fabrics dyed in the reverse-micellar non-aqueous system at the lower dyeing temperature of 98°C is slightly lighter than that of the conventional aqueous two-step process (130°C for disperse dyeing and 70°C for reactive dyeing). The dye exhaustion of the polyester/cotton blend fabrics, assessed in terms of colour reflectance, fluctuated considerably at the dyeing temperature of 98°C.

Keywords: one-bath dyeing, polyester/cotton blends, disperse/reactive dyes, reverse micelle

Procedia PDF Downloads 137
31694 “Double Layer” Theory of Hydrogenation

Authors: Vaclav Heral

Abstract:

Ideas about the mechanism of heterogeneous catalytic hydrogenation are diverse. The Horiuti-Polanyi mechanism, based on the idea of a semi-hydrogenated state, is the one most often cited. In our opinion, it does not satisfactorily explain the hydrogenation mechanism, because, for example: (1) it neglects the fact that the bond of atomic hydrogen to the metal surface is strongly polarized; (2) it does not explain why a surface deprived of atomic hydrogen (by thermal desorption or by an alkyne) loses its isomerization capability while its hydrogenation capability is preserved; (3) it was observed that during the hydrogenation of 1-alkenes, the reaction can be of zeroth order in hydrogen and in the alkene at the same time, which is incompatible with competitive adsorption of both reactants on the catalyst surface. We offer an alternative mechanism that satisfactorily explains many of these ambiguities: olefin isomerization, catalyzed by acidic atomic hydrogen bonded to the surface of the catalyst, proceeds independently of the hydrogenation itself, in which a two-layer complex appears on the catalyst surface: olefin bound to the surface and molecular hydrogen bound to it in a second layer. The rate-determining step of hydrogenation is the conversion of this complex into the final product. We believe that the Horiuti-Polanyi mechanism is flawed and that our two-layer theory better describes the experimental findings.

Keywords: acidity of hydrogenation catalyst, Horiuti-Polanyi, hydrogenation, two-layer hydrogenation

Procedia PDF Downloads 56
31693 Recurrent Patterns of Netspeak among Selected Nigerians on WhatsApp Platform: A Quest for Standardisation

Authors: Lily Chimuanya, Esther Ajiboye, Emmanuel Uba

Abstract:

One of the consequences of online communication is the birth of new orthography genres characterised by novel conventions of abbreviation and acronyms, usually referred to as Netspeak. Netspeak, also known as internet slang, is a style of writing used mainly in online communication to limit the length of text and to save time. The aim of this study is to evaluate how second-language users of English have internalised this new convention of writing, identify the recurrent patterns of Netspeak, and assess the consistency of the identified patterns in relation to their meanings. The study is corpus-based, and data drawn from WhatsApp chat pages of selected groups of Nigerian English speakers show a large number of inconsistencies between the patterns of Netspeak and their meanings. The study argues that rather than emphasise the negative impact of Netspeak on the communicative competence of second-language users, studies should focus on suggesting models as yardsticks for standardising the usage of Netspeak, and indeed of all other emerging language conventions resulting from online communication. This stance stems from the inevitable global language transformation that is imminent with the coming of age of information technology.
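One way to read the study's call for standardisation is as a canonical mapping from recurrent variant spellings to a single agreed form. The sketch below is a purely hypothetical illustration; the variant forms and canonical expansions are invented, not drawn from the study's corpus.

```python
# Hypothetical normalisation table: several recurrent Netspeak variants map
# to one canonical expansion. All entries are invented examples.
canonical = {
    "b4": "before",
    "gr8": "great",
    "2moro": "tomorrow", "2mrw": "tomorrow",
    "u": "you",
}

def standardise(text):
    """Replace any known Netspeak token with its canonical form."""
    return " ".join(canonical.get(tok.lower(), tok) for tok in text.split())

print(standardise("gr8 to c u b4 noon"))   # "great to c you before noon"
```

A real standardisation model would of course need to handle punctuation, ambiguity ("2" as "to" vs. "two"), and context, which a flat lookup table cannot.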

Keywords: abbreviation, acronyms, Netspeak, online communication, standardisation

Procedia PDF Downloads 374
31692 Detection of Cardiac Arrhythmia Using Principal Component Analysis and Xgboost Model

Authors: Sujay Kotwale, Ramasubba Reddy M.

Abstract:

Electrocardiogram (ECG) is a non-invasive technique used to study and analyze various heart diseases. Cardiac arrhythmia is a serious heart disease which can lead to the death of the patient when left untreated. Early detection of cardiac arrhythmia would help doctors provide proper treatment. In the past, various algorithms and machine learning (ML) models were used for early detection of cardiac arrhythmia, but few of them achieved good results. In order to improve performance, this paper combines principal component analysis (PCA) with an XGBoost model. PCA was applied to the raw ECG signals to suppress redundant information and extract significant features. The resulting ECG features were fed into the XGBoost model, and the performance of the model was evaluated. To validate the proposed technique, raw ECG signals from the standard MIT-BIH database were employed for the analysis. The results show that the performance of the proposed method is superior to several state-of-the-art techniques.
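A minimal sketch of the PCA feature-reduction step described above, using SVD on centred data. The toy signal matrix and the choice of 8 components are illustrative assumptions; the classification stage is only indicated afterwards, since the paper's exact pipeline is not given here.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD of centred data."""
    Xc = X - X.mean(axis=0)             # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T     # scores in the reduced space

# Toy stand-in for beat-aligned ECG segments (rows = beats, cols = samples).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
X[:, :5] *= 5.0                         # give a few directions larger variance

Z = pca_reduce(X, n_components=8)       # compact feature vectors
print(Z.shape)                          # (100, 8)
```

In the paper's pipeline, the reduced features `Z` would then be passed to an XGBoost classifier (e.g. `xgboost.XGBClassifier`) trained on labelled MIT-BIH beats.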

Keywords: cardiac arrhythmia, electrocardiogram, principal component analysis, XGBoost

Procedia PDF Downloads 98
31691 HD-WSComp: Hypergraph Decomposition for Web Services Composition Based on QoS

Authors: Samah Benmerbi, Kamal Amroun, Abdelkamel Tari

Abstract:

The increasing number of Web service (WS) providers across the globe has produced numerous Web services offering the same or similar functionality. There is therefore a need for tools that construct the best answer to a query by selecting and composing services with total transparency. This paper reviews various QoS-based Web service selection mechanisms and architectures that facilitate qualitatively optimal selection. Web service composition is required when a request cannot be fulfilled by a single Web service; in such cases, it is preferable to integrate existing Web services to satisfy the user's request. We introduce an automatic Web service composition method based on hypergraph decomposition using the hypertree decomposition method. The problem of selecting and composing Web services is transformed into resolution over a hypertree, exploring the dependency relations between Web services to obtain a composite Web service via an execution order of services that satisfies the global request.
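The selection-plus-composition idea can be illustrated with a much simpler stand-in than hypertree decomposition: greedily chaining services whose inputs are already available, preferring higher QoS. The service names and QoS scores below are invented for illustration and do not come from the paper.

```python
# Each service maps a set of inputs to outputs with a QoS score (higher = better).
services = {
    "geocode":  {"in": {"address"}, "out": {"coords"},   "qos": 0.9},
    "weather":  {"in": {"coords"},  "out": {"forecast"}, "qos": 0.8},
    "weather2": {"in": {"coords"},  "out": {"forecast"}, "qos": 0.5},
}

def compose(available, goal, services):
    """Greedily chain invocable services until the goal output is producible."""
    have, plan = set(available), []
    while goal not in have:
        # Services whose inputs we already have and that add something new.
        candidates = [(n, s) for n, s in services.items()
                      if s["in"] <= have and not (s["out"] <= have)]
        if not candidates:
            return None                 # request cannot be fulfilled
        name, best = max(candidates, key=lambda ns: ns[1]["qos"])
        plan.append(name)
        have |= best["out"]
    return plan

print(compose({"address"}, "forecast", services))  # ['geocode', 'weather']
```

The actual method resolves these dependency relations over a hypertree rather than greedily, which matters when local QoS choices conflict with the globally best composition.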

Keywords: web service, web service selection, web service composition, QoS, hypergraph decomposition, BE hypergraph decomposition, hypertree resolution

Procedia PDF Downloads 496
31690 An Algorithm to Depreciate the Energy Utilization Using a Bio-Inspired Method in Wireless Sensor Network

Authors: Navdeep Singh Randhawa, Shally Sharma

Abstract:

Wireless Sensor Networks (WSNs) are an autonomous technology emerging at a fast pace. This technology faces a number of challenges, and energy management is one of them, with a huge impact on network lifetime. To conserve energy, different types of routing protocols have been developed. Classical routing protocols are no longer adequate in complicated environments. Hence, in the field of routing, intelligent algorithms inspired by natural systems are a turning point for Wireless Sensor Networks. These nature-based algorithms are efficient at handling the challenges of WSNs, as they are capable of finding locally and globally optimal solutions in complex environments. The main aim of this paper is therefore to develop a routing algorithm based on a swarm intelligence technique to enhance the performance of Wireless Sensor Networks.
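As a hedged sketch of how a swarm-based optimizer searches for an energy-minimal configuration, the toy particle swarm below minimizes a one-dimensional quadratic "energy cost". The cost function and all parameters are illustrative assumptions, since the abstract does not specify the MPRSO variant in detail.

```python
import random

def cost(x):
    """Hypothetical energy model with its minimum at x = 3."""
    return (x - 3.0) ** 2 + 1.0

random.seed(0)
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5   # standard PSO parameters
xs = [random.uniform(-10, 10) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]                                   # personal bests
gbest = min(xs, key=cost)                       # global best

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # Velocity: inertia + pull toward personal best + pull toward global best.
        vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
        xs[i] += vs[i]
        if cost(xs[i]) < cost(pbest[i]):
            pbest[i] = xs[i]
        if cost(xs[i]) < cost(gbest):
            gbest = xs[i]

print(round(gbest, 3))   # converges near the minimum at x = 3
```

In a WSN setting, the position of a particle would instead encode a candidate route or cluster-head assignment, and the cost would model transmission energy and residual node battery.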

Keywords: wireless sensor network, routing, swarm intelligence, MPRSO

Procedia PDF Downloads 342
31689 Synthesis, Characterization and Catalytic Applications of Divalent Schiff Base Metal Complexes Derived from Amino Coumarins and Substituted Benzaldehydes and Acetophenones

Authors: Srinivas Nerella

Abstract:

A series of new heterodentate N,O-donor ligands, derived from condensing 3-aminocoumarins with hydroxy benzaldehydes and acetophenones, were used to prepare new mononuclear Mn(II), Co(II), Ni(II), Cu(II), Zn(II) and Pd(II) coordination compounds. All the complexes were characterized by IR, 1H-NMR, 13C-NMR, mass, ESR and electronic spectra, as well as conductance, magnetic and thermal studies. The ligands show hexacoordination in the Mn(II), Co(II), Ni(II) and Pd(II) complexes, resulting in octahedral geometries, while in the Zn(II) and Cu(II) complexes they show tetracoordination, resulting in tetrahedral and square-planar geometries respectively. These mononuclear complexes were investigated as catalysts for the hydrothiolation of aromatic and aliphatic alkynes with thiols. The metal complexes acted as versatile catalysts and gave good yields.

Keywords: Schiff bases, divalent metal complexes of Schiff bases, catalytic activity, hydrothiolation

Procedia PDF Downloads 406
31688 Accidental Compartment Fire Dynamics: Experiment, Computational Fluid Dynamics Weakness and Expert Interview Analysis

Authors: Timothy Onyenobi

Abstract:

Accidental fires and their dynamics, as they relate to building compartmentation and the impact of compartment morphology, are still an ongoing area of study, especially with the use of computational fluid dynamics (CFD) modeling methods. With better knowledge of this subject come better solution recommendations from fire engineers. Interviews were carried out for this study, in which it was identified that the response perspectives on accidental fire differ: the fire engineers provided qualitative data based on "what is expected in real fires", while the firefighters provided information on "what actually happens in real fires". This further led to the study and analysis of two real, comprehensively instrumented fire experiments: the Open Plan Office Project by the National Institute of Standards and Technology (NIST), USA (to study time to flashover), and the TF2000 project by the Building Research Establishment (BRE), UK (to test conformity with Building Regulation requirements). The findings from the analysis of the experiments revealed the relative yet critical weakness of fire prediction using a CFD model (usually used by fire engineers), and explained the differences between the response perspectives of the fire engineers and firefighters in the interview analysis.

Keywords: CFD, compartment fire, experiment, fire fighters, fire engineers

Procedia PDF Downloads 320
31687 2D Nanomaterials-Based Geopolymer as-Self-Sensing Buildings in Construction Industry

Authors: Maryam Kiani

Abstract:

The self-sensing capability opens up new possibilities for structural health monitoring, offering real-time information on the condition and performance of constructions. The synthesis and characterization of these functional 2D material geopolymers will be explored in this study. Various fabrication techniques, including mixing, dispersion, and coating methods, will be employed to ensure uniform distribution and integration of the 2D materials within the geopolymers. The resulting composite materials will be evaluated for their mechanical strength, electrical conductivity, and sensing capabilities through rigorous testing and analysis. The potential applications of these self-sensing geopolymers are vast. They can be used in infrastructure projects, such as bridges, tunnels, and buildings, to provide continuous monitoring and early detection of structural damage or degradation. This proactive approach to maintenance and safety can significantly improve the lifespan and efficiency of constructions, ultimately reducing maintenance costs and enhancing overall sustainability. In conclusion, the development of functional 2D material geopolymers as self-sensing materials presents an exciting advancement in the construction industry. By integrating these innovative materials into structures, we can create a new generation of intelligent, self-monitoring constructions that can adapt and respond to their environment.

Keywords: 2D materials, geopolymers, electrical properties, self-sensing

Procedia PDF Downloads 111
31686 The Use of Ontology Framework for Automation Digital Forensics Investigation

Authors: Ahmad Luthfi

Abstract:

One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence in order to obtain relevant information on the case being handled. In order to get fast and accurate results, this paper discusses the approach known as an ontology framework. This model uses a structured hierarchy of layers that connects the various search and investigation activities so that computer forensic analysis can be carried out automatically. Two main layers are used, namely analysis tools and operating system. Using the concept of ontology, the second layer is designed to help the investigator perform the acquisition of digital evidence automatically. The automation methodology of this research utilizes forward chaining, where the system searches over investigative steps automatically structured in accordance with the rules of the ontology.
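The forward-chaining idea can be sketched as a fixpoint loop over if-then rules: any rule whose premises are all known fires and adds its conclusion, until nothing new can be derived. The facts and rule names below are hypothetical illustrations of investigative steps, not the paper's actual ontology.

```python
# Rules are (premises, conclusion) pairs; all names are invented examples.
rules = [
    ({"device_seized"}, "image_acquired"),
    ({"image_acquired"}, "hash_verified"),
    ({"hash_verified", "os_identified"}, "artifacts_extracted"),
]

def forward_chain(facts, rules):
    """Fire rules until a fixpoint: no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"device_seized", "os_identified"}, rules)
print(sorted(derived))
```

Starting from two initial facts, the engine derives the whole chain of investigative steps in order, which is the behaviour the automation approach relies on.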

Keywords: ontology, framework, automation, forensics

Procedia PDF Downloads 324
31685 Enhancement of Cement Mortar Mechanical Properties with Replacement of Seashell Powder

Authors: Abdoullah Namdar, Fadzil Mat Yahaya

Abstract:

Many synthetic additives have been used to improve the characteristics of cement mortar and concrete, but natural additives are an environmentally friendly option. Seashell powder was substituted into cement mortar in quantities of 2% and 4% and compared with plain cement mortar at the early age of 7 days. Strain gauges were installed on the beams and cube to monitor the fluctuation of flexural and compressive strength. The main objective of this paper is to study the effect of a linear static force on the flexural and compressive strength of the modified cement mortar. The results indicate that the replacement of an appropriate proportion of seashell powder enhances the mechanical properties of cement mortar. A 2% seashell replacement improves the deflection, time to failure and maximum load to failure of the concrete beam and cube; the same holds for the compressive modulus of elasticity. Increasing the seashell replacement to 4% reduces the flexural strength, compressive strength and strain of the cement mortar.
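The compressive modulus of elasticity compared in such tests is simply the slope of the stress-strain curve in the elastic range. The sketch below shows the arithmetic with invented figures, not the paper's measurements.

```python
# E = stress / strain in the elastic range; all numbers are illustrative.
def compressive_modulus(load_n, area_mm2, strain):
    stress_mpa = load_n / area_mm2    # stress = load / cross-sectional area (MPa)
    return stress_mpa / strain        # modulus of elasticity (MPa)

# Hypothetical 50 mm cube under a 25 kN load at a measured strain of 0.0005
E = compressive_modulus(25_000, 50 * 50, 0.0005)
print(E)   # 20000.0 MPa, i.e. 20 GPa
```

A mortar whose modulus rises after an additive replacement sustains the same stress at a smaller strain, which is consistent with the improvement reported for the 2% mix.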

Keywords: compressive strength, flexural strength, compressive modulus of elasticity, time to failure, deflection

Procedia PDF Downloads 441
31684 Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm

Authors: Kristian Bautista, Ruben A. Idoy

Abstract:

A bio-inspired metaheuristic algorithm based on the theory of collective animal behavior (CAB) was integrated with density-based clustering modeled as a multimodal optimization problem. The algorithm was tested on synthetic, Iris, Glass, Pima and Thyroid data sets in order to measure its effectiveness relative to a CDE-based clustering algorithm. Preliminary testing found that one of the parameter settings was ineffective for clustering, prompting further investigation. It was revealed that fine-tuning the distance δ3, which determines the extent to which a given data point will be clustered, helped improve the quality of the cluster output. Even though the modification of δ3 significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population means of the solutions obtained using the original and modified parameter settings for any data set. This implies that using either the original or the modified parameter setting has no effect on obtaining the best global and local animal positions. Results also suggest that the CDE-based clustering algorithm outperforms the CAB-based density clustering algorithm on all data sets. Nevertheless, the CAB-based density clustering algorithm remains a good clustering algorithm because it correctly identified the number of classes in some data sets more frequently over thirty trial runs, with a much smaller standard deviation, showing potential for clustering high-dimensional data sets. The researcher therefore recommends further investigation of the post-processing stage of the algorithm.
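The role of the distance threshold (the paper's δ3) in deciding "the extent to which a given data point will be clustered" can be illustrated with a deliberately simplified single-link rule: a point joins the cluster of the first earlier point within δ of it, otherwise it seeds a new cluster. This is a stand-in sketch, not the CAB algorithm itself.

```python
import numpy as np

def threshold_cluster(points, delta):
    """Assign each point to the cluster of the first earlier point within delta."""
    labels = [-1] * len(points)
    next_label = 0
    for i, p in enumerate(points):
        for j in range(i):
            if np.linalg.norm(p - points[j]) <= delta:
                labels[i] = labels[j]
                break
        if labels[i] == -1:            # no neighbour close enough: new cluster
            labels[i] = next_label
            next_label += 1
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(threshold_cluster(pts, delta=0.5))   # [0, 0, 1, 1]
print(threshold_cluster(pts, delta=10.0))  # [0, 0, 0, 0]
```

As the second call shows, an overly large threshold merges everything into one cluster, which is exactly why tuning δ3 changed the quality of the cluster output.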

Keywords: clustering, metaheuristics, collective animal behavior algorithm, density-based clustering, multimodal optimization

Procedia PDF Downloads 212