Search results for: time driven activity based costing

39161 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment

Authors: P. Venu, Joeju M. Issac

Abstract:

Quality Function Deployment (QFD) is a systematic technique that creates a platform on which customer responses can be converted into design attributes. The accuracy of a QFD process depends heavily on the data it handles, which is captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by various companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database. This database must act as a reliable source with a minimum of missing or erroneous values in order to perform accurate prioritization. This paper introduces a missing/error data handler module that uses a Genetic Algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between deployment based on the proposed data handler module and manual deployment.
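
As a rough illustration of the kind of data handling such a module performs, the sketch below imputes missing ratings in a small QFD relationship matrix with a genetic algorithm. The matrix, the 1-9 rating scale, and the fitness criterion (closeness to observed column means) are illustrative assumptions, not the module described in the paper.

```python
# Hypothetical sketch: GA-based imputation of missing ratings in a QFD matrix.
# Assumes ratings are integers 1-9; the fitness criterion (closeness to the
# column means of observed ratings) is an illustrative choice, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def ga_impute(matrix, generations=200, pop_size=40, mutation_rate=0.1):
    missing = np.argwhere(np.isnan(matrix))
    col_means = np.nanmean(matrix, axis=0)

    # Each individual is one candidate value (1-9) per missing cell.
    population = rng.integers(1, 10, size=(pop_size, len(missing)))

    def fitness(individual):
        # Penalise deviation from the observed column means.
        return -np.sum((individual - col_means[missing[:, 1]]) ** 2)

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        children = parents[rng.integers(0, len(parents), size=(pop_size - len(parents)))]
        mutate = rng.random(children.shape) < mutation_rate
        children = np.where(mutate, rng.integers(1, 10, size=children.shape), children)
        population = np.vstack([parents, children])

    best = population[np.argmax([fitness(ind) for ind in population])]
    filled = matrix.copy()
    filled[tuple(missing.T)] = best
    return filled

qfd = np.array([[9, 3, np.nan], [7, np.nan, 5], [np.nan, 1, 3]], dtype=float)
print(ga_impute(qfd))
```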

Keywords: hybrid data handler, QFD, prioritization, module-based deployment

Procedia PDF Downloads 283
39160 [Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method

Authors: Vijay Kumar Kukreja, Ravneet Kaur

Abstract:

In this paper, a one-dimensional advection-diffusion model is analyzed using a finite difference method based on the Crank-Nicolson scheme. A practical chemical engineering problem, the washing of filter cake, is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson scheme is used for the spatial derivatives and a forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent and stable, first-order accurate in time and second-order accurate in space. For a test problem, numerical results are compared with the analytical ones for different parameter values.
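
For orientation, a generic Crank-Nicolson sketch for a one-dimensional advection-diffusion equation with Dirichlet boundaries follows; it uses the standard trapezoidal time discretization rather than the forward-difference variant described above, and the velocity, diffusivity, and grid values are assumed for illustration.

```python
# Illustrative sketch (not the authors' code): Crank-Nicolson for
# u_t + v*u_x = D*u_xx on x in [0, 1], with Dirichlet boundaries.
import numpy as np

nx, nt = 51, 200
dx, dt = 1.0 / (nx - 1), 0.005
v, D = 1.0, 0.05                      # advection speed and diffusivity (assumed values)

r = D * dt / (2 * dx**2)              # diffusion number (split over two time levels)
c = v * dt / (4 * dx)                 # advection number

# Assemble tridiagonal systems A u^{n+1} = B u^{n}.
A = np.zeros((nx, nx)); B = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r - c, 1 + 2 * r, -r + c
    B[i, i - 1], B[i, i], B[i, i + 1] = r + c, 1 - 2 * r, r - c
A[0, 0] = A[-1, -1] = B[0, 0] = B[-1, -1] = 1.0   # boundary rows

u = np.where(np.linspace(0, 1, nx) < 0.2, 1.0, 0.0)  # initial solute profile
for _ in range(nt):
    u = np.linalg.solve(A, B @ u)
    u[0], u[-1] = 1.0, 0.0            # inlet/outlet boundary conditions

print(u.round(3))
```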

Keywords: Crank-Nicolson scheme, Lax-Richtmyer theorem, stability, consistency, Peclet number, Gershgorin circle

Procedia PDF Downloads 211
39159 Review of the Model-Based Supply Chain Management Research in the Construction Industry

Authors: Aspasia Koutsokosta, Stefanos Katsavounis

Abstract:

This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. The construction industry has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. However, over the last decade there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models according to their scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we identify that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.

Keywords: construction supply chain management, modeling, operations research, optimization, simulation

Procedia PDF Downloads 496
39158 Satellite Image Classification Using Firefly Algorithm

Authors: Paramjit Kaur, Harish Kundra

Abstract:

In recent years, the swarm-intelligence-based firefly algorithm has attracted considerable attention from researchers for solving real-time optimization problems. Here, the firefly algorithm is applied to satellite image classification. For experimentation, the Alwar area is considered, containing multiple land features such as vegetation, barren, hilly, residential and water surfaces. The Alwar dataset consists of seven-band satellite images. The firefly algorithm is based on the attraction of less bright fireflies towards brighter ones. To evaluate the proposed concept, accuracy assessment parameters are calculated from the error matrix. From the error matrix, the kappa coefficient, overall accuracy, and the feature-wise user's and producer's accuracies can be calculated. Overall results are compared with BBO, PSO, hybrid FPAB/BBO, hybrid ACO/SOFM and hybrid ACO/BBO on the basis of the kappa coefficient and overall accuracy.
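
The accuracy-assessment step reads directly off the error (confusion) matrix; the sketch below shows the standard overall accuracy, kappa coefficient, and per-class user's/producer's accuracies on a made-up three-class matrix, not the Alwar results.

```python
# Illustrative sketch: overall accuracy and kappa coefficient from an error
# (confusion) matrix; the matrix values are made up, not the Alwar results.
import numpy as np

error_matrix = np.array([      # rows: classified, columns: reference
    [50,  3,  2],
    [ 4, 45,  1],
    [ 1,  2, 42],
])

n = error_matrix.sum()
observed = np.trace(error_matrix) / n                          # overall accuracy
expected = (error_matrix.sum(0) * error_matrix.sum(1)).sum() / n**2
kappa = (observed - expected) / (1 - expected)

producers = np.diag(error_matrix) / error_matrix.sum(axis=0)   # per-class producer's accuracy
users = np.diag(error_matrix) / error_matrix.sum(axis=1)       # per-class user's accuracy

print(f"overall accuracy = {observed:.3f}, kappa = {kappa:.3f}")
print("producer's:", producers.round(3), "user's:", users.round(3))
```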

Keywords: image classification, firefly algorithm, satellite image classification, terrain classification

Procedia PDF Downloads 385
39157 Lecture Video Indexing and Retrieval Using Topic Keywords

Authors: B. J. Sandesh, Saurabha Jirgi, S. Vidya, Prakash Eljer, Gowri Srinivasa

Abstract:

In this paper, we propose a framework to help users search and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. We use the text transcribed from the video and documents relevant to the video topic extracted from the web for this purpose. The keywords for indexing are found by applying the non-negative matrix factorization (NMF) topic modeling technique to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end times of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for the query provided by the users.
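
A minimal sketch of the keyword-extraction step using scikit-learn's TF-IDF and NMF follows; the toy documents and the number of topics are placeholders, not the authors' corpus or settings.

```python
# Minimal sketch of NMF-based topic-keyword extraction with scikit-learn;
# documents and parameter choices are placeholders, not the authors' setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

web_documents = [
    "gradient descent minimizes a loss function by iterative updates",
    "stochastic gradient descent samples mini batches of training data",
    "convolutional networks extract spatial features from images",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(web_documents)

nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}: {top_terms}")   # candidate index keywords
```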

Keywords: video indexing and retrieval, lecture videos, content based video search, multimodal indexing

Procedia PDF Downloads 237
39156 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classification using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
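
The node-classification step can be sketched with a small PyTorch Geometric model; the toy graph, feature sizes, and two-class labels below are stand-ins for the Synthea/TigerGraph data and the autoencoder-based embeddings used in the paper.

```python
# Hedged sketch of node classification in PyTorch Geometric; the graph here is
# a toy stand-in, not the Synthea/TigerGraph EHR data used in the paper.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

x = torch.randn(6, 8)                       # 6 nodes, 8 features each
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])
y = torch.tensor([0, 0, 1, 1, 0, 1])        # node labels (e.g. patient conditions)

data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    optimizer.step()

pred = model(data).argmax(dim=1)            # predicted condition class per node
print(pred)
```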

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 74
39155 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential to building smarter cities. Machine-learning-based transportation prediction is a promising approach because it makes invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data. The data are gathered through public data portals and real-time Application Programming Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations from traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model is designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research demonstrates an effective use of machine learning and urban big data to understand urban dynamics.

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 241
39154 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
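
A small sketch of the Random Forest half of the approach, including the feature-importance readout, follows; the activity-level features and the synthetic overrun target are invented placeholders, not the case-study data.

```python
# Minimal sketch of the Random Forest approach described above; feature names
# and data are invented placeholders, not the case-study activity data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 1e6, 500),
    "scope_changes": rng.integers(0, 10, 500),
    "material_delay_days": rng.integers(0, 30, 500),
    "crew_size": rng.integers(2, 40, 500),
})
# Synthetic target: overrun grows with scope changes and material delays.
df["cost_overrun"] = (0.02 * df.planned_cost * df.scope_changes / 10
                      + 500 * df.material_delay_days
                      + rng.normal(0, 2000, 500))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cost_overrun"), df["cost_overrun"], random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
print(dict(zip(X_train.columns, model.feature_importances_.round(3))))
```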

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 21
39153 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation and high precision of modern computing systems. We construct the PINN on the basis of a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
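
The weighted-loss idea can be sketched in PyTorch as below; the network size, the stand-in residual (which is not the actual Fokas-Lenells residual), the placeholder data, and the weight values are all assumptions for illustration.

```python
# Conceptual sketch of a PINN-style weighted loss in PyTorch; the network, the
# residual term, and the weights are placeholders, not the paper's setup.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))          # outputs (Re u, Im u) at (x, t)

def pde_residual(xt):
    """Placeholder residual; the true Fokas-Lenells residual would combine
    autograd derivatives of u with respect to x and t here."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    return du.pow(2).sum(dim=1)                # stand-in, not the real PDE

xt_collocation = torch.rand(1000, 2)
xt_data = torch.rand(200, 2)
u_data = torch.zeros(200, 2)                   # initial/boundary values (placeholder)

w_data, w_pde = 10.0, 1.0                      # adjustable weight coefficients
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    optimizer.zero_grad()
    loss = (w_data * (net(xt_data) - u_data).pow(2).mean()
            + w_pde * pde_residual(xt_collocation).mean())
    loss.backward()
    optimizer.step()
```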

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 59
39152 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Because of their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This research gap is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, because of their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created, as listed below with a back-of-the-envelope illustration afterwards: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed towards the two capacity constrained resources, and (x) time between failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academic literature is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, both hybrid time to repair improvement and hybrid time between failures improvement delivered better results than the respective distributed strategies. The main limitations of this study concern the specific flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
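
To see why these two parameters matter, the Factory Physics view ties them to effective capacity through availability; the short sketch below compares a focused MTTR cut against a focused MTBF improvement with invented numbers, not the simulated plant's.

```python
# Back-of-the-envelope sketch of why MTTR and MTBF drive lead time in the
# Factory Physics framework: availability scales the effective capacity of the
# constrained resource. All numbers are illustrative, not the simulated plant's.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

def effective_rate(natural_rate_per_hour, mtbf_hours, mttr_hours):
    return natural_rate_per_hour * availability(mtbf_hours, mttr_hours)

base = effective_rate(20.0, mtbf_hours=40.0, mttr_hours=4.0)
improved_mttr = effective_rate(20.0, mtbf_hours=40.0, mttr_hours=2.0)   # focused MTTR programme
improved_mtbf = effective_rate(20.0, mtbf_hours=80.0, mttr_hours=4.0)   # focused MTBF programme

print(f"effective rate: base={base:.2f}, MTTR halved={improved_mttr:.2f}, "
      f"MTBF doubled={improved_mtbf:.2f} units/hour")
```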

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 107
39151 Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄

Authors: Taslim Khan, Ray Hua Horng, Rajendra Singh

Abstract:

This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown by MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The photodetector also exhibits a photodetector responsivity contrast ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detectivity of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10⁻¹⁷ W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications. It offers a compelling combination of sensitivity, selectivity, and operational efficiency, making it a highly promising tool for environments requiring precise and reliable UV detection.

Keywords: wideband gap, solar blind photodetector, MOCVD, zinc gallate

Procedia PDF Downloads 17
39150 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment

Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati

Abstract:

This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution within the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for multiprocessor environments. The BR mechanism attempts to take faulty tasks back to their initial safe state and then re-executes the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance mechanism that can nullify their erroneous effects needs to be developed. Current TUF/UA scheduling algorithms use an abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA multiprocessor scheduling domain has considered transient faults or implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effects and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives its set of parameters, events and performance metrics from a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in multiprocessor scheduling environments.

Keywords: real-time system (RTS), time utility function/ utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)

Procedia PDF Downloads 292
39149 Challenges of the Implementation of Real Time Online Learning in a South African Context

Authors: Thifhuriwi Emmanuel Madzunye, Patricia Harpur, Ephias Ruhode

Abstract:

A review of the pertinent literature identified a gap concerning the hindrances and opportunities accompanying the implementation of real-time online learning systems (RTOLs) in rural areas. Whilst RTOLs present a possible solution to teaching and learning issues in rural areas, little is known about the implementation of digital strategies among schools in isolated communities. This study explores associated guidelines that have the potential to inform decision-making where Internet-based education could improve educational opportunities. A systematic literature review, which has the potential to consolidate and focus disparate literature, served to collect interlinked data from specific sources in a structured manner. During qualitative data analysis (QDA) of selected publications via the application of a QDA tool, ATLAS.ti, the following overarching themes emerged: digital divide, educational strategy, human factors, and support. Furthermore, findings from the data collection and literature review suggest that significant factors, including a lack of digital knowledge, infrastructure shortcomings such as a lack of computers, and poor internet connectivity, handicap real-time online learning and may limit students' progress. The study recommends that timeous consideration should be given to the influence of the digital divide. Additionally, the evolution of an educational strategy that adopts digital approaches, a focus on training role-players and stakeholders concerning human factors, and the seeking of governmental funding and support are essential to the implementation and success of RTOLs.

Keywords: communication, digital divide, digital skills, distance, educational strategy, government, ICT, infrastructures, learners, limpopo, lukalo, network, online learning systems, political-unrest, real-time, real-time online learning, real-time online learning system, pass-rate, resources, rural area, school, support, teachers, teaching and learning and training

Procedia PDF Downloads 316
39148 Study of Natural Radioactive and Radiation Hazard Index of Soil from Sembrong Catchment Area, Johor, Malaysia

Authors: M. I. A. Adziz, J. Sharib Sarip, M. T. Ishak, D. N. A. Tugi

Abstract:

Radiation exposure of humans and the environment is caused by natural radioactive material sources. Given that exposure of people and communities can occur through several pathways, it is necessary to pay attention to increases in naturally radioactive material, particularly in the soil. Continuous research and monitoring of the distribution and activity of these natural radionuclides, as a guide and reference, are beneficial, especially in the event of an accidental exposure. Surface soil/sediment samples from several locations identified around the Sembrong catchment area were taken for the study. After 30 days of secular equilibrium with their daughters, the activity concentrations of the naturally occurring radioactive material (NORM) members, i.e. ²²⁶Ra, ²²⁸Ra, ²³⁸U, ²³²Th, and ⁴⁰K, were measured using a high-purity germanium (HPGe) gamma spectrometer. The results showed that the radioactivity concentration of ²³⁸U ranged between 17.13 - 30.13 Bq/kg, ²³²Th between 22.90 - 40.05 Bq/kg, ²²⁶Ra between 19.19 - 32.10 Bq/kg, ²²⁸Ra between 21.08 - 39.11 Bq/kg and ⁴⁰K between 9.22 - 51.07 Bq/kg, with average values of 20.98 Bq/kg, 27.39 Bq/kg, 23.55 Bq/kg, 26.93 Bq/kg and 23.55 Bq/kg respectively. The values obtained in this study were low or equivalent to those reported in previous studies. It was also found that the mean values obtained for the four radiation hazard index parameters, namely radium equivalent activity (Raeq), external dose rate (D), annual effective dose and external hazard index (Hₑₓ), were 65.40 Bq/kg, 29.33 nGy/h, 19.18 × 10⁻⁶ Sv and 0.19 respectively. These values are low compared to the world average values and globally applied standards. Comparison with previous studies (dry season) also found that the values for all four parameters were low and equivalent. This indicates that the level of radiation hazard in the area around the study site is safe for the public.
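
For reference, hazard indices of this kind are commonly computed from the mean activity concentrations with UNSCEAR-style coefficients; the sketch below uses those textbook expressions rather than values taken from the paper itself, and the occupancy assumptions for the annual effective dose in particular vary between studies, so the outputs only roughly reproduce the reported means.

```python
# Sketch of commonly used radiation hazard index expressions (UNSCEAR-style
# coefficients; textbook values, not taken from this paper).
def radium_equivalent(a_ra, a_th, a_k):             # Bq/kg
    return a_ra + 1.43 * a_th + 0.077 * a_k

def absorbed_dose_rate(a_ra, a_th, a_k):             # nGy/h
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

def external_hazard_index(a_ra, a_th, a_k):
    return a_ra / 370 + a_th / 259 + a_k / 4810

def annual_effective_dose_outdoor_mSv(dose_nGy_h, occupancy=0.2, conversion=0.7):
    return dose_nGy_h * 8760 * occupancy * conversion * 1e-6

# Mean activity concentrations reported above (Bq/kg).
ra, th, k = 23.55, 27.39, 23.55
d = absorbed_dose_rate(ra, th, k)
print(radium_equivalent(ra, th, k), d,
      external_hazard_index(ra, th, k),
      annual_effective_dose_outdoor_mSv(d))
```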

Keywords: catchment area, gamma spectrometry, naturally occurring radioactive material (NORM), soil

Procedia PDF Downloads 85
39147 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or 'cap-and-trade', has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to other, unintended consequences brought about by emissions trading. One important consequence is that cap-and-trade could introduce the risk of creating high emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the current environmental policy literature by linking trading activity with environmental injustice concerns and empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program for the first time. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) with instrumental variable method to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is the facility-year level emissions and allowance transaction data collected from the US EPA air market databases. While polluting facilities under CAIR are the treatment group in our DID identification, we use non-CAIR facilities from the Acid Rain Program, another NOx control program without a trading scheme, as the control group. To isolate the causal effects of trading behaviors on emissions reduction, we also use eligibility for CAIR participation as the instrumental variable. The DID results indicate that the CAIR program reduced NOx emissions from affected facilities by about 10% more than facilities that did not participate in the CAIR program. Therefore, CAIR achieves excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still decreases a CAIR-participating facility's emissions level significantly. This result implies that even buyers under the cap-and-trade program have achieved a great amount of emissions reduction. Therefore, we conclude that there is little evidence of environmental injustice from the CAIR program.
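
A hedged sketch of the kind of difference-in-differences specification described above, written with statsmodels, follows; the CSV file, the column names, and the fixed-effects and clustering choices are placeholders, not the study's actual EPA data or model.

```python
# Hedged sketch of a DID specification with facility and year fixed effects,
# in the spirit of the analysis described above; the file and column names
# are hypothetical placeholders, not the EPA air-market data.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: facility_id, year, log_nox, cair_facility (0/1),
# post (0/1 for years after CAIR start).
df = pd.read_csv("facility_year_emissions.csv")   # hypothetical file

did = smf.ols(
    "log_nox ~ cair_facility * post + C(facility_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["facility_id"]})

print(did.params["cair_facility:post"])   # DID estimate of the CAIR effect
```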

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 130
39146 Municipal Asset Management Planning 2.0 – A New Framework For Policy And Program Design In Ontario

Authors: Scott R. Butler

Abstract:

Ontario, Canada's largest province, is in the midst of an interesting experiment in mandated asset management planning for local governments. At the beginning of 2021, Ontario's 444 municipalities were responsible for the management of 302,864 lane kilometers of roads that have a replacement cost of $97.545 billion CDN. Roadways are by far the most complex, expensive, and extensive assets that a municipality is responsible for overseeing. Since adopting Ontario Regulation 588/17: Asset Management Planning for Municipal Infrastructure in 2017, the provincial government has established prescriptions for local road authorities regarding asset categories and the levels of service being provided. This provincial regulation further stipulates that asset data such as extent, condition, and life cycle costing are to be captured in a manner compliant with qualitative descriptions and technical metrics. The Ontario Good Roads Association undertook an exercise to aggregate the road-related data contained within the 444 asset management plans that municipalities have filed with the provincial government. This analysis concluded that, collectively, Ontario municipal roadways carry $34.7 billion CDN in deferred maintenance. The ill state of repair of Ontario municipal roads has lasting implications for the province's economic competitiveness and has garnered considerable political attention. Municipal efforts to address the maintenance backlog are stymied by the extremely limited fiscal parameters within which municipalities must operate in Ontario. Further exacerbating the problem are provincially designed programs that are ineffective, administratively burdensome, and not necessarily aligned with local priorities or strategies. This paper addresses how municipal asset management plans, and more specifically the data contained in these plans, can be used to design innovative policy frameworks, flexible funding programs, and new levels of service that respond to these funding challenges, as well as to emerging issues such as local economic development and climate change. Fully unlocking the potential of Ontario Regulation 588/17 will require a resolute commitment to data standardization and horizontal collaboration between municipalities within regions.

Keywords: transportation, municipal asset management, subnational policy design, subnational funding program design

Procedia PDF Downloads 82
39145 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory and randomness are intrinsic properties of time series of earthquakes. Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method is a simple and elegant analysis that determines the range of variation of one natural property (the seismic energy released, in this case) in a time interval. Despite its simplicity, there is complexity inherent in the property measured. The cumulative curve of the energy released in time has the well-known fractal geometry of a devil's staircase. This geometry is used to determine the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time series of earthquakes binned by day. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis, where different earthquakes take place in clusters within a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, where at least five medium-sized earthquakes were triggered. According to the Hurst exponent values obtained for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
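
A minimal sketch of the R/S procedure on a daily released-energy series follows; the synthetic lognormal input and the dyadic window sizes are assumptions for illustration, not the Alhoceima catalogue or the authors' algorithm.

```python
# Illustrative rescaled-range (R/S) estimate of the Hurst exponent for a daily
# released-energy series; the synthetic input stands in for a real catalogue.
import numpy as np

def rescaled_range(series):
    deviations = np.cumsum(series - series.mean())   # devil's-staircase-like cumulative curve
    r = deviations.max() - deviations.min()           # range of the cumulative deviations
    s = series.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_exponent(series, min_window=8):
    n = len(series)
    sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        chunks = [series[i:i + size] for i in range(0, n - size + 1, size)]
        rs = np.nanmean([rescaled_range(np.asarray(c)) for c in chunks])
        sizes.append(size); rs_values.append(rs)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope                                       # H > 0.5 -> persistent (long memory)

rng = np.random.default_rng(0)
daily_energy = rng.lognormal(mean=10, sigma=1.5, size=512)  # synthetic daily energy release
print(f"H = {hurst_exponent(daily_energy):.2f}")
```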

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 305
39144 Comparison Between a Droplet Digital PCR and Real Time PCR Method in Quantification of HBV DNA

Authors: Surangrat Srisurapanon, Chatchawal Wongjitrat, Navin Horthongkham, Ruengpung Sutthent

Abstract:

HBV infection is a potentially serious public health problem. The ability to measure HBV DNA concentration is important and is being improved continuously. In quantitative Polymerase Chain Reaction (qPCR), several factors that require standardization, namely the source material, the calibration standard curve and the PCR efficiency, are inconsistent. Digital PCR (dPCR) is an alternative PCR-based technique for absolute quantification using Poisson statistics, without requiring a standard curve. Therefore, the aim of this study is to compare the HBV DNA data generated by the dPCR and qPCR methods. All samples were quantified by Abbott's real-time PCR, and 54 samples with 2-6 log₁₀ HBV DNA were selected for comparison with dPCR. Of these 54 samples, two outliers were defined as negative by dPCR, whereas 52 samples were positive by both tests. The difference between the two assays was less than 0.25 log IU/mL in 24/52 (46%) of the paired samples, less than 0.5 log IU/mL in 46/52 samples (88%), and less than 1 log in 50/52 samples (96%). The correlation coefficient was r = 0.788 and the P-value was <0.0001. Compared to qPCR, the data generated by dPCR tended to overestimate samples with low HBV DNA concentrations and underestimate samples with high viral loads. The variation in the dPCR measurements might be due to pre-amplification bias of the template. Moreover, a minor drawback of dPCR is the large quantity of DNA required compared to qPCR. Since the technology is relatively new, the limitations of this assay are expected to be improved upon.
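
For context, dPCR's standard-curve-free quantification comes from Poisson statistics on the fraction of negative partitions; the sketch below shows that calculation with invented droplet counts and volume, not data from this study.

```python
# Sketch of the Poisson-based absolute quantification used in digital PCR;
# partition counts and volumes are illustrative, not values from this study.
import math

def dpcr_concentration(total_partitions, negative_partitions, partition_volume_uL):
    """Mean copies per partition from the fraction of negative partitions,
    converted to copies per microliter of reaction mix."""
    lam = -math.log(negative_partitions / total_partitions)   # Poisson estimate
    return lam / partition_volume_uL

# Example: 20,000 droplets of 0.85 nL each, 14,000 of which show no signal.
copies_per_uL = dpcr_concentration(20000, 14000, 0.85e-3)
print(f"{copies_per_uL:.0f} copies/uL")
```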

Keywords: hepatitis B virus, real time PCR, digital PCR, DNA quantification

Procedia PDF Downloads 470
39143 Evaluation of Natural Gums: Gum Tragacanth, Xanthan Gum, Guar Gum and Gum Acacia as Potential Hemostatic Agents

Authors: Himanshu Kushwah, Nidhi Sandal, Meenakshi K. Chauhan, Gaurav Mittal

Abstract:

Excessive bleeding is the primary cause of avoidable death in both civilian trauma centers and on the military battlefield. Hundreds of Indian troops die every year due to blood loss caused by combat-related injuries. These deaths are avoidable and can be prevented to a large extent by making a suitable hemostatic dressing available in an emergency medical kit. In this study, natural gums were evaluated as potential hemostatic agents in combination with calcium gluconate. The study compares the hemostatic activity of Gum Tragacanth (GT), Guar Gum (GG), Xanthan Gum (XG) and Gum Acacia (GA) by carrying out different in-vitro and in-vivo studies. In-vitro studies were performed using the Lee-White method and the Eustrek method, which include visual and microscopic analysis of blood clotting. An MTT assay was also performed using human lymphocytes to check the cytotoxicity of the gums. The in-vivo studies were performed in Sprague Dawley rats using a tail bleeding assay to evaluate the hemostatic efficacy of the gums, compared with a commercially available hemostatic sponge, Surgispon. An erythrocyte agglutination test was also performed to check the interaction between blood cells and the natural gums. Other parameters, such as blood loss, the adherence strength of the developed hemostatic dressing material incorporating these gums, re-bleeding, and survival of the animals, were also studied. The data obtained from the MTT assay showed that Guar gum, Gum Tragacanth, and Gum Acacia were not significantly cytotoxic, but substantial cytotoxicity was observed in Xanthan gum samples at high concentrations. Also, Xanthan gum took the least time, at its minimum concentration, to achieve hemostasis (approximately 50 seconds at a 3 mg concentration). Gum Tragacanth also showed efficient hemostasis at a concentration of 35 mg in the same time, but the other two gums tested were not able to clot the blood in significantly less time. A sponge dressing made of Tragacanth gum was found to be more efficient in achieving hemostasis and showed better practical applicability than all the other gums studied, and also when compared to the commercially available product, Surgispon, thus making it a potentially better alternative.

Keywords: cytotoxicity, hemostasis, natural gums, sponge

Procedia PDF Downloads 130
39142 A Genetic Algorithm Approach for Multi Constraint Team Orienteering Problem with Time Windows

Authors: Uyanga Sukhbaatar, Ahmed Lbath, Mendamar Majig

Abstract:

The Orienteering Problem (OP) is the best-known starting point for modeling the tourist trip design problem. In order to meet tourists' interests and constraints, the OP is becoming more and more complicated to solve. The Multi Constraint Team Orienteering Problem with Time Windows (MCTOPTW) is the latest extension of the OP, differing from other extensions by including additional associated constraints. The goal of the MCTOPTW is to maximize the tourist's satisfaction score while not violating any of these constraints. This paper presents a genetic algorithm approach to tackle the MCTOPTW. Benchmark data from the literature are tested with our algorithm, and the performance results are compared.
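
As a rough illustration of what a candidate solution must respect, the toy fitness sketch below scores a single tour against time windows and a time budget; the points of interest, travel times, and the death-penalty fitness choice are invented, not the paper's encoding or benchmarks.

```python
# Toy sketch of how a GA fitness function might score a candidate tour in a
# time-window orienteering variant; POI data and travel times are invented.
pois = {
    # id: (score, open, close, visit_duration) in hours
    1: (8, 9.0, 12.0, 0.5),
    2: (5, 10.0, 16.0, 1.0),
    3: (9, 13.0, 18.0, 0.5),
}
travel_time = {(0, 1): 0.5, (1, 2): 0.7, (2, 3): 1.2, (0, 2): 0.6, (1, 3): 1.5}

def evaluate(route, start=9.0, budget=8.0, depot=0):
    """Return total satisfaction score, or 0 if any time window or the time
    budget is violated (a simple death-penalty fitness for a GA)."""
    t, score, prev = start, 0, depot
    for poi in route:
        s, open_t, close_t, dur = pois[poi]
        t = max(t + travel_time[(prev, poi)], open_t)   # wait if arriving early
        if t > close_t or t + dur - start > budget:
            return 0
        t += dur
        score += s
        prev = poi
    return score

print(evaluate([1, 2, 3]))   # feasible tour -> total score 22
```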

Keywords: multi constraint team orienteering problem with time windows, genetic algorithm, tour planning system

Procedia PDF Downloads 612
39141 Graphic Animation: Innovative Language Learning for Autistic Children

Authors: Norfishah Mat Rabi, Rosma Osman, Norziana Mat Rabi

Abstract:

It is difficult for autistic children to mix with and be around other people. Language difficulties are a problem that affects their social life. A lack of knowledge and ability in language are factors that greatly influence their behavior and their ability to communicate and interact. Autistic children need to be assisted to improve their language abilities through the use of suitable learning resources. This study was conducted to identify whether graphic animation resources can help autistic children learn and use transitive verbs more effectively. The study was conducted in a rural secondary school in Penang, Malaysia. The research subjects comprised three autistic students ranging in age from 14 to 16 years. The 14-year-old student was placed in A Class and the two 16-year-old students were placed in B Class. The class placement of the subjects was based on the results of a diagnostic test conducted by the teacher and not on age. Data collection was done through observation and interviews over a period of five weeks, with the researcher allocating 30 minutes to every learning activity carried out. The research findings show that the subjects learned transitive verbs better using graphic animation than with static pictures. It is hoped that this study will give a new perspective on the learning processes of autistic children.

Keywords: graphic animation, autistic children, language learning, teaching

Procedia PDF Downloads 261
39140 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems

Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself is non-linear behavior. The changing contact area between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts has led to the study of FEM-based calculation processes. This type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and possibilities for time reduction are presented.

Keywords: rubber bumper, data acquisition, finite element analysis, support vector regression

Procedia PDF Downloads 461
39139 Training Volume and Myoelectric Responses of Lower Body Muscles with Differing Foam Rolling Periods

Authors: Humberto Miranda, Haroldo G. Santana, Gabriel A. Paz, Vicente P. Lima, Jeffrey M. Willardson

Abstract:

Foam rolling is a practice that has increased in popularity before and after strength training. The purpose of this study was to compare the acute effects of different foam rolling periods for the lower body muscles on subsequent performance (total repetitions and training volume), myoelectric activity and rating of perceived exertion in trained men. Fourteen trained men (26.2 ± 3.2 years, 178 ± 0.04 cm height, 82.2 ± 10 kg weight and body mass index 25.9 ± 3.3kg/m2) volunteered for this study. Four repetition maximum (4-RM) loads were determined for hexagonal bar deadlift and 45º angled leg press during test and retest sessions over two nonconsecutive days. Five experimental protocols were applied in a randomized design, which included: a traditional protocol (control)—a resistance training session without prior foam rolling; or resistance training sessions performed following one (P1), two (P2), three (P3), or four (P4) sets of 30 sec. foam rolling for the lower extremity musculature. Subjects were asked to roll over the medial and lateral aspects of each muscle group with as much pressure as possible. All foam rolling was completed at a cadence of 50 bpm. These procedures were performed on both sides unilaterally as described below. Quadriceps: between the apex of the patella and the ASIS; Hamstring: between the gluteal fold and popliteal fossa; Triceps surae: between popliteal fossa and calcaneus tendon. The resistance training consisted of five sets with 4-RM loads and two-minute rest intervals between sets, and a four-minute rest interval between the hexagonal bar deadlift and the 45º angled leg press. The number of repetitions completed, the myoelectric activity of vastus lateralis (VL), vastus medialis oblique (VMO), semitendinosus (SM) and medial gastrocnemius (GM) were recorded, as well as the rating of perceived exertion for each protocol. There were no differences between the protocols in the total repetitions for the hexagonal bar deadlift (Control - 16.2 ± 5.9; P1 - 16.9 ± 5.5; P2 - 19.2 ± 5.7; P3 - 19.4 ± 5.2; P4 - 17.2 ± 8.2) (p > 0.05) and 45º angled leg press (Control - 23.3 ± 9.7; P1 - 25.9 ± 9.5; P2 - 29.1 ± 13.8; P3 - 28.0 ± 11.7; P4 - 30.2 ± 11.2) exercises. Similar results between protocols were also noted for myoelectric activity (p > 0.05) and rating of perceived exertion (p > 0.05). Therefore, the results of the present study indicated no deleterious effects on performance, myoelectric activity and rating of perceived exertion responses during lower body resistance training.

Keywords: self myofascial release, foam rolling, electromyography, resistance training

Procedia PDF Downloads 214
39138 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. Also, the data that are complete might be strongly distorted. In addition, the reason that data are missing might itself contain information, which is ignored under that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
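
The evaluate-by-masking step described above can be sketched with scikit-learn's IterativeImputer standing in for the paper's algorithms; the subscription fields, the 20% masking rate, and the RMSE criterion are illustrative assumptions.

```python
# Hedged sketch of the imputation-evaluation loop: artificially remove entries,
# impute them, and compare to the held-back truth. The search-subscription
# fields and data are invented, not the Swiss platform data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
# columns: rooms, max_price (kCHF), area_m2
complete = np.column_stack([
    rng.integers(1, 6, 1000),
    rng.normal(800, 200, 1000),
    rng.normal(90, 25, 1000),
]).astype(float)

# Randomly mask 20% of the entries to imitate missing subscription fields.
mask = rng.random(complete.shape) < 0.2
observed = complete.copy()
observed[mask] = np.nan

imputer = IterativeImputer(random_state=0)
imputed = imputer.fit_transform(observed)

rmse = np.sqrt(np.mean((imputed[mask] - complete[mask]) ** 2))
print(f"RMSE on artificially removed entries: {rmse:.2f}")
```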

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 271
39137 Active Deformable Micro-Cutters with Nano-Abrasives

Authors: M. Pappa, C. Efstathiou, G. Livanos, P. Xidas, D. Vakondios, E. Maravelakis, M. Zervakis, A. Antoniadis

Abstract:

The choice of cutting tools in manufacturing processes is an essential parameter on which the required manufacturing time, the consumed energy and the cost all depend. If the number of tool changes could be minimized or even eliminated by using a single convex tool providing multiple profiles, then significant savings in time and energy, as well as in tool cost, would be achieved. A typical machine contains a variety of tools in order to deal with different curvatures and material removal rates. In order to minimize the required cutting tool changes, Actively Deformable micro-Cutters (ADmC) will be developed. The design of the Actively Deformable micro-Cutters will be based on the same cutting technique and mounting method as those of typical cutters.

Keywords: deformable cutters, cutting tool, milling, turning, manufacturing

Procedia PDF Downloads 441
39136 Unveiling Cardiovascular and Behavioral Effects of Aerobic Exercise: Insights from Morocco

Authors: Ahmed Boujdad

Abstract:

Morocco, situated in North Africa and celebrated for its diverse landscapes and vibrant cultural heritage, confronts evolving challenges in the realms of cardiovascular well-being and psychological health. In this context, this article aims to highlight distinctive findings stemming from Moroccan research concerning the effects of aerobic exercise on cardiovascular physiology and psychological states. The discourse will encompass a wide array of subjects, including adaptations in cardiac function due to exercise, management of blood pressure, and vascular well-being tailored to the Moroccan populace. A prominent focal point of the article will be the exploration of the interplay between aerobic exercise and Moroccan behavioral tendencies and socio-cultural influences. The research will delve into the correlations between consistent physical activity and its potential to mitigate stress, anxiety, and depression within the Moroccan framework. This inquiry will also extend to examining how exercise contributes to strengthening the societal tapestry of Morocco, fostering community involvement, and cultivating a sense of holistic wellness.

Keywords: kinesiology, cardiovascular, event-related potential, physical activity

Procedia PDF Downloads 46
39135 Design of Effective Decoupling Point in Build-To-Order Systems: Focusing on Trade-Off Relation between Order-To-Delivery Lead Time and Work in Progress

Authors: Zhiyong Li, Hiroshi Katayama

Abstract:

Since the 1990s, e-commerce and internet business have grown gradually across the world, and customers tend to express their demand attributes in terms of specification requirements on parts, components, product structure, etc. This paper deals with designing effective decoupling points for build-to-order systems in an e-commerce environment, which can be realized through analysis of the trade-off relation between two major criteria: customer order lead time and the value of work in progress. These KPIs are critical for a successful BTO business, namely time-based service effectiveness in coping with customer requirements for the first issue and cost effectiveness with risk-averse operations for the second. The approach of this paper consists of an investigation of successful businesses operating the BTO scheme, development of a manufacturing model of this scheme, quantitative evaluation of the proposed models by calculating the two KPI values under various decoupling point distributions, and discussion of the results produced by each pattern of decoupling point distribution, where some cases provide Pareto-optimal performance. To extract the relevant trade-off relation between the considered KPIs in the two-dimensional performance space, logic developed in earlier research, i.e. by Katayama and Fonseca, is applied. The obtained characteristics are evaluated as useful information for managing BTO manufacturing businesses.

Keywords: build-to-order (BTO), decoupling point, e-commerce, order-to-delivery lead time (ODLT), work in progress (WIP)

Procedia PDF Downloads 311
39134 Microwave Assisted Extraction (MAE) of Castor Oil from Castor Bean

Authors: Ghazi Faisal Najmuldeen, Rosli Mohd Yunus, Nurfarahin Bt Harun, Mardhiana Binti Ismail

Abstract:

Microwave extraction has attracted great interest among researchers. The main virtues of the microwave technique are cost effectiveness, time saving and a simple handling procedure. Castor beans were chosen because of their high fatty acid content, especially ricinoleic acid. The purpose of this research is to extract castor oil by microwave-assisted extraction (MAE) using ethanol as the solvent, to investigate the influence of extraction time on castor oil yield, and to characterize the main composition of the produced castor oil by GC-MS. It was found that there is a direct dependence between the oil yield and the extraction time, as the yield increases from 45% to 58% when the time increases from 10 min to 60 min. The major components of castor oil detected by GC-MS were ricinoleic acid, linoleic acid and oleic acid.

Keywords: microwave assisted extraction (MAE), castor oil, ricinoleic acid, linoleic acid

Procedia PDF Downloads 488
39133 Overview of Multi-Chip Alternatives for 2.5 and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With transistor sizes gradually approaching the physical limit, the persistence of Moore's Law is challenged by the demands of high numerical aperture (high-NA) lithography equipment and by other issues such as short-channel effects. In the context of the ever-increasing technical requirements of portable devices and high-performance computing, relying on the continuation of the law to enhance chip density will no longer support the prospects of the electronics industry. Weighing a chip's power consumption-performance-area-cost-cycle time to market (PPACC) is the updated benchmark driving the evolution of advanced nanometer (nm) wafer nodes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through-Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on updated transistor structures and technology nodes. The author concludes that multi-chip solutions for 2.5D and 3D IC packaging are feasible means of prolonging Moore's Law.

Keywords: moore’s law, high numerical aperture, power consumption-performance-area-cost-cycle time to market, 2.5 and 3D- very-large-scale integration, packaging, through silicon via

Procedia PDF Downloads 106
39132 A Cellular Automaton Model Examining the Effects of Oxygen, Hydrogen Ions, and Lactate on Early Tumour Growth

Authors: Maymona Al-Husari, Craig Murdoch, Steven Webb

Abstract:

Some tumours are known to exhibit an extracellular pH that is more acidic than the intracellular pH, creating a 'reversed pH gradient' across the cell membrane, and this has been shown to affect their invasive and metastatic potential. Tumour hypoxia also plays an important role in tumour development and has been directly linked to both tumour morphology and aggressiveness. In this paper, we present a hybrid mathematical model of intracellular pH regulation that examines the effect of oxygen and pH on tumour growth and morphology. In particular, we investigate the impact of pH regulatory mechanisms on the cellular pH gradient and tumour morphology. Analysis of the model shows that low activity of the Na+/H+ exchanger or a high rate of anaerobic glycolysis can give rise to a 'fingering' tumour morphology, and that high activity of the lactate/H+ symporter can result in a reversed transmembrane pH gradient across a large portion of the tumour mass. Also, the reversed pH gradient is spatially heterogeneous within the tumour, with a normal pH gradient observed within an intermediate growth layer, that is, the layer between the proliferative inner layer and the outermost layer of the tumour.
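
A toy hybrid cellular automaton in this spirit (cells proliferate where oxygen is plentiful and die where it is scarce, while oxygen diffuses and is consumed) is sketched below; the grid size, thresholds, and rates are invented, and the pH and lactate fields of the actual model are omitted.

```python
# Toy hybrid cellular automaton for avascular tumour growth coupled to an
# oxygen field. All thresholds and rates are invented, not calibrated values.
import numpy as np

rng = np.random.default_rng(0)
N = 50
cells = np.zeros((N, N), dtype=int)
cells[N // 2, N // 2] = 1                      # seed tumour cell
oxygen = np.ones((N, N))

def diffuse(field, D=0.2):
    return field + D * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                        + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)

for step in range(200):
    oxygen = diffuse(oxygen)
    oxygen[0, :] = oxygen[-1, :] = oxygen[:, 0] = oxygen[:, -1] = 1.0   # vessel boundary
    oxygen -= 0.05 * cells                     # consumption by live cells
    oxygen = oxygen.clip(0, 1)

    for i, j in np.argwhere(cells == 1):
        if oxygen[i, j] < 0.1:
            cells[i, j] = 0                    # hypoxic death
        elif oxygen[i, j] > 0.5 and rng.random() < 0.3:
            di, dj = rng.integers(-1, 2, size=2)
            ni, nj = (i + di) % N, (j + dj) % N
            if cells[ni, nj] == 0:
                cells[ni, nj] = 1              # proliferation into a free neighbour

print("tumour cell count:", cells.sum())
```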

Keywords: acidic pH, cellular automaton, ebola, tumour growth

Procedia PDF Downloads 319