Search results for: time series prediction
19491 Airy Wave Packet for a Particle in a Time-Dependent Linear Potential
Authors: M. Berrehail, F. Benamira
Abstract:
We study the quantum motion of a particle in the presence of a time-dependent linear potential using an operator invariant that is quadratic in p and linear in q within the framework of the Lewis-Riesenfeld invariant theory. The special invariant operator proposed in this work is demonstrated to be a Hermitian operator which has an Airy wave packet as its eigenfunction.
Keywords: airy wave packet, invariant, time-dependent linear potential, unitary transformation
Procedia PDF Downloads 496
19490 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries
Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand
Abstract:
Introduction: Limited-facility situations require allocating the most available resources to the most casualties. Accordingly, Traumatic Brain Injury (TBI) is one condition that may require transporting the patient as soon as possible. In a mass casualty event, such decisions are hard to make when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. Therefore, we aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), within-a-month (7.51 ± 1.30), and within-three-months (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002) and white blood cell count (WBC). Among imaging signs and trauma-related symptoms in univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in multivariable analysis. Conclusion: Our study identified predictive factors that could help decide which casualty should be transported to a trauma center first. According to the current findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
Keywords: field, Glasgow outcome score, prediction, traumatic brain injury
Procedia PDF Downloads 79
19489 Fast Terminal Synergetic Converter Control
Authors: Z. Bouchama, N. Essounbouli, A. Hamzaoui, M. N. Harmas
Abstract:
A new robust finite-time synergetic controller is presented, based on the recently developed synergetic control methodology and a terminal attractor technique. A Fast Terminal Synergetic Control (FTSC) is proposed for controlling a DC-DC buck converter. Unlike Synergetic Control (SC) and sliding mode control, the proposed control scheme has the characteristics of finite-time convergence and chattering-free behavior. Simulation of stabilization and reference tracking for buck converter systems illustrates the effectiveness of the approach, while stability is assured in the Lyapunov sense and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability.
Keywords: dc-dc buck converter, synergetic control, finite time convergence, terminal synergetic control, fast terminal synergetic control, Lyapunov
Procedia PDF Downloads 465
19488 Molecular Design and Synthesis of Heterocycles Based Anticancer Agents
Authors: Amna J. Ghith, Khaled Abu Zid, Khairia Youssef, Nasser Saad
Abstract:
Background: The multikinase and vascular endothelial growth factor (VEGF) receptor inhibitors interrupt the pathway by which angiogenesis becomes established and promulgated, resulting in the inadequate nourishment of metastatic disease. VEGFR-2 has been the principal target of anti-angiogenic therapies. We disclose new thienopyrimidines as inhibitors of VEGFR-2, designed by a molecular modeling approach, with increased synergistic activity and decreased side effects. Purpose: 2-substituted thienopyrimidines are designed and synthesized with anticipated anticancer activity, based on an in silico molecular docking study that supports the initial pharmacophoric hypothesis, showing the same binding mode of interaction at the ATP-binding site of VEGFR-2 (PDB 2QU5) with a high docking score. Methods: A series of compounds was designed using Discovery Studio 4.1/CDOCKER with a rationale that mimics the pharmacophoric features present in the reported active compounds targeting VEGFR-2. An in silico ADMET study was also performed to validate the bioavailability of the newly designed compounds. Results: The compounds to be synthesized showed interaction energy comparable to, or within the range of, the benzimidazole inhibitor ligand when docked with VEGFR-2. The ADMET study showed comparable results; most of the compounds showed absorption within the (95-99) zone, varying according to the different substituents attached to the thienopyrimidine ring system. Conclusions: A series of 2-substituted thienopyrimidines is to be synthesized with anticipated anticancer activity, following the docking study of the structural requirements for the design of VEGFR-2 inhibitors, which can act as powerful anticancer agents.
Keywords: docking, discovery studio 4.1/CDOCKER, heterocycles based anticancer agents, 2-substituted thienopyrimidines
Procedia PDF Downloads 249
19487 Modelling Hydrological Time Series Using Wakeby Distribution
Authors: Ilaria Lucrezia Amerise
Abstract:
The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability-weighted moments (PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation situations. The interest lies in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution
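As a sketch of what likelihood estimation through a quantile function involves, the fragment below evaluates the Wakeby log-likelihood by numerically inverting Q(u); the parameter values and the sample are illustrative, not fitted to any precipitation record.

```python
# A minimal sketch (not the paper's code) of likelihood evaluation for a
# random variable defined through its quantile function, using the
# 5-parameter Wakeby distribution. Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

def wakeby_Q(u, xi, a, b, g, d):
    """Wakeby quantile function Q(u), 0 <= u < 1."""
    return xi + (a / b) * (1 - (1 - u) ** b) - (g / d) * (1 - (1 - u) ** -d)

def wakeby_q(u, xi, a, b, g, d):
    """Quantile density q(u) = dQ/du; the pdf at x = Q(u) is 1/q(u)."""
    return a * (1 - u) ** (b - 1) + g * (1 - u) ** (-d - 1)

def log_likelihood(x, params):
    """Sum of log f(x_i) = -log q(u_i), with u_i found by inverting Q."""
    ll = 0.0
    for x_obs in x:
        u = brentq(lambda u: wakeby_Q(u, *params) - x_obs, 1e-12, 1 - 1e-12)
        ll -= np.log(wakeby_q(u, *params))
    return ll

# Inverse-transform sampling: the heavy right tail comes from the -delta term.
rng = np.random.default_rng(0)
params = (0.0, 5.0, 0.9, 1.0, 0.2)          # xi, alpha, beta, gamma, delta
sample = wakeby_Q(rng.uniform(size=500), *params)
print(log_likelihood(sample, params))
```

Maximizing this log-likelihood over the five parameters (e.g. with scipy.optimize.minimize) is the quantile-function analogue of ordinary maximum likelihood that the abstract discusses.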
Procedia PDF Downloads 145
19486 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies were carried out as an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process of breast cancer. However, mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases, which may help to improve the prediction accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects relations between PT and MTS; 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model and consists of a system of deterministic nonlinear and linear equations; it corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, while the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of survival in BC and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of IV (T1-4N0-3M1) stage with (N1-3) or without regional metastases in lymph nodes (N0); b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
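The doubling-count and doubling-time quantities the model reports can be illustrated with elementary exponential-growth arithmetic. The sketch below is not the CoM-IV equations themselves; the single-cell volume and the 10 mm detection threshold are assumed values for illustration.

```python
# A back-of-the-envelope sketch of the exponential-growth quantities the
# CoM-IV model works with (doubling counts and volume doubling time).
import math

def sphere_volume(diameter_mm):
    return (math.pi / 6.0) * diameter_mm ** 3

V_CELL = 1e-6                        # assumed single-cell volume, mm^3
V_DETECT = sphere_volume(10.0)       # assumed: a 10 mm tumour becomes "visible"

# Number of doublings in the 'non-visible period' (one cell -> detectable)
n_hidden = math.log2(V_DETECT / V_CELL)

# Volume doubling time (days) from two serial size measurements:
# V(t) = V0 * 2^(t/DT)  =>  DT = dt * ln(2) / ln(V2/V1)
def doubling_time(v1, v2, dt_days):
    return dt_days * math.log(2) / math.log(v2 / v1)

dt = doubling_time(sphere_volume(10.0), sphere_volume(14.0), 90.0)
print(f"hidden doublings ~ {n_hidden:.1f}, DT ~ {dt:.0f} days")
```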
Procedia PDF Downloads 337
19485 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary co-variates" that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increase in available spatial co-variates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level.
All this progress is scientifically stimulating and also promising, providing tools to improve and monitor soil quality at the country, EU and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
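A minimal sketch of the calibrate-then-generalize DSM workflow the review describes, using a random forest on synthetic covariates; the covariate names (elevation, precipitation, NDVI) and the soil organic carbon target are placeholders, not any national product's actual pipeline.

```python
# Calibrate an ML model on point observations + co-located covariates,
# validate it, then predict on grid cells where the property is unknown.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 800  # soil profile observations
covariates = np.column_stack([
    rng.uniform(0, 1200, n),      # elevation from a DEM (relief factor)
    rng.uniform(400, 1500, n),    # mean annual precipitation (climate factor)
    rng.uniform(0, 1, n),         # NDVI from remote sensing (organisms factor)
])
soc = 20 + 0.01 * covariates[:, 1] - 0.005 * covariates[:, 0] + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
model.fit(covariates, soc)

# Cross-validation is one of the validation techniques mentioned above.
print("10-fold CV R^2:", cross_val_score(model, covariates, soc, cv=10).mean())

# Generalize onto grid cells where the soil property is unknown
grid = np.column_stack([rng.uniform(0, 1200, 5),
                        rng.uniform(400, 1500, 5),
                        rng.uniform(0, 1, 5)])
print("predicted SOC:", model.predict(grid))
```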
Procedia PDF Downloads 188
19484 MiRNA Regulation of CXCL12β during Inflammation
Authors: Raju Ranjha, Surbhi Aggarwal
Abstract:
Background: Inflammation plays an important role in infectious and non-infectious diseases. MiRNAs are also reported to play a role in inflammation and associated cancers. The chemokine CXCL12 is likewise known to play a role in inflammation and various cancers. The CXCL12/CXCR4 chemokine axis is involved in the pathogenesis of IBD, especially UC. Supplementation of CXCL12 induces homing of dendritic cells to the spleen and enhances control of the Plasmodium parasite in BALB/c mice. We looked at the regulation of CXCL12β by miRNA in UC. Prolonged inflammation of the colon in UC patients increases the risk of developing colorectal cancer. We examined the expression differences of CXCL12β and its targeting miRNA in the cancer-susceptible area of the colon of UC patients. Aim: The aim of this study was to find out how CXCL12β expression is regulated by miRNA in inflammation. Materials and Methods: Biopsy samples and blood samples were collected from UC patients and non-IBD controls. mRNA expression was analyzed using microarray and real-time PCR. CXCL12β-targeting miRNAs were identified using online target prediction tools. Expression of CXCL12β in blood samples and cell line supernatant was analyzed using ELISA. The miRNA target was validated using a dual luciferase assay. Results and conclusion: We found that miR-200a regulates the expression of CXCL12β in UC. Expression of CXCL12β was increased in the cancer-susceptible part of the colon and expression of its targeting miRNA was decreased in the same part of the colon. miR-200a regulates CXCL12β expression in inflammation and may be an important therapeutic target in inflammation-associated cancer.
Keywords: inflammation, miRNA, regulation, CXCL12
Procedia PDF Downloads 279
19483 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read in the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly, because the use of advanced technologies improves, at the same time, the efficiency and efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 96
19482 An Econometric Analysis of the Impacts of Inflation on the Economic Growth of South Africa
Authors: Gisele Mah, Paul Saah
Abstract:
Rising rates of inflation hinder economic growth in developing nations. Hence, this study investigated the effects of inflation rates on the economic growth of South Africa using secondary time series data from 1987 to 2022. The main objectives of this study were to investigate the long-run relationship between inflation and economic growth, and to determine the direction of causality between these two variables. The study utilized the Autoregressive Distributed Lag (ARDL) bounds test of co-integration to investigate whether there is a long-run relationship between inflation and economic growth. The pairwise Granger causality approach was employed for the second objective, the direction of causality. The study discovered only one co-integrating relationship among our variables, between inflation and economic growth. The results showed a negative and significant relationship between inflation and economic growth, and a positive and significant relationship between economic growth and the exchange rate. Interest rates were found to be negative and insignificant in explaining economic growth. The study also established that inflation Granger-causes economic growth, measured as GDP. Similarly, the study discovered that inflation Granger-causes exchange rates. Therefore, the study recommends that inflation be decreased in South Africa in order for economic growth to increase. Conversely, this study recommends that South Africa increase its exchange rate in order for economic growth to increase as well.
Keywords: inflation rate, economic growth, South Africa, autoregressive distributed lag model
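A minimal sketch of the two econometric steps named above, on synthetic annual series standing in for the 1987-2022 data. `grangercausalitytests` is a standard statsmodels call; the `UECM.bounds_test` usage is assumed from recent statsmodels (>= 0.13) and its API may differ by version.

```python
# Synthetic stand-ins for the study's series; not the actual SA data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.ardl import UECM   # assumed available: statsmodels >= 0.13

rng = np.random.default_rng(1)
n = 36                                   # annual observations, 1987-2022
inflation = 6 + np.cumsum(rng.normal(0, 0.5, n))
gdp_growth = 3.0 - 0.2 * inflation + rng.normal(0, 0.4, n)
df = pd.DataFrame({"gdp_growth": gdp_growth, "inflation": inflation})

# ARDL/UECM bounds test for a long-run (co-integrating) relationship:
# compare the F-statistic against the I(0)/I(1) critical bounds.
uecm_res = UECM(df["gdp_growth"], lags=1, exog=df[["inflation"]], order=1).fit()
print(uecm_res.bounds_test(case=3))

# Pairwise Granger causality: does inflation Granger-cause GDP growth?
grangercausalitytests(df[["gdp_growth", "inflation"]], maxlag=2)
```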
Procedia PDF Downloads 55
19481 Determination of Direct Solar Radiation Using Atmospheric Physics Models
Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote
Abstract:
This work set out to determine direct solar radiation precisely using atmospheric physics models, since the accurate prediction of solar radiation is necessary and useful for solar energy applications as well as atmospheric research. Models and techniques for calculating regional direct solar radiation are both challenging and indispensable when instrumental measurement is unavailable. The investigation was mathematically governed by six astronomical parameters, i.e., declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e., air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of clear-sky solar radiation determination were analyzed and validated with three statistical tests: Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R²). The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven continuous days representing both seasons were considered, with MBD, RMSD and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, corresponding to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition
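A minimal sketch of the six astronomical parameters listed above, computed with standard textbook formulas (Cooper's declination equation, the 15°-per-hour hour angle, and a simple secant air mass); the day of year and evaluation time are illustrative choices for the Bangna site.

```python
# Standard clear-sky astronomical quantities for a given site and day.
import math

I_SC = 1367.0  # solar constant, W/m^2

def declination(n):                      # delta, degrees (Cooper, 1969)
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def eccentricity(n):                     # E0, dimensionless correction
    return 1 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))

def hour_angle(solar_time_h):            # omega, degrees (15 deg per hour)
    return 15.0 * (solar_time_h - 12.0)

def cos_zenith(lat_deg, n, solar_time_h):  # cos(theta_z)
    lat, dec = math.radians(lat_deg), math.radians(declination(n))
    w = math.radians(hour_angle(solar_time_h))
    return (math.sin(lat) * math.sin(dec)
            + math.cos(lat) * math.cos(dec) * math.cos(w))

def extraterrestrial(n, lat_deg, solar_time_h):  # Iso on a horizontal plane
    return I_SC * eccentricity(n) * max(cos_zenith(lat_deg, n, solar_time_h), 0)

def air_mass(lat_deg, n, solar_time_h):  # mr, simple secant approximation
    return 1.0 / max(cos_zenith(lat_deg, n, solar_time_h), 1e-6)

# Solar noon at Bangna (13.67 N) on 15 January (n = 15)
print(extraterrestrial(15, 13.67, 12.0), air_mass(13.67, 15, 12.0))
```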
Procedia PDF Downloads 414
19480 A Real-time Classification of Lying Bodies for Care Application of Elderly Patients
Authors: E. Vazquez-Santacruz, M. Gamboa-Zuniga
Abstract:
In this paper, we show a methodology for the classification of bodies in a lying state using HOG descriptors and pressure sensors positioned in a matrix form (14 x 32 sensors) on the surface where the bodies lie; classification is performed in real time. Our system is embedded in a care robot that can assist the elderly patient and the medical staff around them to achieve a better quality of life in and out of hospitals. Due to current technology, a limited number of sensors is used, which results in a low-resolution data array that is treated as an image of 14 x 32 pixels. Our work considers the problem of human posture classification with little information (few sensors), applying digital processing to expand the original sensor data and thus obtain more significant data for the classification; however, this is done with low-cost algorithms to ensure real-time execution.
Keywords: real-time classification, sensors, robots, health care, elderly patients, artificial intelligence
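A minimal sketch of the pipeline described above: upsample the coarse 14 x 32 pressure map, extract HOG descriptors, and classify the lying posture. The frames, labels and classifier choice below are placeholder assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def frame_to_features(frame_14x32):
    """Expand the low-resolution pressure array, then compute HOG."""
    img = resize(frame_14x32, (56, 128), anti_aliasing=True)  # cheap upsampling
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

rng = np.random.default_rng(7)
frames = rng.random((200, 14, 32))            # stand-in pressure maps
labels = rng.integers(0, 4, 200)              # e.g. supine/prone/left/right

X = np.array([frame_to_features(f) for f in frames])
clf = LinearSVC(max_iter=5000).fit(X, labels) # lightweight, real-time friendly
print(clf.predict(X[:5]))
```

A linear classifier on HOG features keeps per-frame inference cheap, in line with the paper's constraint of low-cost algorithms for real-time execution.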
Procedia PDF Downloads 870
19479 Treatment of High Concentration Cutting Fluid Wastewater by Ceramic Membrane Bioreactor
Authors: Kai-Shiang Chang, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu
Abstract:
In recent years, membrane bioreactors (MBR) have been widely utilized, as they can effectively replace the conventional activated sludge process (CAS). The membrane bioreactor is found to be a more effective technology than the conventional activated sludge process and other advanced membrane separation techniques. Additionally, the MBR offers excellent control of sludge retention time (SRT) and hydraulic retention time (HRT) and is conducive to the retention of a high concentration of sludge biomass. The MBR can effectively reduce the footprint in terms of area and omit the secondary processing steps of the conventional activated sludge process. In current membrane technology, ceramic membranes show strong resistance to acids and bases and are more suitable than polymeric membranes for backwashing and chemical cleaning. This study addresses the treatment of cutting fluid wastewater: cutting fluids are widely used in cutting equipment, but the resulting wastewater is very difficult to treat. In this study, a ceramic membrane was combined with an MBR system to treat the cutting fluid wastewater. Different kinds of chemical coagulants were utilized for pretreatment in order to obtain the supernatant, and this wastewater (supernatant) was then treated by the MBR process. Ceramic membranes have three further advantages: high mechanical strength, chemical resistance, and reusability. During the experiment, backwashing was applied at intervals of 10 minutes in order to avoid fouling of the membrane. In this study, the pretreatment achieved a Chemical Oxygen Demand (COD) removal efficiency of 71-86% and an oil removal efficiency of 83-92%, suggesting that it is quite an effective methodology for reducing COD and oil concentration. Finally, in the MBR system, when the HRT exceeded 7.5 hours, the COD removal efficiency was 87-93%, and 100% oil removal efficiency could be achieved. The coagulation test series showed good oil and COD removal efficiency for the treatment of wastewater containing cutting oil. The results also showed that the MBR system could reduce the oil content to less than 1 mg/L when the influent oil concentration was 126 mg/L. Therefore, this paper demonstrates the performance of a membrane bioreactor utilizing a ceramic membrane for the treatment of cutting fluid wastewater.
Keywords: membrane bioreactor, cutting fluid, oil, chemical oxygen demand
Procedia PDF Downloads 319
19478 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)
Authors: Salem Alsanusi, Loubna Bentaher
Abstract:
Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, and accounts for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is quite time-consuming while quality control must still be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. Data used for this study were obtained from experimental schemes at the University of Benghazi, civil engineering department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, such as cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required concrete compressive strength from early ages.
Keywords: mix proportioning, response surface methodology, compressive strength, optimal design
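A minimal sketch of the kind of quadratic (second-order) response-surface model RSM produces, here emulated with a polynomial regression; the mix data are synthetic placeholders, not the 114 Benghazi data sets, and this omits the dual-response/POE analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
n = 114
wc_ratio = rng.uniform(0.4, 0.7, n)           # water/cement ratio
f7 = rng.uniform(10, 35, n)                   # 7-day strength, MPa (early age)
X = np.column_stack([wc_ratio, f7])
f28 = 1.45 * f7 - 20 * (wc_ratio - 0.4) + rng.normal(0, 1.5, n)

# Degree-2 polynomial = full second-order RSM model
# (linear, interaction, and pure quadratic terms).
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, f28)
print("R^2 =", rsm.score(X, f28))
print("predicted 28-day strength:", rsm.predict([[0.5, 25.0]]))
```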
Procedia PDF Downloads 271
19477 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost and emissions reduction pull in opposite directions. Another important aspect of the transportation decision is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematically based analytical model for transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. To incorporate lead time variability into the model, two identically normally distributed random variables are included: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stockouts due to lead time variability. A set of modes is available for transport at each node; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model. We found that the chance of a stockout during lead time is higher under greater variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning the chance of a stockout is zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order to reduce penalty cost; otherwise, rail and road transport are the most preferred modes. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, full truckload strategy, and demand consolidation strategy.
Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
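A rough numerical sketch of the per-mode cost comparison this kind of model makes; the cost rates, emission factors and lead-time parameters are invented for illustration, and lead-time demand is treated as normal, as in the paper.

```python
import math
from scipy.stats import norm

D = 1000.0               # annual demand, units
modes = {                # (unit transport cost, kg CO2/unit, mean LT, sd LT) - assumed
    "air":   (9.0, 5.0,  2.0, 0.2),
    "road":  (4.0, 1.2,  6.0, 1.0),
    "rail":  (3.0, 0.6,  9.0, 1.5),
    "water": (2.0, 0.3, 14.0, 3.0),
}
CARBON_PRICE = 0.05              # $/kg CO2
HOLD, PENALTY = 0.02, 6.0        # $/unit/day holding, $/unit shortage penalty
d_mean, d_sd = D / 365.0, 1.0    # daily demand statistics
Z = norm.ppf(0.95)               # assumed 95% service level

def total_cost(c_t, e, lt_mu, lt_sd):
    # sd of lead-time demand grows with both demand and lead-time variability
    sd_ltd = math.sqrt(lt_mu * d_sd**2 + d_mean**2 * lt_sd**2)
    safety_stock = Z * sd_ltd
    # unit normal loss: expected shortage per cycle grows with sd_ltd
    exp_short = sd_ltd * (norm.pdf(Z) - Z * (1 - norm.cdf(Z)))
    return (D * c_t                                  # transportation
            + D * e * CARBON_PRICE                   # emissions cost
            + HOLD * 365 * (d_mean * lt_mu / 2 + safety_stock)  # holding
            + PENALTY * exp_short * 12)              # ~12 cycles/year

for name, p in modes.items():
    print(f"{name:>5}: {total_cost(*p):,.0f} $/year")
```

Rerunning with a larger lead-time standard deviation shows how variability inflates safety stock and expected shortages, which is the trade-off the sensitivity analysis explores.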
Procedia PDF Downloads 438
19476 The Impact of Community Settlement on Leisure Time Use and Body Composition in Determining Physical Lifestyles among Women
Authors: Mawarni Mohamed, Sharifah Shahira A. Hamid
Abstract:
Leisure time is an important component in offsetting people's sedentary lifestyles. Women tend to benefit from leisure activities not only to reduce stress but also as opportunities for well-being and self-satisfaction. This study was conducted to investigate body composition and leisure time use among women in Selangor, and the influence of community settlement on them. A total of 419 women aged 18-65 years were selected to participate in this study. Descriptive statistics, t-tests and ANOVA were used to analyze the level of physical activity, and the relationship between leisure-time use and body composition was examined to analyze physical lifestyles. The results showed that women with normal body composition seem to be involved in more passive activities than women with lower weight gain and obesity. Thus, the study recommends that the government and other health and recreational agencies develop more places and activities suited to women's leisure preferences in their community settlements, so that they become more interested in engaging in more active recreational and physical activities.
Keywords: body composition, community settlement, leisure time, physical lifestyles
Procedia PDF Downloads 455
19475 Improving the Residence Time of a Rectangular Contact Tank by Varying the Geometry Using Numerical Modeling
Authors: Yamileth P. Herrera, Ronald R. Gutierrez, Carlos, Pacheco-Bustos
Abstract:
This research aims at the numerical modeling of a rectangular contact tank in order to improve its hydrodynamic behavior and the retention time of the water to be treated with the disinfecting agent. The methodology includes a hydraulic analysis of the tank to observe fluid velocities, which will reveal low-velocity areas that may promote incubation of pathogenic agents, as well as high-velocity areas, which may decrease the optimal contact time between the disinfecting agent and the microorganisms to be eliminated. Based on the results of the numerical model, the efficiency of the tank under the geometric and hydraulic conditions considered will be analyzed. This allows the performance of the tank to be improved before starting a construction process, thus avoiding unnecessary costs.
Keywords: contact tank, numerical models, hydrodynamic modeling, residence time
Procedia PDF Downloads 174
19474 Human-factor and Ergonomics in Bottling Lines
Authors: Parameshwaran Nair
Abstract:
Filling and packaging lines for bottling beverages into glass, PET or aluminum containers require specialized expertise and a particular configuration of equipment: filler, warmer, labeller, crater/recrater, shrink packer, carton erector, carton sealer, date coder, palletizer, etc. Over time, the packaging industry has evolved from manually operated single-station machines to highly automated high-speed lines. Human factors and ergonomics have gained significant consideration in the course of this transformation. A prerequisite for such bottling lines, irrespective of container type and size, is suitability for multi-format applications. They should also be able to handle format changeovers with minimal adjustment, and offer variable capacity and speeds to provide great flexibility in managing accumulation times as a function of production characteristics. In terms of layout as well, they should demonstrate flexibility for operator movement and access to machine areas for maintenance. Packaging technology during the past few decades has risen to these challenges through a series of major breakthroughs interspersed with periods of refinement and improvement. The milestones are many and varied and are described briefly in this paper. In order to give a brief understanding of human factors and ergonomics in modern packaging lines, this paper highlights the various technologies, design considerations and statutory requirements of packaging equipment for the different types of containers used in India.
Keywords: human-factor, ergonomics, bottling lines, automated high-speed lines
Procedia PDF Downloads 441
19473 The Effectiveness of the Counselling Module in Counseling Interventions for Low Performance Employees
Authors: Hazaila Hassan
Abstract:
This research discusses the effectiveness of the Psynnova i-Behaviour Modification Technique (iBMT) module in changing the behaviour of low-performing employees. The purpose of the study is to examine the effectiveness of the Psynnova Module in changing behaviour through five factors among low-performing employees in the public sector. The five main factors/constructs were cognitive enhancement and rationality, emotional stability, attitude alignment and adjustment, social skills development, and psycho-spirituality enhancement. In this research, the five main constructs will be used to indicate the behaviour change of employees after attending the Psynnova Program, which uses the Psynnova iBMT Module. The respondents are among those who have low annual performance scores according to annual performance value reports and have gone through various stages before being required to attend the Psynnova Program. The research plan is also to critically examine and understand behaviour change among low-performing employees through the five dimensions of the Psynnova Module. A total of 50 respondents will be purposively sampled for this research. The study will use a one-group pre- and post-test experimental method with a time series design. SPSS software version 22.0 will be used to analyse the data. It is hoped that this research will reveal changes in behaviour across the five factors among respondents after attending the Psynnova Programme. The findings of this study will also be used to propose the best behaviour-change framework to assist psychologists in observing the changes that occur in respondents.
Keywords: five dimensions of behaviour change, among adults, low performance, module effectiveness
Procedia PDF Downloads 175
19472 Statistical Analysis of Extreme Flow (Regions of Chlef)
Authors: Bouthiba Amina
Abstract:
The estimation of statistics related to precipitation covers a vast domain, which poses numerous challenges to meteorologists and hydrologists. Sometimes it is necessary to approximate extreme events, and their return periods, for sites where there are few or no data. The search for a frequency model of the heights of daily rains is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amounts of precipitation in past years. The best-known and most common approach is the statistical one. It consists of looking for a law of probability that best fits the values observed for the random variable 'daily maximal rain', after a comparison of various probability laws and estimation methods by means of goodness-of-fit tests. Therefore, a frequency analysis of the annual series of daily maximal rains was carried out on data from 54 rain-gauge stations of the high and middle basin. This choice involved five laws usually applied to the study and analysis of frequent maximal daily rains. The chosen period is from 1970 to 2013, and it was used to forecast quantiles. The laws used are the three-parameter generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and Log-Pearson III laws. In Algeria, Gumbel's law has long been used to estimate the quantiles of maximum flows; here we verify this practice and choose the most reliable law.
Keywords: return period, extreme flow, statistics laws, Gumbel, estimation
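A minimal sketch of the frequency analysis described above: fit candidate laws to a series of annual maximum daily rainfalls, test adequacy, and read off the quantile for a chosen return period. The series is synthetic, not the Chlef data, and the Kolmogorov-Smirnov test stands in for whichever adequacy tests the study used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
annual_max = stats.gumbel_r.rvs(loc=40, scale=12, size=44, random_state=rng)

candidates = {
    "GEV (3 param.)":        stats.genextreme,
    "Gumbel (2 param.)":     stats.gumbel_r,
    "log-normal (2 param.)": stats.lognorm,
    "Pearson III":           stats.pearson3,
}

T = 100.0  # return period in years; quantile at F = 1 - 1/T
for name, law in candidates.items():
    params = law.fit(annual_max)
    ks = stats.kstest(annual_max, law.name, args=params)  # adequacy test
    x_T = law.ppf(1 - 1 / T, *params)
    print(f"{name:22s} KS p = {ks.pvalue:.2f}  x_100 = {x_T:.1f} mm")
```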
Procedia PDF Downloads 81
19471 The Influence of Alvar Aalto on the Early Work of Álvaro Siza
Authors: Eduardo Jorge Cabral dos Santos Fernandes
Abstract:
The expression 'Porto School', usually associated with an educational institution, the School of Fine Arts of Porto, was first applied in the sense of an architectural trend by Nuno Portas in a text published in 1983. The expression is used to characterize a set of works by Porto architects in which common elements are found, namely the desire to reuse the languages and forms of the German and Dutch rationalism of the twenties, using the work of Alvar Aalto as a mediation for the reinterpretation of these models. In the same year, in a text published in Jornal de Letras, Artes e Ideias, Álvaro Siza describes the Finnish architect as an agent of miscegenation, who transforms experienced models and introduces them to different realities. The influence of foreign models and their adaptation to the context has been a recurrent theme in Portuguese architecture, one that finds important contributions in the writings of Alexandre Alves Costa of this period. However, the identification of these characteristics in Siza's work is not limited to Portuguese theoretical production: it is the recognition of this attitude towards the context that leads Kenneth Frampton to include Siza in the restricted group of architects who embody Critical Regionalism (in his book Modern Architecture: A Critical History). For Frampton, his work focuses on the territory and on the consequences of intervention in the context, viewing architecture as a tectonic fact rather than a series of scenographic episodes and emphasizing site-specific aspects (topography, light, climate). Therefore, the motto of this paper is the dichotomous opposition between foreign influences and adaptation to the context in the early work of Álvaro Siza (designed in the sixties), in which the influence (theoretical, methodological, and formal) of Alvar Aalto manifests itself in form and language: the pool at Quinta da Conceição, the Seaside Pools and the Tea House (three works in Leça da Palmeira), and the Lordelo Cooperative (in Porto). This work is part of a more comprehensive project which considers several case studies throughout the Portuguese architect's vast career, built in Portugal and abroad, in order to obtain a holistic view.
Keywords: Alvar Aalto, Álvaro Siza, foreign influences, adaptation to the context
Procedia PDF Downloads 42
19470 Driver Behavior Analysis and Inter-Vehicular Collision Simulation Approach
Authors: Lu Zhao, Nadir Farhi, Zoi Christoforou, Nadia Haddadou
Abstract:
Safety testing for the deployment of intelligent connected vehicles (ICVs) on the road network is a critical challenge. Road traffic network simulation can be used to test the functionality of ICVs; it is not only time-saving and less energy-consuming but can also create scenarios with car collisions. However, the relationship between different human driver behaviors and the occurrence of car collisions is not yet clearly understood; meanwhile, the procedure for generating car collisions in numerical traffic simulators is not fully integrated. In this paper, we propose an approach to identify specific driver profiles from real driving data and then replicate them in numerical traffic simulations with the purpose of generating inter-vehicular collisions. We propose three profiles: (i) 'aggressive', with short time-headway; (ii) 'inattentive', with long reaction time; and (iii) 'normal', with intermediate values of reaction time and time-headway. These three driver profiles are extracted from the NGSIM dataset and simulated using the intelligent driver model (IDM) with an extension for reaction time. Finally, inter-vehicular collisions are generated by varying the percentages of the different profiles.
Keywords: vehicular collisions, human driving behavior, traffic modeling, car-following models, microscopic traffic simulation
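A rough sketch (not the paper's calibrated setup) of IDM car-following with a reaction-time extension: the follower reacts to the leader's state as it was tau seconds ago. The profile parameters and the braking scenario are assumed values, not the NGSIM-calibrated ones.

```python
import numpy as np

def idm_accel(v, dv, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """IDM: v follower speed, dv = v - v_leader, gap = net spacing [m]."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * np.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

profiles = {                      # time-headway T [s], reaction time tau [s]
    "aggressive":  dict(T=0.8, tau=0.5),
    "normal":      dict(T=1.5, tau=1.0),
    "inattentive": dict(T=1.5, tau=2.0),
}

dt, steps, length = 0.1, 3000, 5.0
for name, p in profiles.items():
    delay = int(p["tau"] / dt)
    x_l, v_l = 35.0, 25.0                     # leader position/speed
    x_f, v_f = 0.0, 25.0                      # follower
    hist = [(x_l, v_l)] * (delay + 1)         # memory of past leader states
    outcome = "no collision"
    for k in range(steps):
        x_l_d, v_l_d = hist[-delay - 1]       # what the driver perceives
        acc = idm_accel(v_f, v_f - v_l_d, x_l_d - x_f - length, T=p["T"])
        if k * dt < 6.0:                      # leader brakes for 6 s
            v_l = max(0.0, v_l - 3.0 * dt)
        x_l += v_l * dt
        v_f = max(0.0, v_f + acc * dt)
        x_f += v_f * dt
        hist.append((x_l, v_l))
        if x_l - x_f - length <= 0.0:         # collision detected
            outcome = f"collision at t = {k * dt:.1f} s"
            break
    print(f"{name:12s}: {outcome}")
```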
Procedia PDF Downloads 175
19469 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable
Authors: Xinyuan Y. Song, Kai Kang
Abstract:
Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.
Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data
Procedia PDF Downloads 148
19468 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors reflecting individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, a vast number of measurement constructs assessing growth are used interchangeably. Differences among growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and asset growth. Secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of how different growth measures affect research results and recommendations for growth. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy for growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact and free of perceptual nuances. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions by policy makers.
Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
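A minimal sketch of the study's three-parallel-models design: the same lagged financial indicators, three binary growth outcomes, one logistic model each. Data are synthetic placeholders for the Croatian SME register, and the indicator names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 2000
X = np.column_stack([
    rng.normal(0.1, 0.3, n),    # return on assets
    rng.normal(1.5, 0.8, n),    # current liquidity ratio
    rng.normal(0.5, 0.2, n),    # debt ratio
])
# Each growth measure is made to depend differently on the same indicators,
# mirroring the finding that predictor sets (and signs) differ by measure.
y = {
    "revenue_growth":    (X @ [2.0, 0.3, -1.0] + rng.logistic(size=n)) > 0,
    "employment_growth": (X @ [0.8, 0.1,  0.5] + rng.logistic(size=n)) > 0,
    "asset_growth":      (X @ [1.2, 0.6, -0.2] + rng.logistic(size=n)) > 0,
}

for measure, target in y.items():
    Xtr, Xte, ytr, yte = train_test_split(X, target, random_state=0)
    m = LogisticRegression().fit(Xtr, ytr)
    # note how the same predictor can flip sign across growth measures
    print(measure, np.round(m.coef_[0], 2), f"acc={m.score(Xte, yte):.2f}")
```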
Procedia PDF Downloads 255
19467 Development of GIS-Based Geotechnical Guidance Maps for Prediction of Soil Bearing Capacity
Authors: Q. Toufeeq, R. Kauser, U. R. Jamil, N. Sohaib
Abstract:
Foundation design of a structure needs soil investigation to avoid failures due to settlement, but such investigation is expensive and time-consuming. The development of new residential societies involves extensive leveling of large sites, accompanied by heavy land filling. Poor landfilling practices at great depths cause differential settlement and consolidation of the underlying soil, which sometimes result in the collapse of structures. The extent of filling remains unknown to the individual developer unless a soil investigation is carried out, and soil investigation cannot be performed on every available site because of the costs involved. However, a fair estimate of bearing capacity can be made if such tests have already been done in the surrounding areas. Geotechnical guidance maps can provide a fair assessment of soil properties. Previously, GIS-based approaches have used extrapolation and interpolation techniques to develop maps for bearing capacities, underground recharge, soil classification, geological hazards, landslide hazards, socio-economic factors, and soil liquefaction. Standard penetration test (SPT) data from surrounding sites were already available. Google Earth was used for digitization of the collected data, and a few points were reserved for data calibration and validation. The resulting geographic information system (GIS)-based guidance maps are helpful for anticipating bearing capacity in the real estate industry.
Keywords: bearing capacity, soil classification, geographical information system, inverse distance weighted, radial basis function
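A minimal sketch of inverse-distance-weighted (IDW) interpolation, one of the techniques named in the keywords, estimating bearing capacity at an unsampled site from nearby SPT-derived values. The borehole coordinates and capacities are made up for illustration.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Weight each known point by 1/d^p; return the estimate at xy_query."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if d.min() < eps:                      # query coincides with a borehole
        return float(values[d.argmin()])
    w = 1.0 / d ** power
    return float(w @ values / w.sum())

# digitized borehole locations (x, y in metres) and allowable bearing
# capacities from SPT correlations (kPa) - hypothetical values
boreholes = np.array([[0, 0], [120, 40], [60, 180], [200, 150]], dtype=float)
q_allow = np.array([150.0, 220.0, 180.0, 90.0])

site = np.array([100.0, 100.0])
print(f"IDW estimate at {site}: {idw(boreholes, q_allow, site):.0f} kPa")
```

Evaluating this function over a regular grid of coordinates yields the raster that a GIS layer for the guidance map would display.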
Procedia PDF Downloads 140
19466 Modern Sports and Imperial Solidarity: Sports, Mutiny and British Army in Colonial Malabar (1900-1930)
Authors: Anas Ali
Abstract:
The British administration at Malabar, the southern coastal commercial outpost of the Indian Subcontinent, faced a series of perpetual revolts from the Mappila Muslim peasants during the last decades of the 19th century and the early decades of the 20th century. Control of the Malabar region was a concern for British administrators, as the region was a prime centre of the trade in spices and plantation products. The Madras government set up a special police battalion called the Malabar Special Police in 1884 and summoned different army battalions to Malabar to crush the revolts. The setting up of army camps in rural Malabar led to the diffusion of modern sports, as the army men played different games in the garrisons and with the local people. For the imperial army men deployed in Malabar, sports acted as a viable medium to strengthen solidarity with other European settlers. They actively participated in the 'Canterbury Week', an annual sporting event organized by the European planters, and organized tournaments among themselves. This paper argues that sports enabled the imperial army men, European planters and British administrators to build a camaraderie through which they manifested their imperial solidarity during the time of these constant revolts. Based on newspaper reports and colonial memoirs, this paper looks at how modern sports enabled the imperial army men to stay in good health and created a feeling of 'being at home' during this period.
Keywords: imperial army, Malabar, modern sports, mutiny
Procedia PDF Downloads 216
19465 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception
Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut
Abstract:
What accounts for our perception that time inexorably passes in one direction, from the past to the future (the so-called arrow of time), given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time's unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), per the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate from the natural evolution of entropy from low to high, with low entropy defining our notion of 'before' and high entropy defining our notion of 'after'. Here we explored this proposed relationship between entropy and the perception of time's arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations of the expectation that low entropy defines 'before' and high entropy defines 'after' would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly presented (1000 ms or 500 ms) computer-generated dynamic visual events: novel 3D shapes that were seen either to evolve from whole figures into parts (low-to-high entropy condition) or in the reverse direction, parts that coalesced into whole figures (high-to-low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high-to-low entropy condition than for the low-to-high entropy condition (p = .03). These preliminary data suggest the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time, or time's arrow.
Keywords: time perception, entropy, temporal illusions, duration perception
Procedia PDF Downloads 176
19464 Valorization of Sugarcane Bagasse: The Effect of Alkali Concentration, Soaking Time and Temperature on Fibre Yield
Authors: Tamrat Tesfaye, Tilahun Seyoum, K. Shabaridharan
Abstract:
The objective of this paper was to determine the effect of NaOH concentration, soaking time, soaking temperature and their interactions on the percentage yield of extracted fibre using Response Surface Methodology (RSM). A Box-Behnken design was employed to optimize the extraction of cellulosic fibre from the sugarcane by-product bagasse using a low-alkaline extraction technique. The quadratic model with the optimal technological conditions resulted in a maximum fibre yield of 56.80% at 0.55 N NaOH concentration, 4 h soaking time and 60°C soaking temperature. Among the independent variables, concentration was found to be the most significant (P < 0.005), and the interaction effect of concentration and soaking time led to the optimized process.
Keywords: sugarcane bagasse, low alkaline, Box-Behnken, fibre
Procedia PDF Downloads 253
19463 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, in order to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of heart infarction from patients' medical records and to develop prediction models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of a patient with a specific disease. Health interventions or lifestyle changes can therefore be based on these models to improve the health conditions of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
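A minimal sketch of the model comparison: train a neural network and a decision tree on the same risk factors and compare sensitivity, specificity, and predictive values. The 350-record hospital data are replaced by a synthetic stand-in with the five binary predictors named in the results; this is scikit-learn, not the Clementine workflow the study used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(11)
n = 350
X = rng.integers(0, 2, size=(n, 5))  # hypertension, DLP, smoking, diabetes, A+
logit = X @ np.array([1.2, 0.8, 1.0, 0.9, 0.3]) - 1.5
y = (logit + rng.logistic(size=n) > 0).astype(int)   # myocardial infarction

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
models = {"neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
          "decision tree": DecisionTreeClassifier(max_depth=4)}

for name, m in models.items():
    tn, fp, fn, tp = confusion_matrix(yte, m.fit(Xtr, ytr).predict(Xte)).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f} "
          f"PPV={tp/(tp+fp):.2f} NPV={tn/(tn+fn):.2f}")
```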
Procedia PDF Downloads 433
19462 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
Authors: Chukiat Chaiboonsri, Satawat Wannapan
Abstract:
This paper applied an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables statistically analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, tourism leakages were investigated with a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information
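A minimal sketch of the simple injections-leakages multiplier the paper uses to study tourism leakages; the propensities below are illustrative assumptions, not the estimated Thai values.

```python
# Keynesian open-economy multiplier: k = 1 / (1 - MPC(1 - t) + MPM),
# where taxes (t) and imports (MPM) are the leakages from each round
# of spending generated by a tourism injection.
def tourism_multiplier(mpc: float, mpm: float, tax_rate: float) -> float:
    return 1.0 / (1.0 - mpc * (1.0 - tax_rate) + mpm)

k = tourism_multiplier(mpc=0.7, mpm=0.25, tax_rate=0.15)
extra_tourism_revenue = 1_000.0   # hypothetical injection, million baht
print(f"multiplier k = {k:.2f}; "
      f"income impact = {k * extra_tourism_revenue:,.0f} million baht")
```

The larger the leakages (imports, taxes), the smaller k, which is why leakage estimation matters when judging how strongly tourism revenue stimulates the wider economy.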
Procedia PDF Downloads 296