Search results for: post model selection
15381 The Use of Rotigotine to Improve Hemispatial Neglect in Stroke Patients at the Charing Cross Neurorehabilitation Unit
Authors: Malab Sana Balouch, Meenakshi Nayar
Abstract:
Hemispatial neglect is a common disorder primarily associated with right hemispheric stroke, occurring in up to 82% of patients in the acute phase. Affected individuals fail to acknowledge or respond to people and objects in their left field of vision due to deficits in attention and awareness. Persistent hemispatial neglect significantly impedes post-stroke recovery, leading to longer hospital stays, increased functional dependency, longer-term disability in activities of daily living (ADLs) and an increased risk of falls. Recently, evidence has emerged for the use of the dopamine agonist rotigotine in neglect. The aim of our Quality Improvement Project (QIP) was to evaluate and improve the current protocols and practice in the assessment, documentation and management of neglect and rotigotine use at the Neurorehabilitation Unit at Charing Cross Hospital (CNRU). In addition, it sheds light on rotigotine use in the management of hemispatial neglect and paves the way for future research in the field. Our QIP was based in the CNRU. All patients admitted to the CNRU with a right-sided stroke from 2 February 2018 to 2 February 2021 were included in the project. Each patient’s multidisciplinary team report and hospital notes were searched for information, including bio-data, fulfilment of the inclusion criteria (having hemispatial neglect) and data related to rotigotine use. This included whether or not the drug was administered, any contraindications in patients who did not receive it, and any therapeutic benefits (subjective or objective improvement in neglect) in those who did receive rotigotine. Data were entered into an Excel sheet, and statistical analysis was performed in SPSS 20.0. Of the 80 patients with right-sided strokes, 72.5% were infarcts and 27.5% were haemorrhagic strokes, with the vast majority of both types occurring in the middle cerebral artery (MCA) territory.
A total of 31 (38.8%) of our patients were noted to have hemispatial neglect, with the highest number of cases associated with MCA strokes; almost half of our patients with MCA strokes suffered from neglect. Neglect was more common in male patients. Of the 31 patients with visuospatial neglect, only 16% actually received rotigotine; 80% of those treated were noted to have an objective improvement on their neglect tests, and 20% showed subjective improvement. After a thorough review of neglect-associated documentation, the following recommendations and plans were put in place. We plan to liaise with the occupational therapy team at our rehabilitation unit to set a battery of tests to be performed on all patients presenting with neglect, and we recommend clear documentation of the outcome of each neglect screen. We also plan to create two proformas: one for the therapy team, to aid systematic documentation of neglect screens performed before and after rotigotine administration, and a second for the medical team, to clearly document rotigotine use, its benefits and any contraindications if it was not administered.
Keywords: hemispatial neglect, right hemispheric stroke, rotigotine, neglect, dopamine agonist
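The headline figures above are simple proportions. As a minimal sketch, the arithmetic can be reproduced as follows; note that the raw patient counts below are inferred from the abstract's reported percentages and are assumptions, not data taken from the study:

```python
# Reproduce the reported proportions from (assumed) raw counts.
# Counts are inferred from the abstract's percentages, not taken from the paper.
total_strokes = 80
neglect_cases = 31          # "31 (38.8%) ... hemispatial neglect"
received_rotigotine = 5     # assumed: ~16% of 31
objective_improvement = 4   # assumed: 80% of those treated
subjective_improvement = 1  # assumed: 20% of those treated

def pct(part, whole):
    """Percentage of `part` within `whole`."""
    return 100.0 * part / whole

print(f"neglect prevalence: {pct(neglect_cases, total_strokes):.1f}%")
print(f"treated with rotigotine: {pct(received_rotigotine, neglect_cases):.1f}%")
print(f"objective improvement: {pct(objective_improvement, received_rotigotine):.0f}%")
```

With these assumed counts, 31/80 gives the 38.8% prevalence and 5/31 gives roughly the 16% treatment rate quoted above.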
Procedia PDF Downloads 79
15380 Study on the Protection and Transformation of Stone House Building in Shitang Town, Wenling, Zhejiang
Authors: Zhang Jiafeng
Abstract:
Stone houses, represented by those of Shitang town in Wenling, Taizhou city, are very precious cultural relics in Zhejiang province and in the country as a whole. These coastal residences of eastern Zhejiang, with their distinctive regional characteristics, are completely different from the traditional residential styles of inland Zhejiang. However, with the growing conflict between the use of traditional stone houses and the modern lifestyle, and the lack of effective protection, stone houses are disappearing in large numbers. It is therefore very important to protect and pass on the stone house building tradition and to develop effective and feasible strategies for doing so. This paper analyzes the formation background, siting, plane layout, architectural form, spatial organization, material application, and construction technology of the stone houses through literature research and field investigation. In addition, a series of feasibility studies are carried out on the protection and renovation of stone houses. The ultimate purpose is to attract people's attention and provide a reference for the protection, inheritance, development, and utilization of traditional houses in coastal areas.
Keywords: regional, stone house building, traditional houses, Wenling Shitang
Procedia PDF Downloads 151
15379 The Purification of Waste Printing Developer with the Fixed Bed Adsorption Column
Authors: Kiurski S. Jelena, Ranogajec G. Jonjaua, Kecić S. Vesna, Oros B. Ivana
Abstract:
The present study investigates the effectiveness of newly designed clay pellets (fired clay pellets with diameters of 5 and 8 mm, and unfired clay pellets with a diameter of 15 mm) as beds in a column adsorption process. Batch-mode adsorption experiments were performed before the column experiment in order to determine the order of adsorbent packing in the column to be designed. The column experiment used a known mass of the clay beds and a known volume of the waste printing developer to be purified. The column was filled in the following order: fired clay pellets of 5 mm diameter, fired clay pellets of 8 mm diameter, and unfired clay pellets of 15 mm diameter. This packing order showed high removal efficiencies for zinc (97.8%) and copper (81.5%) ions, better than those achieved with the previously existing adsorption mode. The experimental data provide a good basis for selecting an appropriate column packing, but further testing is necessary to obtain more accurate results.
Keywords: clay materials, fixed bed adsorption column, metal ions, printing developer
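Removal efficiency in such column experiments is conventionally computed from the initial and effluent ion concentrations. A minimal sketch follows; the concentration values are illustrative assumptions chosen to reproduce the reported efficiencies, not measurements from the study:

```python
def removal_efficiency(c_initial, c_final):
    """Percent of the ion removed from solution, from initial and final concentrations."""
    return 100.0 * (c_initial - c_final) / c_initial

# Illustrative concentrations in mg/L (assumed, chosen to match the reported values).
zn = removal_efficiency(100.0, 2.2)    # ~97.8% removal
cu = removal_efficiency(100.0, 18.5)   # ~81.5% removal
print(f"Zn removal: {zn:.1f}%  Cu removal: {cu:.1f}%")
```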
Procedia PDF Downloads 327
15378 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise
Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim
Abstract:
Previous works on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g. active-passive, individual-group). To overcome these problems, this study proposed a matrix model of Korean leisure activities that considers their multidimensional features, comprising 3 main factors and 6 sub-factors: (a) Active (physical, mental), (b) Relational (quantity, quality), (c) Expert (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated these dimensions for a total of 78 leisure activities recently enjoyed by most Koreans (e.g. watching movies, taking a walk, using media). The sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to adults in their seventies. The study gathered 60 responses for each leisure activity, a total of 4,680 data points, which were used for statistical analysis. First, the fit of a 3-factor model (Activity, Relation, Expertise) was compared with that of a 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model fit the data better, indicating that a sufficient number of dimensions (six in our study) must be taken into account to apprehend each activity's attributes specifically. In addition, the 78 leisure activities were cluster-analyzed using scores calculated from the 6-factor model, which yielded 8 leisure activity groups. Cluster 1 (e.g. group sports, group musical activity) and Cluster 5 (e.g. individual sports) generally scored higher on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g. SNS, shopping) and Cluster 6 (e.g. playing a lottery, taking a nap) scored low on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g. machine operating, handwork/invention) required high expertise and mental activity but low physical activity. Cluster 4 showed high mental activity and relational quantity despite low expertise. Cluster 7 (e.g. touring, joining festivals) required moderate degrees of physical activity and relation but low expertise. Lastly, Cluster 8 (e.g. meditation, information searching) was characterized by high mental activity. Although our clusters showed a few similarities with pre-existing taxonomies of leisure activities, they were clearly distinct from them. Unlike pre-existing taxonomies created subjectively, we sorted the 78 leisure activities based on objective scores on 6 dimensions. We also found that some leisure activities that used to belong to the same leisure group fell into different clusters (e.g. field ball sports vs. net sports) because of their different features. In other words, the results provide a different perspective for leisure activities research and help clarify the varied characteristics of leisure participants.
Keywords: leisure, dimensional model, activity, relationship, expertise
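The cluster-analysis step above can be illustrated with a minimal k-means sketch in pure Python. The activity names and 6-dimension scores below are hypothetical stand-ins, and k=2 is used instead of the study's 8 clusters:

```python
# Minimal k-means on hypothetical 6-dimension leisure scores
# (physical, mental, rel. quantity, rel. quality, entry barrier, improvement).
activities = {
    "group sports":      [5, 4, 5, 5, 4, 4],
    "individual sports": [5, 4, 2, 3, 4, 4],
    "SNS":               [1, 2, 3, 3, 1, 2],
    "taking a nap":      [1, 1, 1, 1, 1, 1],
}

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iters=10):
    """Lloyd's algorithm; returns the index of the nearest centroid per point."""
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(len(centroids)), key=lambda c: sq_dist(p, centroids[c]))
                  for p in points]
        for c in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

names = list(activities)
points = [activities[n] for n in names]
labels = kmeans(points, centroids=[points[0][:], points[3][:]])
clusters = dict(zip(names, labels))
print(clusters)
```

With these toy scores, the two sport activities land in one cluster and the two low-engagement activities in the other, mirroring the high/low split described above.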
Procedia PDF Downloads 313
15377 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance diagnostic accuracy, artificial intelligence (AI) algorithms, particularly deep learning models such as Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be used to create new data, supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal chest X-ray (CXR) images, this study introduces a technique called DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic CXR images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model using DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
Keywords: CNN, classification, deep learning, GAN, ResNet50
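The reported performance metrics are standard functions of a binary confusion matrix. A self-contained sketch follows; the confusion-matrix counts are illustrative and are not the study's DCWGAN results:

```python
# Accuracy, precision, recall and F1 from a toy binary confusion matrix.
# Counts are illustrative; they are NOT the study's results.
tp, fp, fn, tn = 97, 5, 3, 95  # abnormal CXR treated as the positive class

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} F1={f1:.3f}")
```

Applying the same F1 formula to the study's reported precision (0.955) and recall (0.970) gives roughly 0.962, consistent with the reported 0.963 once rounding of the inputs is taken into account.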
Procedia PDF Downloads 93
15376 Physico-Chemical Characterization of an Algerian Biomass: Application in the Adsorption of an Organic Pollutant
Authors: Djelloul Addad, Fatiha Belkhadem Mokhtari
Abstract:
The objective of this work is to study the retention of methylene blue (MB) by a biomass. The biomass was characterized by X-ray diffraction (XRD) and Fourier-transform infrared spectroscopy (FTIR). The results show that the biomass contains organic and mineral substances. The effect of certain physicochemical parameters on the adsorption of MB was studied, including the pH. The study shows that an increase in the initial concentration of MB leads to an increase in the adsorbed quantity, while the adsorption efficiency of MB decreases with increasing biomass mass. The adsorption kinetics show that adsorption is rapid, with the maximum amount reached after 120 min of contact time; the pH was found to have no great influence on the adsorption. The isotherms are best modelled by the Langmuir model, and the adsorption kinetics follow the pseudo-second-order model. The thermodynamic study shows that the adsorption is spontaneous and exothermic.
Keywords: dyes, adsorption, biomass, methylene blue, Langmuir
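Fitting the Langmuir isotherm is commonly done through its linearized form Ce/qe = Ce/qm + 1/(qm·KL), whose slope and intercept yield qm and KL. A minimal pure-Python sketch on synthetic data follows; the qm and KL values are invented for illustration, not fitted to the study's measurements:

```python
# Fit the linearized Langmuir isotherm Ce/qe = Ce/qm + 1/(qm*KL)
# to synthetic equilibrium data generated with assumed qm=10 mg/g, KL=0.5 L/mg.
qm_true, kl_true = 10.0, 0.5
ce = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]                         # equilibrium conc. (mg/L)
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]  # adsorbed amount (mg/g)

# Ordinary least squares on (Ce, Ce/qe): slope = 1/qm, intercept = 1/(qm*KL).
x, y = ce, [c / q for c, q in zip(ce, qe)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

qm_fit = 1 / slope            # mg/g
kl_fit = slope / intercept    # L/mg
print(f"qm = {qm_fit:.3f} mg/g, KL = {kl_fit:.3f} L/mg")
```

Because the synthetic data are noise-free, the fit recovers the assumed parameters exactly; real isotherm data would show scatter around the fitted line.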
Procedia PDF Downloads 74
15375 Structural Analysis and Modelling in an Evolving Iron Ore Operation
Authors: Sameh Shahin, Nannang Arrys
Abstract:
Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With a growing demand for minerals and increasing extraction costs, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Utilizing innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection to a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without consideration of this complexity and the capability to capture these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in newly identified dip-slope and rock-mass controlled sectors of the geotechnical design, rather than a ubiquitous dip-slope sector across the pit. The reward is two-fold: 1) providing sectors of rock-mass controlled design in previously interpreted structurally controlled domains, and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio. A further outcome is a high-confidence model with structures and geometries that can account for historic slope instabilities in structurally controlled domains where design assumptions failed.
Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation
Procedia PDF Downloads 52
15374 Body of Dialectics: Exploring a Dynamic-Adaptational Model of Physical Self-Integrity and the Pursuit of Happiness in a Hostile World
Authors: Noam Markovitz
Abstract:
People with physical disabilities constitute a very large and at the same time diverse group of the general population, as the term physical disabilities is extensive and covers a wide range of conditions. Individuals with physical disabilities are often faced with a new, threatening and stressful reality, possibly leading to a multi-level crisis in their lives due to the great changes they experience at the somatic, socio-economic, occupational and psychological levels. The current study seeks to advance understanding of the complex adaptation to physical disabilities by expanding the dynamic-adaptational model of the pursuit of happiness in a hostile world with a new conception of physical self-integrity. Physical self-integrity incorporates an objective dimension, physical self-functioning (PSF), and a subjective dimension, physical self-concept (PSC). Both dimensions constitute an experience of wholeness in the individual's identification with his or her physical body. The model guiding this work is dialectical in nature and depicts two systems in the individual's sense of happiness: subjective well-being (SWB) and meaning in life (MIL). Both systems serve as self-adaptive agents that moderate the complementary system of the hostile-world scenario (HWS), which integrates one's perceived threats to one's integrity. Thus, in situations of increased HWS, the moderation may take the form of joint activity, in which SWB and MIL are amplified, or a form of compensation, in which one system produces a stronger effect while the other produces a weaker one. The current study investigated PSC in relation to SWB and MIL through pleasantness and meanings that are physically or metaphorically grounded in one's body. In parallel, PSC also relates to HWS by activating representations of inappropriateness, deformation and vulnerability. In view of the possibly dialectical positions of opposing and complementary forces within the model, the current field study explored PSC in an independent, cross-sectional design addressing the model's variables in a focal group of people with physical disabilities. The study delineated the participation of PSC in the adaptational functions of SWB and MIL vis-à-vis HWS-related life adversities. The findings showed that PSC could fully complement the main variables of the pursuit of happiness in a hostile world model. The assumed dialectic, in the form of a stronger relationship between SWB and MIL in the face of physical disabilities, was not supported. However, it was found that when HWS increased, PSC and MIL were strongly linked, whereas PSC and SWB were weakly linked, highlighting the compensatory role of MIL. From a conceptual viewpoint, the current investigation may clarify the role of PSC as an adaptational agent of the individual's positive health in complementary senses of bodily wholeness. Methodologically, its advantage is the application of an integrative, model-based approach within a specially focused design with particular relevance to PSC. Moreover, from an applied viewpoint, the investigation may suggest how an innovative model can be translated into therapeutic interventions used by clinicians, counselors and practitioners to improve wellness and psychological well-being, particularly among people with physical disabilities.
Keywords: older adults, physical disabilities, physical self-concept, pursuit of happiness in a hostile-world
Procedia PDF Downloads 155
15373 Effect of Climate Change on Runoff in the Upper Mun River Basin, Thailand
Authors: Preeyaphorn Kosa, Thanutch Sukwimolseree
Abstract:
Climate change is a main parameter affecting the elements of the hydrological cycle, especially runoff. The purpose of this study is therefore to determine the impact of climate change on surface runoff, using a 2008 land use map and daily weather data from January 1, 1979 to September 30, 2010 as inputs to the SWAT model. SWAT is a continuous-time model that operates on a daily time step at the basin scale. The results show that the effect of temperature change on runoff is not clearly discernible, while rainfall, relative humidity and evaporation are the parameters to consider for runoff change. Increases in rainfall and relative humidity are accompanied by an increase in runoff; conversely, an increase in evaporation is accompanied by a decrease in runoff.
Keywords: climate, runoff, SWAT, upper Mun River basin
Procedia PDF Downloads 400
15372 Social Business Model: Leveraging Business and Social Value of Social Enterprises
Authors: Miriam Borchardt, Agata M. Ritter, Macaliston G. da Silva, Mauricio N. de Carvalho, Giancarlo M. Pereira
Abstract:
This paper aims to analyze the barriers faced by social enterprises and, based on that analysis, to propose a social business model framework that helps them leverage their businesses and the social value they deliver. A business model for social enterprises should amplify perceived value, including social value for the beneficiaries, while generating enough profit to scale the business. Most of the beneficiaries of this social value are people at the base of the economic pyramid (BOP) or people with specific needs. Because of this, products and services should be affordable to consumers while solving the social needs of the beneficiaries. Developing products and services with social value requires close relationships among the social enterprises and universities, public institutions, accelerators, and investors. Despite being focused on social value and contributing to the beneficiaries' quality of life, and to governments that cannot properly guarantee public services and infrastructure to the BOP, social enterprises face many barriers to scaling their businesses. This is a work in progress; five micro- and small-sized social enterprises in Brazil have been studied: (i) one has developed a kit for cervical cancer detection that allows BOP women to collect their own sample and deliver it to a laboratory for U$1.00; (ii) another has developed lactose-free products that are about 70% cheaper than the traditional brands in the market; (iii) the third has developed prostheses and orthoses to meet needs that the public health system has not served efficiently; (iv) the fourth produces and commercializes menstrual panties, aiming to reduce the consumption of disposable ones while saving consumers money; (v) the fifth develops and commercializes clothes made from fabric waste in partnership with BOP artisans.
The preliminary results indicate that the main barriers are: the public system's failure to recognize the money that could be saved if it bought products from these enterprises instead of from multinational pharmaceutical companies; the traditional distribution system (e.g. pharmacies), which avoids these products because of low or non-existent profit; the difficulty of buying raw material in small quantities; the difficulty of attracting investment; and cultural barriers and taboos. Interesting cost-reduction strategies were observed: some enterprises have focused on simplifying products, while others have invested in partnerships with local producers and have developed their own machines to improve process efficiency.
Keywords: base of the pyramid, business model, social business, social business model, social enterprises
Procedia PDF Downloads 105
15371 Neurocognitive and Executive Function in Cocaine Addicted Females
Authors: Gwendolyn Royal-Smith
Abstract:
Cocaine ranks as one of the world's most addictive and commonly abused stimulant drugs. Recent evidence indicates that cocaine abuse has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive function in cocaine-addicted females, increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse compared with males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective was to assess whether individual performance on the Stroop Word-Color Task predicts treatment success by gender. A second objective was to evaluate whether performance on a battery of neurocognitive measures, including the Stroop Word-Color Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Test (WCST), the total score of the Barratt Impulsiveness Scale (Version 11) (BIS-11) and the total score of the Frontal Systems Behavior Scale (FrSBe), demonstrated gender differences in neurocognitive and executive function. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Word-Color Task indicated significant differences between male and female performance, with females experiencing more difficulty in derived interference reaction time and associate recall ability.
In early testing, including the RAVLT, the number of advantageous vs. disadvantageous cards in the Iowa Gambling Task, the number of perseverative errors on the WCST, the BIS-11 total score and the FrSBe total score, results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.
Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function
Procedia PDF Downloads 404
15370 Impact of Urbanization on the Performance of Higher Education Institutions
Authors: Chandan Jha, Amit Sachan, Arnab Adhikari, Sayantan Kundu
Abstract:
The purpose of this study is to evaluate the performance of Higher Education Institutions (HEIs) in India and to examine the impact of urbanization on their performance. Data Envelopment Analysis (DEA) was used, and the required data on performance measures were collected from the National Institutional Ranking Framework web portal. The performance of HEIs was evaluated using two different DEA models. In the first model, the geographic locations of the institutes were divided into two categories, urban vs. non-urban; in the second model, they were divided into three categories: urban, semi-urban, and non-urban. The findings provide several insights into the relationship between the degree of urbanization and the performance of HEIs.
Keywords: DEA, higher education, performance evaluation, urbanization
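DEA solves a linear program per decision-making unit; in the special case of a single input and a single output, the CCR efficiency score reduces to each unit's output/input ratio benchmarked against the best ratio in the sample. A hedged sketch of that reduced case follows; the institution names and figures are fabricated for illustration, and a real DEA study (like the one above) uses multiple inputs and outputs:

```python
# Single-input/single-output CCR-style efficiency: (y/x) / max(y/x).
# Institutions and figures are fabricated; full DEA solves one linear
# program per decision-making unit over multiple inputs and outputs.
heis = {
    "HEI-A": {"faculty": 100, "publications": 300},
    "HEI-B": {"faculty": 150, "publications": 600},
    "HEI-C": {"faculty": 80,  "publications": 160},
}

ratios = {name: d["publications"] / d["faculty"] for name, d in heis.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}
print(efficiency)  # the best-ratio unit scores 1.0; others score relative to it
```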
Procedia PDF Downloads 219
15369 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used to build clusters of documents for more than a decade, yet problems remain in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. Analyzing previous works, the authors found that most models used to determine the number of topics are non-parametric, with topic quality assessed using perplexity and coherence measures, and concluded that these are not adequate for solving the problem. In this paper, we use a parametric method, an extension of the traditional topic model with visual access tendency, to visualize the number of topics (clusters), to complement clustering, and to choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets covering various topics, both for obtaining the optimal number of topics and for measuring the quality of the resulting clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring cluster quality with validity indices.
Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
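The NMF at the core of such models factorizes a nonnegative document-term matrix V ≈ W·H, where the rank k plays the role of the number of topics. A minimal pure-Python sketch using Lee-Seung multiplicative updates follows; the 4-document, 3-term matrix and the fixed initialization are invented toy values, not the paper's Twitter data:

```python
# Minimal NMF (V ~= W H) with multiplicative updates, pure Python.
# V is a toy 4x3 nonnegative "document-term" matrix; rank k=2 (two "topics").
EPS = 1e-9

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def frob_err(v, w, h):
    wh = matmul(w, h)
    return sum((v[i][j] - wh[i][j]) ** 2
               for i in range(len(v)) for j in range(len(v[0])))

V = [[5, 3, 0], [4, 0, 0], [1, 1, 5], [0, 1, 4]]
W = [[0.5, 0.5], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]  # fixed init, no randomness
H = [[1.0, 0.5, 0.2], [0.3, 0.4, 1.0]]

err0 = frob_err(V, W, H)
for _ in range(200):
    # H <- H * (W^T V) / (W^T W H)
    num, den = matmul(transpose(W), V), matmul(matmul(transpose(W), W), H)
    H = [[H[i][j] * num[i][j] / (den[i][j] + EPS) for j in range(len(H[0]))]
         for i in range(len(H))]
    # W <- W * (V H^T) / (W H H^T)
    num, den = matmul(V, transpose(H)), matmul(W, matmul(H, transpose(H)))
    W = [[W[i][j] * num[i][j] / (den[i][j] + EPS) for j in range(len(W[0]))]
         for i in range(len(W))]

print(f"reconstruction error: {err0:.2f} -> {frob_err(V, W, H):.2f}")
```

The multiplicative updates keep W and H nonnegative and drive the Frobenius reconstruction error down; choosing the k that balances error against cluster validity indices is the model-selection step the paper addresses.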
Procedia PDF Downloads 138
15368 CT Doses Pre and Post SAFIRE: Sinogram Affirmed Iterative Reconstruction
Authors: N. Noroozian, M. Halim, B. Holloway
Abstract:
Computed tomography (CT) has become the largest source of radiation exposure in modern countries; however, recent technological advances have created new methods to reduce dose without negatively affecting image quality. SAFIRE is a software package that uses full raw data projections for iterative reconstruction, thereby allowing a lower CT dose to be used. This audit compared CT doses for selected examinations before and after the introduction of SAFIRE in our radiology department. CT doses were significantly lower with SAFIRE (at the SAFIRE 3 setting) than with pre-SAFIRE software for the following studies: CSKUH unenhanced brain scans (-20.9%), CABPEC abdomen and pelvis with contrast (-21.5%), CCHAPC chest with contrast (-24.4%), CCHAPC abdomen and pelvis with contrast (-16.1%), and CCHAPC total chest, abdomen and pelvis (-18.7%).
Keywords: dose reduction, iterative reconstruction, low dose CT techniques, SAFIRE
Procedia PDF Downloads 289
15367 Facebook Spam and Spam Filter Using Artificial Neural Networks
Authors: A. Fahim, Mutahira N. Naseem
Abstract:
Spam is any unwanted electronic message or material, in any form, posted to many people. As the world becomes increasingly globalized, social networking sites play an important role by providing people from different parts of the world a platform to meet and express their views. Among the different social networking sites, Facebook has become the leading one. With increasing usage, some users begin to abuse Facebook by posting spam or creating ways to post it. This paper highlights the types of spam Facebook users face today and explains how users fall victim to spam attacks. Finally, a methodology is proposed for handling the different types of spam.
Keywords: artificial neural networks, Facebook spam, social networking sites, spam filter
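The simplest neural building block for such a filter is a single perceptron over bag-of-words counts. A toy sketch follows; the vocabulary and training messages are invented, and a real ANN-based filter like the one proposed above would use a multi-layer network trained on far more data:

```python
# Toy single-neuron (perceptron) spam filter over bag-of-words counts.
# Vocabulary and messages are invented for illustration only.
VOCAB = ["free", "win", "click", "meeting", "project", "lunch"]

def featurize(text):
    """Count occurrences of each vocabulary word in the message."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

train = [
    ("win free prize click", 1),      # spam
    ("free free click now", 1),       # spam
    ("project meeting at lunch", 0),  # ham
    ("meeting about project", 0),     # ham
]

w = [0.0] * len(VOCAB)
b = 0.0
for _ in range(10):  # perceptron learning rule, fixed number of epochs
    for text, label in train:
        x = featurize(text)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != label:
            for i in range(len(w)):
                w[i] += (label - pred) * x[i]
            b += (label - pred)

def classify(text):
    x = featurize(text)
    return "spam" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "ham"

print(classify("free click to win"), classify("team meeting for project"))
```

Because the toy data are linearly separable, the perceptron converges in a couple of epochs; words like "free" and "click" end up with positive weights and meeting-related words with negative ones.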
Procedia PDF Downloads 378
15366 Vulnerability Assessment of Healthcare Interdependent Critical Infrastructure Coloured Petri Net Model
Authors: N. Nivedita, S. Durbha
Abstract:
Critical Infrastructure (CI) consists of services and technological networks such as healthcare, transport, water supply, electricity supply, information technology etc. These systems are necessary for the well-being and to maintain effective functioning of society. Critical Infrastructures can be represented as nodes in a network where they are connected through a set of links depicting the logical relationship among them; these nodes are interdependent on each other and interact with each at other at various levels, such that the state of each infrastructure influences or is correlated to the state of another. Disruption in the service of one infrastructure nodes of the network during a disaster would lead to cascading and escalating disruptions across other infrastructures nodes in the network. The operation of Healthcare Infrastructure is one such Critical Infrastructure that depends upon a complex interdependent network of other Critical Infrastructure, and during disasters it is very vital for the Healthcare Infrastructure to be protected, accessible and prepared for a mass casualty. To reduce the consequences of a disaster on the Critical Infrastructure and to ensure a resilient Critical Health Infrastructure network, knowledge, understanding, modeling, and analyzing the inter-dependencies between the infrastructures is required. The paper would present inter-dependencies related to Healthcare Critical Infrastructure based on Hierarchical Coloured Petri Nets modeling approach, given a flood scenario as the disaster which would disrupt the infrastructure nodes. The model properties are being analyzed for the various state changes which occur when there is a disruption or damage to any of the Critical Infrastructure. The failure probabilities for the failure risk of interconnected systems are calculated by deriving a reachability graph, which is later mapped to a Markov chain. 
By analytically solving and analyzing the Markov chain, the overall vulnerability of the Healthcare CI HCPN model is demonstrated. The entire model is to be integrated with a geographic-information-based decision support system to visualize the dynamic behavior of the interdependent Healthcare and related CI network in a geographically based environment.
Keywords: critical infrastructure interdependency, hierarchical coloured Petri net, healthcare critical infrastructure, Petri nets, Markov chain
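The final step above — mapping the reachability graph to a Markov chain and solving it for long-run state probabilities — can be sketched in a few lines. The three states and transition probabilities below are invented for illustration only; they are not taken from the paper's HCPN model.

```python
# Hypothetical sketch: solving a small Markov chain (as derived from a Petri-net
# reachability graph) for its stationary distribution by power iteration.

def steady_state(P, iterations=10000):
    """Approximate the stationary distribution of a row-stochastic matrix P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative states: 0 = all services up, 1 = power disrupted, 2 = healthcare degraded
P = [
    [0.90, 0.08, 0.02],
    [0.50, 0.40, 0.10],
    [0.30, 0.10, 0.60],
]

pi = steady_state(P)
print([round(p, 3) for p in pi])
```

The long-run probability of the degraded-healthcare state (the last entry) is one simple vulnerability measure; the paper's analytical solution of the chain serves the same purpose on the real state space.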
Procedia PDF Downloads 531

15365 Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups
Authors: Sarah Mueller-Saegebrecht, André Brendler
Abstract:
Business model innovation (BMI), the intentional change of an existing business model (BM) or the design of a new BM, is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources; however, first studies also show that they react faster to the pandemic than established firms. BMI represents a strategy to successfully handle such threatening dynamic changes. Entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers can be expected during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The research gap addressed by this study is twofold: First, foundational work exists on how different crises impact BM change intention, yet this analysis lacks the inclusion of barriers. In particular, entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential to predict managerial reactions. Moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research has been based on qualitative methodologies, and quantitative work that could specify and confirm these qualitative findings is lacking. By focusing on the crisis context, this study contributes to the BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis?
Two distinct research streams in the economic literature explain how individuals react when perceiving a threat: "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, while the opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior in that situation. This study quantitatively tests which theory best predicts managers' BM reaction to a perceived crisis. Out of three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-up's strategic and operational functioning. Afterward, each interviewee described crises that had already affected their BM and explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived them, assessed on a 5-point Likert scale. In contrast to current research, the results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, and BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.
Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory
Procedia PDF Downloads 99

15364 Petrology of the Post-Collisional Dolerites, Basalts from the Javakheti Highland, South Georgia
Authors: Bezhan Tutberidze
Abstract:
The Neogene-Quaternary volcanic rocks of the Javakheti Highland are products of post-collisional continental magmatism and are related to the divergent and convergent margins of the Eurasian-Afroarabian lithospheric plates. The studied area constitutes an integral part of the volcanic province of Central South Georgia. Three cycles of volcanic activity are identified here: 1. Late Miocene-Early Pliocene, 2. Late Pliocene-Early /Middle/ Pleistocene and 3. Late Pleistocene. Intense basic dolerite magmatic activity occurred within the Late Pliocene and lasted until at least the Late /Middle/ Pleistocene. The age of the volcanogenic and volcanogenic-sedimentary formation was dated by geomorphological, paleomagnetic, paleontological and geochronological methods /1.7-1.9 Ma/. The volcanic area of the Javakheti Highland contains multiple dolerite plateaus: Akhalkalaki, Gomarethi, Dmanisi, and Tsalka. Petrographic observations of these doleritic rocks reveal a fairly constant mineralogical composition: olivine /Fo₈₇.₆₋₈₂.₇/, plagioclase /Ab₂₂.₈ An₇₅.₉ Or₁.₃; Ab₄₅.₀₋₃₂.₃ An₅₂.₉₋₆₂.₃ Or₂.₁₋₅.₄/. The pyroxene is an augite and may exhibit visible zoning /Wo₃₉.₇₋₄₃.₁ En₄₃.₅₋₄₅.₂ Fs₁₆.₈₋₁₁.₇/. Opaque minerals /magnetite, titanomagnetite/ are abundant as inclusions within olivine and pyroxene crystals. The textures of the dolerites range from intergranular and holocrystalline to ophitic and sub-ophitic granular. The dolerites are mostly vesicular rocks; the vesicles range in shape from spherical to elongated and in size from 0.5 mm to 1.5-2 cm, and make up about 20-50 % of the volume. The dolerites have been subjected to considerable alteration. The secondary minerals in the geothermal field, which fill these vesicles, are zeolite, calcite, chlorite, aragonite, clay-like minerals /dominated by smectites/ and iddingsite-like minerals; rare quartz and pumpellyite are present.
Chemically, the dolerites are calc-alkaline, transitional to sub-alkaline, with a predominance of Na₂O over K₂O. Chemical analyses indicate that the dolerites of all plateaus of the Javakheti Highland have similar geochemical compositions, signifying that they were formed from the same magmatic source by crystallization of a weakly differentiated olivine basaltic magma /⁸⁷Sr/⁸⁶Sr 0.703920-0.704195/. There is one argument, which is less convincing, according to which the dolerites/basalts of the Javakheti Highland are considered to be the product of mantle-plume activity; unfortunately, no reliable evidence exists to prove this. The petrochemical peculiarities and eruption character of the dolerites of the Javakheti Plateau argue against a plume origin. Nevertheless, it is not excluded that a plume influenced the formation of the dolerite-producing primary basaltic magma.
Keywords: calc-alkalic, dolerite, Georgia, Javakheti Highland
Procedia PDF Downloads 276

15363 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions
Authors: Daneal Rorke, Gueguim Kana
Abstract:
The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed and gave a coefficient of determination (R²) of 0.76, producing an optimal RS yield of 2.74 g FS/g substrate. The intelligent model developed to predict volatile compound fractions gave R² values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration and S:L ratio, while phenol also showed high sensitivity to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the volatile compound fraction profile of compounds generated during pre-treatment of sorghum leaves in order to establish a more robust and efficient pre-treatment regime for biofuel production.
Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves
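The sensitivity analysis mentioned above can be illustrated with a toy one-at-a-time sketch: perturb each input around a base point and rank inputs by the relative output change. The response function, variable names and base values below are invented stand-ins; in the study, the response would be the trained neural network.

```python
# Toy one-at-a-time sensitivity analysis (all names and numbers hypothetical).

def predicted_furfural(acid, alkali, microwave_min):
    # Stand-in linear response surface, NOT the paper's trained model.
    return 0.8 * acid + 0.1 * alkali + 0.3 * microwave_min

base = {"acid": 1.0, "alkali": 1.0, "microwave_min": 5.0}

def sensitivity(var, delta=0.01):
    """Relative change in output per relative change in input (central difference)."""
    lo, hi = dict(base), dict(base)
    lo[var] -= delta * base[var]
    hi[var] += delta * base[var]
    return abs(predicted_furfural(**hi) - predicted_furfural(**lo)) / (2 * delta * base[var])

ranking = sorted(base, key=sensitivity, reverse=True)
print(ranking)
```

Ranking inputs this way is what lets the study single out acid concentration and microwave settings as the dominant drivers of inhibitor formation.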
Procedia PDF Downloads 252

15362 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Modeling of composite consolidation processes plays an important role in process and part design by indicating the possible formation of unwanted defects prior to expensive experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Modeling and statistical errors arising from fitting these models will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments; there are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs.
With this, the paper represents, as far as the authors are aware, the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models
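The core MCMC idea — sampling a posterior over uncertain model coefficients rather than picking a single best fit — can be sketched minimally. The example below infers one coefficient from simulated noisy data with a random-walk Metropolis sampler; all data, priors and tuning constants are invented and stand in for the paper's hyperelastic-coefficient calibration.

```python
import math
import random

# Minimal Metropolis sketch (illustrative only): infer one uncertain model
# coefficient theta from noisy observations instead of point-fitting it.

random.seed(42)
true_theta = 2.0
data = [true_theta + random.gauss(0, 0.5) for _ in range(50)]  # simulated experiments

def log_posterior(theta):
    # Gaussian likelihood (known sigma = 0.5) plus a broad Gaussian prior N(0, 10).
    ll = sum(-0.5 * ((x - theta) / 0.5) ** 2 for x in data)
    lp = -0.5 * (theta / 10.0) ** 2
    return ll + lp

samples = []
theta = 0.0
for _ in range(5000):
    proposal = theta + random.gauss(0, 0.2)  # symmetric random-walk proposal
    # Accept with probability min(1, posterior ratio); work in log space.
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

burned = samples[1000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)
print(round(posterior_mean, 2))
```

The retained samples approximate the whole parameter distribution, which is what the hierarchical framework then embeds in the stochastic finite element simulation.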
Procedia PDF Downloads 149

15361 Applying and Connecting the Microgrid of Artificial Intelligence in the Form of a Spiral Model to Optimize Renewable Energy Sources
Authors: PR
Abstract:
Renewable energy is a sustainable substitute for fossil fuels, which are depleting and contributing to global warming and greenhouse gas emissions. Renewable energy technologies, including solar, wind, and geothermal, have grown significantly in recent years and play a critical role in meeting energy demands. Artificial Intelligence (AI) could further enhance the benefits of renewable energy systems: the combination of renewable technologies and AI could facilitate the development of smart grids that better manage energy distribution and storage. AI thus has the potential to optimize the efficiency and reliability of renewable energy systems, reduce costs, and improve their overall performance. The conventional methods of connecting smart microgrids are in series, in parallel, or in a combination of series and parallel, each with its own advantages and disadvantages. In this study, the proposal of connecting microgrids in a spiral manner is investigated. One of the important reasons for choosing this type of structure is the two-way reinforcement and exchange of each inner layer with the outer and upstream layer. With this model, energy can be increased from a small amount to a significant amount based on exponential functions; the geometry used to close the smart microgrids is based on nature. This study provides an overview of the applications of AI algorithms and models as well as their advantages and challenges in renewable energy systems.
Keywords: artificial intelligence, renewable energy sources, spiral model, optimization
Procedia PDF Downloads 21

15360 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India
Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel
Abstract:
Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner's primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3590000 m², with geographical coordinates of North Latitude 23°9’5’’N to 23°10’55’’N and East Longitude 72°42’2’’E to 72°42’16’’E. To develop 3D city models of GIFT city, the base map of the city was collected from the GIFT office. A Differential Geographical Positioning System (DGPS) was used to collect Ground Control Points (GCPs) from the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected in the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data were used to generate the terrain of GIFT city, and the Google Satellite Map was placed in the background to get the exact location of the city. Various plugins and tools in QGIS were used to convert the raster layer of the base map of GIFT city into a 3D model, and the fly-through tool was used for capturing and viewing the entire area of the city in 3D.
This paper discusses all of these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.
Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM
Procedia PDF Downloads 250

15359 Ranking All of the Efficient DMUs in DEA
Authors: Elahe Sarfi, Esmat Noroozi, Farhad Hosseinzadeh Lotfi
Abstract:
One of the important issues in Data Envelopment Analysis is the ranking of Decision Making Units (DMUs). In this paper, a method for ranking DMUs is presented in which the weights related to efficient units are chosen in such a way that the other units preserve a certain percentage of their efficiency under those weights. To this end, a model is presented for ranking DMUs on the basis of their super-efficiency, subject to the mentioned weight restrictions. The percentage can be determined by the decision maker; if a specific percentage is unsuitable, a suitable and feasible one can be found for ranking DMUs accordingly. Furthermore, the presented model is capable of ranking all of the efficient units, including non-extreme efficient ones. Finally, the presented models are applied to two sets of data and the related results are reported.
Keywords: data envelopment analysis, efficiency, ranking, weight
Procedia PDF Downloads 461

15358 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly; they are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping errors, which delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise-likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimations when compared with Capwire. BayesN does not consider the total number of recaptures as Capwire does, only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In these examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 148

15357 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck
Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu
Abstract:
In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, the force that opposes a body’s motion through the air. Aerodynamic drag, and thus fuel consumption, increases rapidly at speeds above 90 km/h. Aerodynamic drag reduction in highway driving is therefore the best approach to minimize fuel consumption and to reduce the negative impacts of greenhouse gas emissions on the natural environment; fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely cab roof and rear (trailer tail) fairings, and to investigate the aerodynamic effects of adding these append devices. To accomplish this, two 3D CAD models of the Mazda truck were designed using the Design Modeler, one with the append devices and the other without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. To simulate highway driving conditions, the tests were run at a speed of 100 km/h; the effects of the devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. By adding the cab roof and rear (trailer tail) fairings, the simulations show a significant reduction in aerodynamic drag at higher speed. The results show that the greatest drag reduction is obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer when moving at highway speed.
The rear fairing achieved this by streamlining the turbulent airflow, thereby delaying airflow separation. For lower speeds, there were no significant differences in drag coefficients between the two models (original and modified). The results show that these devices can be adopted to improve the aerodynamic efficiency of the Mazda T3500 truck at highway speeds.
Keywords: aerodynamic drag, computational fluid dynamics, Fluent, fuel consumption
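The drag-versus-speed relation underlying the study can be sketched with back-of-envelope arithmetic from F = ½ρv²C_dA. The drag coefficients and frontal area below are assumed round numbers for illustration, not the study's computed values.

```python
# Illustrative sketch (all vehicle numbers assumed, not from the CFD study):
# drag force F = 0.5 * rho * v^2 * Cd * A, and the power needed to overcome it.

def drag_power_kw(cd, frontal_area_m2, speed_kph, air_density=1.225):
    v = speed_kph / 3.6                                        # km/h -> m/s
    force = 0.5 * air_density * v ** 2 * cd * frontal_area_m2  # newtons
    return force * v / 1000.0                                  # power in kW

baseline = drag_power_kw(cd=0.70, frontal_area_m2=6.0, speed_kph=100)
with_fairings = drag_power_kw(cd=0.55, frontal_area_m2=6.0, speed_kph=100)
print(round(baseline, 1), round(with_fairings, 1))
```

Because drag power grows with the cube of speed, the same Cd reduction saves far more power at 100 km/h than in low-speed driving, which is consistent with the study finding no significant difference at lower speeds.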
Procedia PDF Downloads 143

15356 Habitat Model Review and a Proposed Methodology to Value Economic Trade-Off between Cage Culture and Habitat of an Endemic Species in Lake Maninjau, Indonesia
Authors: Ivana Yuniarti, Iwan Ridwansyah
Abstract:
This paper delivers a review of various methodologies for habitat assessment and proposes a methodology to assess the habitat of an endemic fish species in Lake Maninjau, Indonesia, as part of a Ph.D. project. The application is mainly aimed at assessing the trade-off between the economic value of aquaculture and that of the fisheries. The proposed methodology is a generalized linear model (GLM) combined with GIS to assess presence-absence data, or a habitat suitability index (HSI) combined with the analytic hierarchy process (AHP). Further, a cost-of-habitat-replacement approach is planned to be used to calculate the habitat value as well as its trade-off with the economic value of aquaculture. The result of the study is expected to be a scientific consideration in local decision making and to provide a reference for other areas in the country.
Keywords: AHP, habitat, GLM, HSI, Maninjau
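The AHP step of the proposed HSI+AHP route can be sketched minimally: criterion weights are derived from a pairwise-comparison matrix as its (normalised) principal eigenvector, approximated here by power iteration. The three habitat criteria and the judgment values are invented for illustration.

```python
# Hypothetical AHP sketch: derive criterion weights from a pairwise-comparison
# matrix via power iteration on the principal eigenvector. Criteria and
# judgments below are illustrative, not from the paper.

def ahp_weights(M, iterations=100):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]  # renormalise so the weights sum to 1
    return w

# Rows/columns: water depth, vegetation cover, food availability
# M[i][j] = how much more important criterion i is than criterion j (Saaty scale)
M = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]

w = ahp_weights(M)
print([round(x, 3) for x in w])
```

The resulting weights would multiply the per-criterion suitability scores to give a composite HSI surface in GIS.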
Procedia PDF Downloads 157

15355 Predicting Survival in Cancer: How Cox Regression Model Compares to Artificial Neural Networks?
Authors: Dalia Rimawi, Walid Salameh, Amal Al-Omari, Hadeel AbdelKhaleq
Abstract:
Prediction of the survival time of patients with cancer is a core factor that influences oncologists' decisions in different aspects, such as offered treatment plans, patients' quality of life and medication development. For a long time, proportional hazards Cox regression (ph. Cox) was, and still is, the most well-known statistical method to predict survival outcomes. But with the revolution in data science, new prediction models have been employed and proved to be more flexible, providing higher accuracy in this type of study. The artificial neural network is one of those models that is suitable for handling time-to-event prediction. In this study, we aim to compare ph. Cox regression with the artificial neural network method with respect to data handling and the accuracy of each model.
Keywords: Cox regression, neural networks, survival, cancer
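The quantity a Cox model maximises — the partial log-likelihood over event times — can be sketched on a toy dataset. The five subjects, times and covariate values below are invented, ties are ignored, and the one-dimensional grid search stands in for the usual Newton-type optimiser.

```python
import math

# Minimal sketch of Cox partial log-likelihood maximisation (toy data, no ties).

times  = [5.0, 8.0, 2.0, 12.0, 4.0]
events = [1,   1,   1,   0,    1  ]   # 0 = right-censored
x      = [2.0, 1.0, 3.0, 0.0,  0.5]  # single covariate

def cox_partial_loglik(beta):
    ll = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue  # censored subjects contribute only through risk sets
        # Risk set: everyone still under observation at the i-th event time.
        risk = [j for j in range(len(times)) if times[j] >= times[i]]
        ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll

# Crude grid search over beta in [-3, 3]
betas = [b / 100.0 for b in range(-300, 301)]
beta_hat = max(betas, key=cox_partial_loglik)
print(round(beta_hat, 2))
```

A neural-network survival model replaces the linear term beta * x with a learned function of the covariates, which is the flexibility gain the study sets out to measure.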
Procedia PDF Downloads 207

15354 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with covariates, where covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, shape parameter, survival function and hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to compare the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
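The censored Weibull likelihood at the heart of the estimator can be sketched without covariates: observed failures contribute the log density, censored observations contribute the log survival function. The data below are simulated, and a coarse grid search stands in for a proper optimiser.

```python
import math

# Illustrative sketch (no covariates; data invented): maximum likelihood for a
# Weibull(shape k, scale lam) with right censoring.
# density  f(t) = (k/lam) * (t/lam)^(k-1) * exp(-(t/lam)^k)
# survival S(t) = exp(-(t/lam)^k)

times  = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9, 6.1, 3.3, 5.0, 5.0]
events = [1,   1,   1,   0,   1,   1,   1,   1,   0,   0  ]  # 0 = right-censored

def log_likelihood(shape, scale):
    ll = 0.0
    for t, d in zip(times, events):
        z = (t / scale) ** shape
        if d:  # observed failure: log f(t)
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale) - z
        else:  # censored: log S(t)
            ll += -z
    return ll

# Coarse grid search over shape in [0.5, 5.9] and scale in [1.0, 9.9]
shape_hat, scale_hat = max(
    ((k / 10.0, lam / 10.0) for k in range(5, 60) for lam in range(10, 100)),
    key=lambda p: log_likelihood(*p),
)
print(shape_hat, scale_hat)
```

With a covariate, the scale would be modelled as lam = exp(b0 + b1 * x) and the same likelihood maximised over (k, b0, b1), which is the regression setting the paper studies.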
Procedia PDF Downloads 443

15353 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity
Authors: M. O. Durojaye, J. T. Agee
Abstract:
This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent function approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited for the electrical conductivity. Numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions.
Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines
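The method-of-lines idea — discretise space so the PDE becomes an ODE system, then time-step that system — can be sketched on a toy thermistor-like problem, u_t = u_xx + sigma(u), with a hyperbolic-tangent conductivity that switches heating off as the temperature rises. All constants and the specific sigma below are illustrative, not the paper's model; explicit Euler stands in for a proper ODE solver.

```python
import math

# Toy MOL sketch: u_t = u_xx + sigma(u) on [0, 1], u = 0 at both ends.

n, L = 21, 1.0
dx = L / (n - 1)
dt = 0.2 * dx * dx          # respects the explicit-Euler stability limit dt <= dx^2 / 2
u = [0.0] * n               # initial temperature profile

def sigma(v):
    # Illustrative tanh conductivity: ~1 for v well below 1, dropping to ~0 above 1.
    return 0.5 * (1.0 - math.tanh(10.0 * (v - 1.0)))

for _ in range(20000):      # march to (approximately) steady state
    u_new = u[:]
    for i in range(1, n - 1):
        u_xx = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
        u_new[i] = u[i] + dt * (u_xx + sigma(u[i]))
    u = u_new

print(round(max(u), 3))
```

Since the solution here stays well below the switching temperature, sigma is effectively 1 and the computed steady profile approaches the parabola u = x(1 - x)/2, peaking near 0.125, which is the kind of exact steady-state check the paper makes.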
Procedia PDF Downloads 324

15352 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the expressway length in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree that forces administrators, from the standpoint of operation and maintenance, to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been performed for bridge management in Japan and are also expected to be embodied in pavement management; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers around the road surface, such as the surface course and binder course, appears in the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today carry serious time-related deterioration deriving from the long time span since they entered service, the idea of repairing layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning large-scale maintenance. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting the deterioration process in pavements.
One describes the deterioration process by estimating a Markov deterioration hazard model, while another study illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers around the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by means of estimating a deterioration hazard model with continuous indexes. Using continuous indexes prevents the loss of information that occurs when setting rating categories, as in the Markov deterioration hazard model, when evaluating degrees of deterioration in roadbeds and subgrades. As a result, the model can predict deterioration in each layer of a pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point and establish a risk control level arbitrarily, it is expected that this study will provide knowledge, such as life cycle cost and informative content, for decisions on where and when to perform maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
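The hazard-model vocabulary used above can be illustrated with the simplest case: under a constant hazard rate per condition state, the probability of remaining in a state for t years is S(t) = exp(-lambda * t), and the expected sojourn time is 1/lambda. The states and rates below are invented, and a real Markov deterioration hazard model would chain such sojourns across states.

```python
import math

# Illustrative constant-hazard sketch (states and rates invented):
# S(t) = exp(-lambda * t); expected time in a state = 1 / lambda.

hazard_rates = {"sound": 0.05, "minor damage": 0.10, "severe damage": 0.25}  # per year

def survival(rate, years):
    """Probability of still being in the state after the given number of years."""
    return math.exp(-rate * years)

expected_life = {state: 1.0 / lam for state, lam in hazard_rates.items()}
p_sound_10yr = survival(hazard_rates["sound"], 10)
print(round(p_sound_10yr, 3), round(expected_life["sound"], 1))
```

The study's continuous-index formulation replaces the discrete states with a continuous condition measure, avoiding the information loss that the rating categories in this sketch introduce.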
Procedia PDF Downloads 393