Search results for: CA-Markov model
12969 Analysis and Design of Exo-Skeleton System Based on Multibody Dynamics
Authors: Jatin Gupta, Bishakh Bhattacharya
Abstract:
With the aging process, many people start suffering from weak limbs, resulting in mobility disorders and loss of sensory and motor function of the limbs. Wearable robotic devices are viable solutions to help people suffering from these issues by augmenting their strength. These robotic devices, popularly known as exoskeletons, aid the user by providing external power and controlling the dynamics so as to achieve the desired motion. The present work studies a simplified dynamic model of the human gait. A four-link open-chain kinematic model is developed to describe the dynamics of the Single Support Phase (SSP) of the human gait cycle. The dynamic model is developed by integrating mathematical models of the motion of inverted and triple pendulums. The stance leg is modeled as an inverted pendulum with a single degree of freedom and the swing leg as a triple pendulum with three degrees of freedom, viz. the thigh, knee, and ankle joints. The kinematic model is formulated using a forward kinematics approach. A Lagrangian approach is used to formulate the governing dynamic equations of the model. Since this yields a system of nonlinear differential equations, a numerical method is employed to obtain the system response. A reference trajectory is generated using the human body simulator LifeMOD. For the optimal mechanical design and controller design of an exoskeleton system, it is imperative to study the parameter sensitivity of the system. Six parameters, viz. thigh, shank, and foot masses and lengths, are varied from 85% to 115% of their original values in the present work. It is observed that the hip joint of the swing leg is the most sensitive and the ankle joint of the swing leg the least sensitive. Changing link lengths causes more deviation in the system response than changing link masses; shank length and thigh mass are the most sensitive parameters. Finally, the present study gives insight into the factors that should be considered while designing a lower extremity exoskeleton.
Keywords: lower limb exoskeleton, multibody dynamics, energy based formulation, optimal design
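A minimal sketch of the numerical workflow described here, reduced to a single inverted pendulum rather than the authors' four-link model: the nonlinear equation of motion is integrated with SciPy and a hypothetical link length is swept from 85% to 115% of its nominal value to gauge the sensitivity of the response.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single inverted pendulum about the ankle: theta'' = (g / L) * sin(theta).
# This stands in for the stance-leg sub-model only; values are illustrative.
g = 9.81

def pendulum(t, y, L):
    theta, omega = y
    return [omega, (g / L) * np.sin(theta)]

def peak_angle(L, theta0=0.05, t_end=0.6):
    sol = solve_ivp(pendulum, (0.0, t_end), [theta0, 0.0], args=(L,), max_step=1e-3)
    return np.max(np.abs(sol.y[0]))

L_nominal = 0.9  # hypothetical stance-leg length in metres
for factor in (0.85, 1.0, 1.15):            # the 85%-115% parameter sweep
    response = peak_angle(L_nominal * factor)
    print(f"L = {factor:4.2f} x nominal -> peak angle {response:.4f} rad")
```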
Procedia PDF Downloads 200
12968 Simplified Linear Regression Model to Quantify the Thermal Resilience of Office Buildings in Three Different Power Outage Day Times
Authors: Nagham Ismail, Djamel Ouahrani
Abstract:
Thermal resilience in the built environment reflects a building's capacity to adapt to extreme climate changes. In hot climates, power outages in office buildings pose risks to the health and productivity of workers. Therefore, it is of interest to quantify the thermal resilience of office buildings by developing a user-friendly simplified model. This simplified model begins with creating an assessment metric of thermal resilience that measures the duration between the power outage and the point at which the thermal habitability condition is compromised, considering different power interruption times (morning, noon, and afternoon). In this context, energy simulations of an office building are conducted for Qatar's summer weather by changing different parameters related to (i) wall characteristics, (ii) glazing characteristics, (iii) load, (iv) orientation and (v) air leakage. The simulation results are processed using SPSS to derive linear regression equations, aiding stakeholders in evaluating the performance of commercial buildings during different power interruption times. The findings reveal the significant influence of glazing characteristics on thermal resilience, with the morning power outage scenario posing the most detrimental impact in terms of the shortest duration before thermal resilience is compromised.
Keywords: thermal resilience, thermal envelope, energy modeling, building simulation, thermal comfort, power disruption, extreme weather
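A minimal sketch of the regression step described above, fitted with statsmodels rather than SPSS; the envelope predictors and the simulated resilience durations are hypothetical stand-ins for the paper's simulation outputs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "glazing_shgc": rng.uniform(0.2, 0.8, n),      # solar heat gain coefficient
    "wall_u_value": rng.uniform(0.3, 2.5, n),      # W/m2K
    "air_leakage_ach": rng.uniform(0.2, 2.0, n),   # air changes per hour
    "internal_load_wm2": rng.uniform(10, 40, n),
})
# Hypothetical "hours until habitability is lost" for a morning outage scenario.
df["resilience_hours"] = (8 - 5 * df["glazing_shgc"] - 1.2 * df["wall_u_value"]
                          - 1.0 * df["air_leakage_ach"] - 0.05 * df["internal_load_wm2"]
                          + rng.normal(0, 0.5, n))

X = sm.add_constant(df.drop(columns="resilience_hours"))
model = sm.OLS(df["resilience_hours"], X).fit()
print(model.summary())   # coefficients give the simplified linear regression equation
```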
Procedia PDF Downloads 76
12967 A Transformer-Based Question Answering Framework for Software Contract Risk Assessment
Authors: Qisheng Hu, Jianglei Han, Yue Yang, My Hoa Ha
Abstract:
When a company is considering purchasing software for commercial use, contract risk assessment is critical to identify risks and mitigate the potential adverse business impact, e.g., security, financial and regulatory risks. Contract risk assessment requires reviewers with specialized knowledge and time to evaluate the legal documents manually. Specifically, validating contracts for a software vendor requires the following steps: manual screening, interpreting legal documents, and extracting risk-prone segments. To automate the process, we propose a framework to assist legal contract risk identification, leveraging pre-trained deep learning models and natural language processing techniques. Given a set of pre-defined risk evaluation questions, our framework utilizes pre-trained transformer-based question-answering models to identify risk-prone sections in a contract. The question-answering model encodes the concatenated question-contract text and predicts the start and end positions for clause extraction. Due to the limited labelled dataset available for training, we leverage transfer learning by fine-tuning the models on the CUAD dataset. On a dataset comprising 287 contract documents and 2,000 labelled samples, our best model achieved an F1 score of 0.687.
Keywords: contract risk assessment, NLP, transfer learning, question answering
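A minimal sketch of the extractive question-answering step using the Hugging Face transformers pipeline; the checkpoint name and the clause text are illustrative assumptions, not the authors' CUAD fine-tuned model.

```python
from transformers import pipeline

# Extractive QA over a contract clause (assumed, publicly available checkpoint).
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

contract_text = (
    "The Vendor shall indemnify the Customer against all losses arising from "
    "any breach of data protection obligations under this Agreement."
)
risk_questions = [
    "Which clause describes indemnification obligations?",
    "Which clause covers data protection or security risk?",
]

for q in risk_questions:
    ans = qa(question=q, context=contract_text)
    # `ans` holds the extracted span plus character offsets and a confidence score.
    print(q, "->", ans["answer"], f"(score={ans['score']:.2f}, span={ans['start']}-{ans['end']})")
```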
Procedia PDF Downloads 129
12966 New Model of Immersive Experiential Branding for International Universities
Authors: Kakhaber Djakeli
Abstract:
For market leadership, iconic brands have already started to establish unique digital avatars in the Metaverse and to offer Non-Fungible Tokens to their fans. The Metaverse can be defined as an evolutionary step in the development of the Internet. So if companies and brands use the Internet, logically, they can find new solutions for themselves and their customers in the Metaverse. Marketing and management today must learn how to combine physical-world activities with those described as digital, virtual, and immersive. A 'phygital' solution uniting the physical and digital competitive activities of a company, covering how to use virtual worlds for brand development and Non-Fungible Tokens for greater attractiveness, will soon be the most relevant question for branding. Thinking comprehensively, we can call this type of branding immersive. Immersive brands give customers more mesmerizing experiences than traditional ones. Accordingly, branding can be divided, in a company's own understanding, into two models: traditional and immersive. Immersive branding, being more directed to the sensorial challenges of humans, will be a major task for international universities in the near future because they target Generation Z. To help international universities open the door to mesmerizing, immersive branding, a marketing research study was undertaken. The main goal of the study was to establish a model for immersive branding at international universities and to answer the many questions that logically arise in university life. A Delphi-type expert survey was undertaken with one great mission: to help international universities open up opportunities for phygital activities with reliable knowledge and a model of immersive branding. The questionnaire sent to education experts covered professional questions ranging from education to customer segmentation, branding, attitudes toward students, and knowledge of immersive marketing. The research results were interesting and encouraging enough for the author to establish the new model of immersive experiential branding for international universities.
Keywords: branding, immersive marketing, students, university
Procedia PDF Downloads 81
12965 Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
Authors: Abdullah A. AlShaher
Abstract:
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a shape class. Hence, a least-squares method is used to estimate such modes. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Keywords: character recognition, regression curves, handwritten Arabic letters, expectation maximization algorithm
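A minimal sketch of the core idea with hypothetical 2D landmark data: a second-order polynomial curve is fitted to each class's landmarks by least squares, and a test shape is assigned to the class whose curve gives the smallest displacement error. The EM refinement over a training set is omitted.

```python
import numpy as np

# Hypothetical landmark sets (x, y) for two character classes and one test shape.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
class_a = np.column_stack([x, 0.8 * x**2 + 0.1 * x + rng.normal(0, 0.02, 20)])
class_b = np.column_stack([x, -0.5 * x**2 + 0.4 * x + rng.normal(0, 0.02, 20)])
test    = np.column_stack([x, 0.75 * x**2 + 0.12 * x + rng.normal(0, 0.03, 20)])

def fit_quadratic(landmarks):
    # Least-squares fit of y = c0 + c1*x + c2*x^2 to the landmark coordinates.
    X = np.column_stack([np.ones(len(landmarks)), landmarks[:, 0], landmarks[:, 0] ** 2])
    coeffs, *_ = np.linalg.lstsq(X, landmarks[:, 1], rcond=None)
    return coeffs

def displacement_error(landmarks, coeffs):
    predicted = coeffs[0] + coeffs[1] * landmarks[:, 0] + coeffs[2] * landmarks[:, 0] ** 2
    return np.sum((landmarks[:, 1] - predicted) ** 2)

models = {"class A": fit_quadratic(class_a), "class B": fit_quadratic(class_b)}
errors = {name: displacement_error(test, c) for name, c in models.items()}
print(errors, "->", min(errors, key=errors.get))
```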
Procedia PDF Downloads 145
12964 Count Data Regression Modeling: An Application to Spontaneous Abortion in India
Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan
Abstract:
Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables, there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortions among women in the Punjab state of India. It also assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included 27,173 married women of Punjab obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), negative binomial (NB) regression, zero hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify the determinants affecting the number of spontaneous abortions. Results: Statistical comparisons among the four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Antenatal care (ANC) place, place of residence, total children born to a woman, woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. The ZINB model provided the best prediction for the number of spontaneous abortions and is recommended for this purpose. The study suggests that women receive institutional antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.
Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression
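A minimal sketch of a zero-inflated negative binomial fit with statsmodels, using simulated covariates as stand-ins for the DLHS-4 variables; in practice the Poisson, NB, hurdle and ZINB fits would be compared on criteria such as AIC, as the authors do.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Hypothetical covariates standing in for the survey variables used in the paper.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "education_years": rng.integers(0, 16, n),
    "children_born": rng.integers(0, 6, n),
    "urban": rng.integers(0, 2, n),
})
# Simulated zero-heavy count outcome, for illustration only.
lam = np.exp(-1.5 + 0.03 * df["education_years"] + 0.1 * df["children_born"])
df["spontaneous_abortions"] = np.where(rng.random(n) < 0.6, 0, rng.poisson(lam))

X = sm.add_constant(df[["education_years", "children_born", "urban"]])
zinb = ZeroInflatedNegativeBinomialP(df["spontaneous_abortions"], X, exog_infl=X, p=2)
result = zinb.fit(maxiter=200, disp=False)
print(result.summary())
print("AIC:", result.aic)  # compare against Poisson / NB / hurdle fits in the same way
```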
Procedia PDF Downloads 155
12963 The Effect of Metformin in Combination with Dexamethasone on the CXCR4 Level in Multiple Myeloma Cell Line
Authors: Seyede Sanaz Seyedebrahimi, Shima Rahimi, Shohreh Fakhari, Ali Jalili
Abstract:
Background: CXCR4, as a chemokine receptor, plays well-known roles in various types of cancer. Several studies have been conducted to counteract the actions of the CXCR4 axis in multiple myeloma (MM) pathogenesis and progression. Dexamethasone, a standard treatment for multiple myeloma, has been shown to increase CXCR4 levels in multiple myeloma cell lines. Herein, we focused on the effects of metformin and dexamethasone on CXCR4 at the cellular level and on the migration rate of cell lines after exposure to the combination compared to single-agent models. Materials and Methods: Multiple myeloma cell lines (U266 and RPMI8226) were cultured with different metformin and dexamethasone concentrations in single-agent and combination models. The simultaneous combination doses were calculated by CompuSyn software. Cell-surface and mRNA expression of CXCR4 were determined using flow cytometry and the quantitative reverse transcription-polymerase chain reaction (qRT-PCR) assay, respectively. The Transwell cell migration assay evaluated migration ability. Results: In agreement with previous studies, our results showed a dose-dependent up-regulation effect of dexamethasone on CXCR4. In contrast, the metformin single-agent model reduced CXCR4 expression of U266 and RPMI8226 at both the cell-surface and mRNA expression levels. Moreover, administering metformin and dexamethasone simultaneously exerted a stronger suppression effect on CXCR4 expression than the metformin single-agent model. The migration rate through the combination model's Matrigel membrane was remarkably lower than in the metformin and dexamethasone single-agent models. Discussion: According to our findings, the combination of metformin and dexamethasone effectively inhibited dexamethasone-induced CXCR4 expression in multiple myeloma cell lines. As a result, metformin may be considered as a candidate to combine with other chemotherapies to combat multiple myeloma. However, more research is required.
Keywords: CXCR4, dexamethasone, metformin, migration, multiple myeloma
Procedia PDF Downloads 156
12962 Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise
Authors: Yasser F. Hassan
Abstract:
The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for automatic inspection and analysis. This paper describes a modeling system based on the rough neural network model for adapting cellular automata to various image processing tasks and noise removal. We consider the problem of object processing in colored images, using rough neural networks to help derive the rules which will be used in the cellular automata for noisy images. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable to or better than established specialized algorithms.
Keywords: rough sets, rough neural networks, cellular automata, image processing
Procedia PDF Downloads 439
12961 Parent’s Expectations and School Achievement: Longitudinal Perspective among Chilean Pupils
Authors: Marine Hascoet, Valentina Giaconi, Ludivine Jamain
Abstract:
The aim of our study is to examine whether family socio-economic status (SES) has an influence on students' academic achievement. We first hypothesize that the more financial and social resources families have, the more students succeed at school. We second hypothesize that family SES also has an impact on parents' expectations about their children's educational outcomes. Moreover, we want to study whether parents' expectations play the role of mediator between parents' socio-economic status and the students' self-concept and academic outcomes. We test this model with a longitudinal design thanks to the census-based assessment from the System of Measurement of the Quality of Education (SIMCE). The SIMCE tests aim to assess all students attending regular education at a defined level. The sample used in this study came from the SIMCE assessments done three times: in 4th, 8th and 11th grade during the years 2007, 2011 and 2014 respectively. It includes 156,619 students (75,084 boys and 81,535 girls) who had valid responses for the three years. The family socio-economic status was measured at the first assessment (in 4th grade). The parents' educational expectations and the students' self-concept were measured at the second assessment (in 8th grade). The achievement score was measured twice: once when children were in 4th grade and a second time when they were in 11th grade. To test our hypotheses, we defined a structural equation model. We found that our model fits the data well (CFI = 0.96, TLI = 0.95, RMSEA = 0.05, SRMR = 0.05). Both family SES and prior achievement predict parents' educational expectations, and the effect of SES is important in comparison with the other coefficients. These expectations predict students' achievement three years later (with prior achievement controlled) but not their self-concept. Our model explains 51.9% of the achievement in the 11th grade. Our results confirm the importance of parents' expectations and the significant role of socio-economic status in students' academic achievement in Chile.
Keywords: Chilean context, parent’s expectations, school achievement, self-concept, socio-economic status
Procedia PDF Downloads 141
12960 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach
Authors: Godwin Chigozie Okpara
Abstract:
This paper estimates Value at Risk (VaR) using standard GARCH, EGARCH, and TARCH model variants on a day-of-the-week return series (of 246 days) from the Nigerian stock market. The asymmetric return distribution and the fat-tail phenomenon in financial time series were considered by estimating the models with normal, Student's t and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with the Student's t innovation distribution can furnish a more accurate estimate of VaR. In the light of this, we apply likelihood ratio tests of proportional failure rates to the VaR derived from the EGARCH model in order to determine the short- and long-position VaR performances. The result shows that as alpha ranges from 0.05 to 0.005 for short positions, the failure rate significantly exceeds the prescribed quantiles, while no significant difference between the failure rate and the prescribed quantiles is found for long positions. This suggests that investors and portfolio managers in the Nigerian stock market hold long trading positions, or can buy assets with concern for when asset prices will fall. Precisely, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models
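A minimal sketch of a similar pipeline using the Python arch package: an EGARCH(1,1) model with Student's t innovations is fitted, a long-position VaR is read off, and a Kupiec proportion-of-failures LR test is run on the violations. The return series is simulated here, and the standardized-t quantile is approximated by the plain Student's t for brevity.

```python
import numpy as np
from arch import arch_model
from scipy.stats import chi2, t as student_t

# Simulated daily percentage returns standing in for the 246-day series.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=246)

# EGARCH(1,1) with Student's t innovations, the specification favoured by AIC in the paper.
res = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t").fit(disp="off")

alpha = 0.05
sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
var_long = returns.mean() + sigma * student_t.ppf(alpha, res.params["nu"])  # long-position VaR

# Kupiec proportion-of-failures (POF) LR test on in-sample violations of the VaR level.
n = len(returns)
x = int((returns < var_long).sum())                      # number of violations
pi_hat = max(x / n, 1e-10)
lr_pof = -2 * ((n - x) * np.log(1 - alpha) + x * np.log(alpha)
               - (n - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
print(f"VaR({alpha:.0%}) = {var_long:.2f}%, violations = {x}, "
      f"Kupiec LR = {lr_pof:.2f}, p = {1 - chi2.cdf(lr_pof, df=1):.3f}")
```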
Procedia PDF Downloads 443
12959 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise
Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim
Abstract:
Previous works on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g. active-passive, individual-group). To overcome these problems, this study proposed a Korean leisure activities matrix model that considers multidimensional features of leisure activities and comprises 3 main factors and 6 sub-factors: (a) Active (physical, mental), (b) Relational (quantity, quality), (c) Expert (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated the presented dimensions for a total of 78 leisure activities which have been enjoyed by most Koreans recently (e.g. watching movies, taking a walk, watching media). The study sample consisted of 1348 people (726 men, 658 women) ranging in age from teenagers to elderly people in their seventies. This study gathered 60 responses for each leisure activity, a total of 4860 data points, which were used for statistical analysis. First, this study compared the fit of a 3-factor model (Activity, Relation, Expertise) with that of a 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model for leisure activities was a better fit for the data. This result indicates that it is adequate to take account of enough dimensions of leisure activities (six dimensions in our study) to specifically apprehend each activity's attributes. In addition, the 78 leisure activities were cluster-analyzed with the scores calculated based on the 6-factor model, which resulted in 8 leisure activity groups. Cluster 1 (e.g. group sports, group musical activity) and Cluster 5 (e.g. individual sports) had generally higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g. SNS, shopping) and Cluster 6 (e.g. playing a lottery, taking a nap) had low scores on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g. machine operating, handwork/invention) required high expertise and mental activity, but low physical activity. Cluster 4 indicated high mental activity and relational quantity despite low expertise. Cluster 7 (e.g. tour, joining festivals) required moderate degrees of physical activity and relation, but low expertise. Lastly, Cluster 8 (e.g. meditation, information searching) showed high mental activity. Even though the clusters of our study had a few similarities with preexisting taxonomies of leisure activities, there was clear distinctiveness between them. Unlike the preexisting taxonomies that had been created subjectively, we sorted the 78 leisure activities based on objective figures on the six dimensions. We could also identify that some leisure activities which used to belong to the same leisure group were included in different clusters (e.g. field ball sports, net sports) because of different features. In other words, the results can provide a different perspective on leisure activities research and be helpful for figuring out what various characteristics leisure participants have.
Keywords: leisure, dimensional model, activity, relationship, expertise
Procedia PDF Downloads 311
12958 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of normal and abnormal chest X-ray (CXR) images, this study introduces a distinctive technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic CXR images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics, with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
Keywords: CNN, classification, deep learning, GAN, Resnet50
Procedia PDF Downloads 88
12957 Physico-Chemical Characterization of an Algerian Biomass: Application in the Adsorption of an Organic Pollutant
Authors: Djelloul Addad, Fatiha Belkhadem Mokhtari
Abstract:
The objective of this work is to study the retention of methylene blue (MB) by a biomass. The biomass is characterized by X-ray diffraction (XRD) and Fourier-transform infrared spectroscopy (FTIR). Results show that the biomass contains organic and mineral substances. The effect of certain physicochemical parameters on the adsorption of MB, such as the pH, is studied. This study shows that an increase in the initial concentration of MB leads to an increase in the adsorbed quantity. The adsorption efficiency of MB decreases with increasing biomass mass. The adsorption kinetics show that adsorption is rapid, and the maximum amount is reached after 120 min of contact time. It is noted that the pH has no great influence on the adsorption. The isotherms are best modelled by the Langmuir model. The adsorption kinetics follow the pseudo-second-order model. The thermodynamic study of adsorption shows that the adsorption is spontaneous and exothermic.
Keywords: dyes, adsorption, biomass, methylene blue, Langmuir
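A minimal sketch of how the isotherm and kinetic models named above are typically fitted, using hypothetical equilibrium and kinetic data rather than the study's measurements: a nonlinear Langmuir fit for the isotherm and a pseudo-second-order fit for the kinetics.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
Ce = np.array([5., 10., 20., 40., 80., 120.])
qe = np.array([8.2, 13.1, 18.4, 22.0, 24.1, 24.8])

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[25.0, 0.1])
print(f"Langmuir fit: qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")

# Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t), hypothetical data.
t = np.array([5., 15., 30., 60., 90., 120.])          # contact time, min
qt = np.array([6.0, 12.5, 17.8, 21.5, 23.0, 23.6])     # adsorbed amount, mg/g

def pso(t, qe_k, k2):
    return k2 * qe_k**2 * t / (1.0 + k2 * qe_k * t)

(qe_k, k2), _ = curve_fit(pso, t, qt, p0=[24.0, 0.01])
print(f"Pseudo-second-order fit: qe = {qe_k:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```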
Procedia PDF Downloads 67
12956 Structural Analysis and Modelling in an Evolving Iron Ore Operation
Authors: Sameh Shahin, Nannang Arrys
Abstract:
Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With a growing demand for minerals and an increasing cost associated with extraction, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Utilizing innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection to a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without the consideration of complexity and the capability of capturing these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in newly identified dip-slope and rock-mass controlled sectors of the geotechnical design rather than a ubiquitous dip-slope sector across the pit. The reward is two-fold: 1) providing sectors of rock-mass controlled design in previously interpreted structurally controlled domains and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio. Furthermore, the result is a high-confidence model with structures and geometries that can account for historic slope instabilities in structurally controlled domains where design assumptions failed.
Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation
Procedia PDF Downloads 48
12955 Body of Dialectics: Exploring a Dynamic-Adaptational Model of Physical Self-Integrity and the Pursuit of Happiness in a Hostile World
Authors: Noam Markovitz
Abstract:
People with physical disabilities constitute a very large and, at the same time, diverse group of the general population, as the term physical disabilities is extensive and covers a wide range of disabilities. Individuals with physical disabilities are often faced with a new, threatening and stressful reality, possibly leading to a multi-crisis in their lives due to the great changes they experience at the somatic, socio-economic, occupational and psychological levels. The current study seeks to advance understanding of the complex adaptation to physical disabilities by expanding the dynamic-adaptational model of the pursuit of happiness in a hostile world with a new conception of physical self-integrity. Physical self-integrity incorporates an objective dimension, namely physical self-functioning (PSF), and a subjective dimension, namely physical self-concept (PSC). Both of these dimensions constitute an experience of wholeness in the individual's identification with her or his physical body. The model guiding this work is dialectical in nature and depicts two systems in the individual's sense of happiness: subjective well-being (SWB) and meaning in life (MIL). Both systems serve as self-adaptive agents that moderate the complementary system of the hostile-world scenario (HWS), which integrates one's perceived threats to one's integrity. Thus, in situations of increased HWS, the moderation may take the form of joint activity, in which SWB and MIL are amplified, or a form of compensation, in which one system produces a stronger effect while the other system produces a weaker effect. The current study investigated PSC in relation to SWB and MIL through pleasantness and meanings that are physically or metaphorically grounded in one's body. In parallel, PSC also relates to HWS by activating representations of inappropriateness, deformation and vulnerability. In view of possibly dialectical positions of opposing and complementary forces within the model, a field study was conducted to explore PSC in an independent, cross-sectional design addressing the model's variables in a focal group of people with physical disabilities. This study delineated the participation of PSC in the adaptational functions of SWB and MIL vis-à-vis HWS-related life adversities. The findings showed that PSC could fully complement the main variables of the pursuit of happiness in a hostile world model. The assumed dialectics, in the form of a stronger relationship between SWB and MIL in the face of physical disabilities, was not supported. However, it was found that when HWS increased, PSC and MIL were strongly linked, whereas PSC and SWB were weakly linked. This highlights the compensatory role of MIL. From a conceptual viewpoint, the current investigation may clarify the role of PSC as an adaptational agent of the individual's positive health in complementary senses of bodily wholeness. Methodologically, the advantage of the current investigation is the application of an integrative, model-based approach within a specially focused design with particular relevance to PSC. Moreover, from an applicative viewpoint, the current investigation may suggest how an innovative model may be translated into therapeutic interventions used by clinicians, counselors and practitioners in improving wellness and psychological well-being, particularly among people with physical disabilities.
Keywords: older adults, physical disabilities, physical self-concept, pursuit of happiness in a hostile-world
Procedia PDF Downloads 150
12954 Effect of Climate Change on Runoff in the Upper Mun River Basin, Thailand
Authors: Preeyaphorn Kosa, Thanutch Sukwimolseree
Abstract:
Climate change is a main parameter affecting the elements of the hydrological cycle, especially runoff. The purpose of this study is therefore to determine the impact of climate change on surface runoff using the 2008 land use map and daily weather data from January 1, 1979 to September 30, 2010 in the SWAT model. SWAT is a continuous-time simulation model that operates on a daily time step at the basin scale. The results show that the effect of temperature change on runoff cannot be clearly identified, whereas rainfall, relative humidity and evaporation are the parameters to consider for runoff change. If rainfall and relative humidity increase, runoff also increases. On the other hand, if evaporation increases, runoff decreases.
Keywords: climate, runoff, SWAT, upper Mun River basin
Procedia PDF Downloads 396
12953 Social Business Model: Leveraging Business and Social Value of Social Enterprises
Authors: Miriam Borchardt, Agata M. Ritter, Macaliston G. da Silva, Mauricio N. de Carvalho, Giancarlo M. Pereira
Abstract:
This paper aims to analyze the barriers faced by social enterprises and, based on that, to propose a social business model framework that helps them to leverage their businesses and the social value delivered. A business model for social enterprises should amplify value perception, including social value for the beneficiaries, while generating enough profit to scale the business. Most of the social value beneficiaries are people from the base of the economic pyramid (BOP) or people with specific needs. Because of this, products and services should be affordable to consumers while solving the social needs of the beneficiaries. Developing products and services with social value requires tight relationships among the social enterprises and universities, public institutions, accelerators, and investors. Despite being focused on social value and contributing to the beneficiaries' quality of life, as well as to governments that cannot properly guarantee public services and infrastructure to the BOP, social enterprises face many barriers to scaling their businesses. This is a work in progress, and five micro- and small-sized social enterprises in Brazil have been studied: (i) the first has developed a kit for cervical uterine cancer detection that allows BOP women to collect their own material and deliver it to a laboratory for US$1.00; (ii) the second has developed special lactose-free products that are about 70% cheaper than the traditional brands in the market; (iii) the third has developed prostheses and orthoses to meet needs that the public health system has not served efficiently; (iv) the fourth has produced and commercialized menstrual panties, aiming to reduce the consumption of disposable ones while saving money for consumers; (v) the fifth develops and commercializes clothes made from fabric waste in partnership with BOP artisans. The preliminary results indicate that the main barriers are related to the public system not recognizing that public money could be saved if these enterprises' products were bought instead of those of multinational pharmaceutical companies; to the traditional distribution system (e.g. pharmacies), which avoids these products because of low or non-existent profit; to the difficulty of buying raw material in small quantities; to leveraging investment from investors; and to cultural barriers and taboos. Interesting strategies to reduce costs have been observed: some enterprises have focused on simplifying products, while others have invested in partnerships with local producers and have developed their own machines focusing on process efficiency to leverage investment by investors.
Keywords: base of the pyramid, business model, social business, social business model, social enterprises
Procedia PDF Downloads 101
12952 Impact of Urbanization on the Performance of Higher Education Institutions
Authors: Chandan Jha, Amit Sachan, Arnab Adhikari, Sayantan Kundu
Abstract:
The purpose of this study is to evaluate the performance of Higher Education Institutions (HEIs) of India and examine the impact of urbanization on the performance of HEIs. In this study, Data Envelopment Analysis (DEA) has been used, and the authors have collected the required data related to performance measures from the National Institutional Ranking Framework web portal. The authors have evaluated the performance of HEIs by using two different DEA models. In the first model, the geographic locations of the institutes have been categorized into two categories, i.e., urban vs. non-urban. In the second model, these geographic locations have been classified into three categories, i.e., urban, semi-urban, and non-urban. The findings of this study provide several insights related to the degree of urbanization and the performance of HEIs.
Keywords: DEA, higher education, performance evaluation, urbanization
Procedia PDF Downloads 215
12951 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used for building clusters of documents for more than a decade, yet problems occur in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of topics obtained during the construction of topic models. From previous works, the authors observed that most of the models used in determining the number of topics are non-parametric, with the quality of topics determined using perplexity and coherence measures, and concluded that these are not applicable to solving this problem. In this paper, we used a parametric method, which is an extension of the traditional topic model with visual access tendency, for visualization of the number of topics (clusters) to complement clustering and to choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated with different Twitter datasets on various topics for obtaining the optimal number of topics and for measuring the quality of clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring the quality of clusters with validity indices.
Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
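A minimal sketch of the underlying idea using plain NMF from scikit-learn rather than the authors' VNMF: topic models are fitted over toy tweets for several candidate numbers of topics, and a cluster validity index (here the silhouette score on the dominant-topic assignment) is used to compare them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics import silhouette_score

# Toy "tweets" standing in for a real Twitter dataset.
tweets = [
    "new phone battery lasts all day", "battery drains fast on this phone",
    "great football match last night", "the match ended in a late goal",
    "stock market rallies on tech earnings", "tech stocks drive the market higher",
    "rain expected all week in the city", "weather forecast says heavy rain",
]
X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

for k in range(2, 6):
    nmf = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
    W = nmf.fit_transform(X)                 # document-topic weights
    labels = W.argmax(axis=1)                # hard assignment to the dominant topic
    if len(set(labels)) > 1:
        score = silhouette_score(X, labels)  # one possible cluster validity index
        print(f"k={k}: silhouette={score:.3f}")
```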
Procedia PDF Downloads 134
12950 Vulnerability Assessment of Healthcare Interdependent Critical Infrastructure Coloured Petri Net Model
Authors: N. Nivedita, S. Durbha
Abstract:
Critical Infrastructure (CI) consists of services and technological networks such as healthcare, transport, water supply, electricity supply, information technology, etc. These systems are necessary for the well-being and effective functioning of society. Critical Infrastructures can be represented as nodes in a network, connected through a set of links depicting the logical relationships among them; these nodes are interdependent and interact with each other at various levels, such that the state of each infrastructure influences or is correlated to the state of another. Disruption in the service of one infrastructure node of the network during a disaster would lead to cascading and escalating disruptions across the other infrastructure nodes in the network. Healthcare Infrastructure is one such Critical Infrastructure that depends upon a complex interdependent network of other Critical Infrastructures, and during disasters it is vital for the Healthcare Infrastructure to be protected, accessible and prepared for mass casualties. To reduce the consequences of a disaster on the Critical Infrastructure and to ensure a resilient Critical Health Infrastructure network, knowledge, understanding, modeling and analysis of the interdependencies between the infrastructures are required. The paper presents interdependencies related to Healthcare Critical Infrastructure based on a Hierarchical Coloured Petri Nets modeling approach, given a flood scenario as the disaster which would disrupt the infrastructure nodes. The model properties are analyzed for the various state changes which occur when there is a disruption or damage to any of the Critical Infrastructures. The failure probabilities for the failure risk of the interconnected systems are calculated by deriving a reachability graph, which is later mapped to a Markov chain. By analytically solving and analyzing the Markov chain, the overall vulnerability of the Healthcare CI HCPN model is demonstrated. The entire model is integrated with a geographic information-based decision support system to visualize the dynamic behavior of the interdependent Healthcare and related CI network in a geographically based environment.
Keywords: critical infrastructure interdependency, hierarchical coloured Petri net, healthcare critical infrastructure, Petri nets, Markov chain
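A minimal sketch of the final analytical step, assuming a hypothetical reachability graph already mapped to a small discrete-time Markov chain over states of the healthcare network; the long-run probability of sitting in the failure state gives one simple vulnerability figure.

```python
import numpy as np

# Hypothetical states of the healthcare CI network and their transition matrix.
states = ["all services up", "power disrupted", "water disrupted", "healthcare failure"]
P = np.array([
    [0.90, 0.05, 0.04, 0.01],
    [0.30, 0.55, 0.05, 0.10],
    [0.35, 0.05, 0.52, 0.08],
    [0.40, 0.05, 0.05, 0.50],   # failure is recoverable here, so the chain is ergodic
])

# Stationary distribution pi solves pi P = pi with the entries summing to 1.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
for s, p in zip(states, pi):
    print(f"{s}: long-run probability {p:.3f}")
print("Overall vulnerability (share of time in failure state):", round(pi[-1], 3))
```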
Procedia PDF Downloads 529
12949 Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups
Authors: Sarah Mueller-Saegebrecht, André Brendler
Abstract:
Business model innovation (BMI), as an intentional change of an existing business model (BM) or the design of a new BM, is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources. However, first studies also show that they react faster to the pandemic than established firms. BMI represents a strategy to successfully handle such threatening dynamic changes. Entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers one can expect during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The research gap addressed by this study is two-fold. First, foundations exist on how different crises impact BM change intention, yet their analysis lacks the inclusion of barriers. In particular, entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential to predict managerial reactions. Moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research is based on qualitative methodologies; thus, quantitative work is lacking that could specify and confirm these qualitative findings. By focusing on the crisis context, this study contributes to the BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis? Two distinct research streams in the economic literature explain how individuals react when perceiving a threat. "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, while the opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior when facing a potential loss. This study quantitatively tests which theory can best predict managers' BM reaction to a perceived crisis. From three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-ups' strategic and operational functioning. Afterwards, each interviewee described crises that had already affected their BM. The participants explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived them, assessed on a 5-point Likert scale. In contrast to current research, the results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, whereby BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.
Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory
Procedia PDF Downloads 94
12948 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions
Authors: Daneal Rorke, Gueguim Kana
Abstract:
The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are inundated with the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed and gave a coefficient of determination (R2) of 0.76, producing an optimal RS yield of 2.74 g FS/g substrate. The development of an intelligent model to predict volatile compound fractions gave R2 values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration and S:L ratio, while phenol showed high sensitivity to microwave duration and intensity as well. These findings illustrate the potential of using an intelligent model to predict the volatile compound fraction profile of compounds generated during pre-treatment of sorghum leaves in order to establish a more robust and efficient pre-treatment regime for biofuel production.
Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves
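A minimal sketch of the kind of intelligent model the keywords point to (a small artificial neural network regressor), trained on simulated pre-treatment settings and a toy volatile fraction rather than the study's experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0.5, 5.0, n),    # acid concentration (% w/v)
    rng.uniform(0.5, 5.0, n),    # alkali concentration (% w/v)
    rng.uniform(5, 20, n),       # solid-to-liquid (S:L) ratio
    rng.uniform(1, 10, n),       # microwave duration (min)
    rng.uniform(100, 800, n),    # microwave intensity (W)
])
# Toy "furfural fraction" response with noise, for illustration only.
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * X[:, 3] + 0.001 * X[:, 4] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("Test R^2 for the simulated volatile fraction:", round(r2_score(y_te, model.predict(X_te)), 3))
```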
Procedia PDF Downloads 248
12947 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating possible unwanted formations prior to expensive iterative experimental trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Modeling and statistical errors which arise from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that the distribution of samples can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models
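A minimal sketch of the MCMC calibration idea, not the paper's hierarchical scheme or FEniCS model: random-walk Metropolis sampling of the posterior over the coefficients of a simple polynomial constitutive law fitted to synthetic noisy measurements.

```python
import numpy as np

# Synthetic "material law" q(x) = a + b*x + c*x^2 and noisy observations of it.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, 0.5, -0.2])
x = np.linspace(0, 2, 25)
y_obs = np.polyval(true_theta[::-1], x) + rng.normal(0, 0.05, x.size)

def log_posterior(theta, sigma=0.05):
    y_model = theta[0] + theta[1] * x + theta[2] * x**2
    log_like = -0.5 * np.sum((y_obs - y_model) ** 2) / sigma**2
    log_prior = -0.5 * np.sum(theta**2) / 10.0**2      # weak Gaussian prior
    return log_like + log_prior

theta = np.zeros(3)
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0, 0.02, 3)          # random-walk proposal
    logp_new = log_posterior(proposal)
    if np.log(rng.random()) < logp_new - logp:         # Metropolis accept/reject
        theta, logp = proposal, logp_new
    samples.append(theta)

samples = np.array(samples[5000:])                      # discard burn-in
print("Posterior mean:", samples.mean(axis=0).round(3))
print("Posterior std :", samples.std(axis=0).round(3))
```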
Procedia PDF Downloads 146
12946 Applying And Connecting The Microgrid Of Artificial Intelligence In The Form Of A Spiral Model To Optimize Renewable Energy Sources
Authors: PR
Abstract:
Renewable energy is a sustainable substitute for fossil fuels, which are depleting and contributing to global warming as well as greenhouse gas emissions. Renewable energy innovations, including solar, wind, and geothermal, have grown significantly in recent years and play a critical role in meeting energy demands. Artificial Intelligence (AI) could further enhance the benefits of renewable energy systems. The combination of renewable technologies and AI could facilitate the development of smart grids that can better manage energy distribution and storage. AI thus has the potential to optimize the efficiency and reliability of renewable energy systems, reduce costs, and improve their overall performance. The conventional methods of using smart micro-grids are to connect these micro-grids in series, in parallel, or in a combination of series and parallel. Each of these methods has its advantages and disadvantages. In this study, the proposal to connect micro-grids in a spiral manner is investigated. One of the important reasons for choosing this type of structure is the two-way reinforcement and exchange of each inner layer with the outer and upstream layer. With this model, we have the ability to increase energy from a small amount to a significant amount based on exponential functions. The geometry used to close the smart micro-grids is based on nature. This study provides an overview of the applications of algorithms and models of AI, as well as their advantages and challenges, in renewable energy systems.
Keywords: artificial intelligence, renewable energy sources, spiral model, optimize
Procedia PDF Downloads 9
12945 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India
Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel
Abstract:
Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner's primary interest is in creating 3D models of buildings and obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m2, with geographical coordinates of north latitude 23°9’5’’N to 23°10’55’’N and east longitude 72°42’2’’E to 72°42’16’’E. Therefore, to develop 3D city models of GIFT City, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect the Ground Control Points (GCPs) from the field. The GCPs are used for the registration of the base map in QGIS. The registered map is projected onto the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be constructed were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data is used to generate the terrain of GIFT City. The Google satellite map is placed in the background to obtain the exact location of GIFT City. Various plugins and tools in QGIS are used to convert the raster layers of the base map of GIFT City into a 3D model. The fly-through tool is used for capturing and viewing the entire city area in 3D. This paper discusses all these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.
Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM
Procedia PDF Downloads 247
12944 Ranking All of the Efficient DMUs in DEA
Authors: Elahe Sarfi, Esmat Noroozi, Farhad Hosseinzadeh Lotfi
Abstract:
One of the important issues in Data Envelopment Analysis (DEA) is the ranking of Decision Making Units (DMUs). In this paper, a method for ranking DMUs is presented in which the weights related to efficient units are chosen in such a way that the other units preserve a certain percentage of their efficiency under the mentioned weights. To this end, a model is presented for ranking DMUs on the basis of their super-efficiency while considering the mentioned weight restrictions. This percentage can be determined by the decision maker. If the specified percentage is unsuitable, we can find a suitable and feasible one for ranking the DMUs accordingly. Furthermore, the presented model is capable of ranking all of the efficient units, including non-extreme efficient ones. Finally, the presented models are applied to two sets of data, and the related results are reported.
Keywords: data envelopment analysis, efficiency, ranking, weight
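A minimal sketch of a plain input-oriented CCR super-efficiency model solved with SciPy's linear programming routine; the weight restrictions that distinguish the authors' model are not included, and the DMU data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical DMUs with 2 inputs and 1 output (constant returns to scale).
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0], [4.0, 6.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                            # outputs

def super_efficiency(o):
    peers = [j for j in range(len(X)) if j != o]   # exclude the evaluated DMU
    n_vars = 1 + len(peers)                        # theta plus the peer lambdas
    c = np.zeros(n_vars); c[0] = 1.0               # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                    # sum(lambda*x) <= theta * x_o
        row = np.zeros(n_vars); row[0] = -X[o, i]; row[1:] = X[peers, i]
        A_ub.append(row); b_ub.append(0.0)
    for r in range(Y.shape[1]):                    # sum(lambda*y) >= y_o
        row = np.zeros(n_vars); row[1:] = -Y[peers, r]
        A_ub.append(row); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n_vars, method="highs")
    return res.fun if res.success else np.nan

for o in range(len(X)):
    print(f"DMU {o}: super-efficiency = {super_efficiency(o):.3f}")
```

Efficient DMUs get scores above 1 and can therefore be ranked among themselves, which is the purpose the abstract describes.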
Procedia PDF Downloads 457
12943 Habitat Model Review and a Proposed Methodology to Value Economic Trade-Off between Cage Culture and Habitat of an Endemic Species in Lake Maninjau, Indonesia
Authors: Ivana Yuniarti, Iwan Ridwansyah
Abstract:
This paper delivers a review of various methodologies for habitat assessment and a proposed methodology to assess the habitat of an endemic fish species in Lake Maninjau, Indonesia, as part of a Ph.D. project. The application mainly aims to assess the trade-off between the economic value of aquaculture and that of the fisheries. The proposed methodology is a generalized linear model (GLM) combined with GIS to assess presence-absence data, or a habitat suitability index (HSI) combined with the analytical hierarchy process (AHP). Further, a cost-of-habitat-replacement approach is planned to be used to calculate the habitat value as well as its trade-off with the economic value of aquaculture. The results of the study are expected to provide a scientific consideration for local decision-making and to serve as a reference for other areas in the country.
Keywords: AHP, habitat, GLM, HSI, Maninjau
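A minimal sketch of the GLM option described above: a binomial (logit) model fitted to presence-absence data with statsmodels, using simulated habitat covariates as stand-ins for lake survey variables; the fitted probabilities can then be mapped cell by cell in a GIS as a suitability surface.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "depth_m": rng.uniform(1, 60, n),
    "dissolved_oxygen": rng.uniform(2, 9, n),
    "distance_to_cages_m": rng.uniform(0, 2000, n),
})
# Simulated presence-absence outcome, for illustration only.
logit = -2.0 - 0.03 * df["depth_m"] + 0.5 * df["dissolved_oxygen"] + 0.001 * df["distance_to_cages_m"]
df["presence"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["depth_m", "dissolved_oxygen", "distance_to_cages_m"]])
glm = sm.GLM(df["presence"], X, family=sm.families.Binomial()).fit()
print(glm.summary())
suitability = glm.predict(X)   # predicted occurrence probability per sampled cell
print("Mean predicted habitat suitability:", round(suitability.mean(), 3))
```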
Procedia PDF Downloads 152
12942 Predicting Survival in Cancer: How Cox Regression Model Compares to Artificial Neural Networks?
Authors: Dalia Rimawi, Walid Salameh, Amal Al-Omari, Hadeel AbdelKhaleq
Abstract:
Prediction of the survival time of patients with cancer is a core factor that influences oncologists' decisions in different aspects, such as the offered treatment plans, patients' quality of life and medication development. For a long time, proportional hazards Cox regression (PH Cox) was, and still is, the most well-known statistical method for predicting survival outcomes. However, with the revolution in data science, new prediction models have been employed and have proved to be more flexible and to provide higher accuracy in this type of study. The artificial neural network is one of those models that is suitable for handling time-to-event prediction. In this study, we aim to compare PH Cox regression with the artificial neural network method in terms of data handling and the accuracy of each model.
Keywords: Cox regression, neural networks, survival, cancer
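A minimal sketch of fitting a proportional hazards Cox model with the lifelines package on its bundled example dataset (not a cancer cohort); an ANN-based survival model would be benchmarked against the same data split, typically via the concordance index.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# load_rossi() bundles a small survival dataset: `week` is the duration,
# `arrest` the event indicator, and the remaining columns are covariates.
df = load_rossi()
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                                  # hazard ratios and p-values per covariate
print("Concordance index:", round(cph.concordance_index_, 3))
```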
Procedia PDF Downloads 200
12941 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with covariates, where the covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, the shape parameter, the survival function and the hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to compare the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
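A minimal sketch of maximum likelihood estimation for a Weibull regression model with right-censored data, written directly against the censored log-likelihood and optimized with SciPy; the single covariate and the censoring mechanism are simulated, not the paper's design.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate right-censored Weibull regression data with one covariate.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
shape_true, b0_true, b1_true = 1.5, 1.0, 0.5
scale = np.exp(b0_true + b1_true * x)                  # scale depends on the covariate
t_event = scale * rng.weibull(shape_true, n)
t_cens = rng.exponential(4.0, n)                       # independent right censoring
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

def neg_log_lik(params):
    log_shape, b0, b1 = params
    k = np.exp(log_shape)                              # keep the shape parameter positive
    lam = np.exp(b0 + b1 * x)
    z = time / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # log density (events)
    log_S = -z**k                                          # log survival (censored)
    return -np.sum(event * log_f + (1 - event) * log_S)

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="L-BFGS-B")
k_hat, b0_hat, b1_hat = np.exp(fit.x[0]), fit.x[1], fit.x[2]
print(f"shape={k_hat:.2f} (true {shape_true}), b0={b0_hat:.2f}, b1={b1_hat:.2f}")
# Survival and hazard at time t for covariate x0 then follow from:
# S(t|x0) = exp(-(t/lam)**k) and h(t|x0) = (k/lam)*(t/lam)**(k-1), lam = exp(b0 + b1*x0).
```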
Procedia PDF Downloads 441
12940 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity
Authors: M. O. Durojaye, J. T. Agee
Abstract:
This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent function approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), which is a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited for the electrical conductivity. The numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions.
Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines
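A minimal method-of-lines sketch in the spirit of the paper, though not its exact equations: a 1D heat equation with a hyperbolic-tangent conductivity source is discretized in space and the resulting ODE system is integrated with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1D model u_t = u_xx + sigma(u), where sigma is a tanh approximation of a
# PTC-style conductivity that switches off as the temperature u rises.
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def sigma(u, steepness=20.0, u_crit=0.8):
    # Smooth conductivity: close to 1 below u_crit, close to 0 above it.
    return 0.5 * (1.0 - np.tanh(steepness * (u - u_crit)))

def rhs(t, u):
    dudt = np.zeros_like(u)
    # Second-order central differences on the interior; u = 0 held at both ends.
    dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + sigma(u[1:-1])
    return dudt

u0 = np.zeros(N)
sol = solve_ivp(rhs, (0.0, 2.0), u0, method="BDF", t_eval=[0.5, 1.0, 2.0])
for tk, uk in zip(sol.t, sol.y.T):
    print(f"t = {tk:.1f}: max temperature = {uk.max():.3f}")
```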
Procedia PDF Downloads 322