Search results for: fundamental models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8364


1854 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism

Authors: Lizhi Ma, Dan Liu

Abstract:

Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a major challenge. This paper proposes HePipe, an efficient pipeline parallel training method for highly heterogeneous clusters. Guided by the characteristics of neural network pipeline training tasks and oriented to the 2-level computing topology of heterogeneous clusters, a training method based on a 2-level stage division and partitioning of the neural network model is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to shorten the training time of each stage by executing computation units in advance, maximizing the overlap between forward-propagation communication and backward-propagation computation. Finally, a dynamic recomputation strategy based on prediction of task memory requirements is proposed to improve the fit between tasks and available memory, which raises cluster throughput and solves the memory shortfall caused by memory differences across heterogeneous nodes. Empirical results show that HePipe improves training speed by 1.6×−2.2× over existing asynchronous pipeline baselines.
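As background for the scheduling strategy discussed above, the standard 1F1B (one-forward-one-backward) discipline that the multi-forward variant builds on can be sketched in a few lines. This is an illustrative sketch, not the authors' HePipe code; stage indices and the warmup rule follow the common 1F1B convention.

```python
def one_f1b_schedule(num_stages, num_microbatches, stage):
    """Operation sequence for one pipeline stage under plain 1F1B.

    Each entry is ('F', i) or ('B', i): forward/backward on micro-batch i.
    Warmup: stage s runs (num_stages - s - 1) forwards before its first
    backward; the steady state then alternates one forward, one backward,
    and the cooldown drains the remaining backwards.
    """
    warmup = min(num_stages - stage - 1, num_microbatches)
    ops = [("F", i) for i in range(warmup)]
    fwd, bwd = warmup, 0
    # Steady state: one forward, then one backward, per step.
    while fwd < num_microbatches:
        ops.append(("F", fwd)); fwd += 1
        ops.append(("B", bwd)); bwd += 1
    # Cooldown: remaining backwards only.
    while bwd < num_microbatches:
        ops.append(("B", bwd)); bwd += 1
    return ops
```

The bounded gap between issued forwards and completed backwards is what caps in-flight activation memory per stage, which is the property the paper's recomputation strategy trades against.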

Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning

Procedia PDF Downloads 19
1853 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In portfolio selection problems, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return per unit of risk-measure increase when compared to risk-free investments. In the classical model, following Markowitz, risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization; in particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints for real-life financial decisions based on several thousand scenarios, decreasing their computational efficiency and making them hard to solve by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on minimization of the inverse risk-reward ratio and by taking advantage of LP duality. In the introduced LP model the number of structural constraints is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method, thereby guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with second-order stochastic dominance rules.
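The quantity being optimized can be made concrete with a small sketch that evaluates the MAD reward-risk ratio of a fixed portfolio over equally probable scenarios. This only illustrates the criterion; it is not the authors' LP formulation or its dual.

```python
def mad_ratio(weights, scenarios, risk_free=0.0):
    """Reward-risk ratio with the Mean Absolute Deviation (MAD) risk measure.

    scenarios: list of equally probable return vectors, one per scenario;
    weights: portfolio weights over the instruments.
    Ratio = (mean portfolio return - risk_free) / MAD of portfolio returns.
    """
    returns = [sum(w * r for w, r in zip(weights, scen)) for scen in scenarios]
    mean = sum(returns) / len(returns)
    mad = sum(abs(r - mean) for r in returns) / len(returns)
    return (mean - risk_free) / mad
```

Maximizing this ratio over the weights is the fractional program that the paper linearizes; the scenario count enters only through the absolute-deviation terms, which is why the direct LP grows with the number of scenarios.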

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 407
1852 Payments for Forest Environmental Services: Advantages and Disadvantages in the Different Mechanisms in Vietnam North Central Area

Authors: Huong Nguyen Thi Thanh, Van Mai Thi Khanh

Abstract:

Around the world, payments for environmental services have been implemented since the late 1970s, first in Europe and North America, then spreading to Latin America, Asia, Africa, and finally Oceania in 2008. In Vietnam, payments for environmental services have recently attracted considerable interest, with forests as the main focus, and the scheme is therefore known as the program on payment for forest environmental services (PFES). PFES was piloted in Lam Dong and Son La in 2008 and has been widely applied in many provinces since 2010. PFES supports the policy of socializing national forest protection in Vietnam and has made great strides in the last decade. Using both primary and secondary data, the paper examines two cases of PFES implementation in the Vietnam North Central area with different payment mechanisms. In the first case, in Phu Loc district (Thua Thien Hue province), payment is made indirectly by a water supply company via the Forest Protection and Development Fund. In the second, at Phong Nha – Ke Bang National Park (Quang Binh province), tourism companies pay forest owners directly. The paper describes the PFES implementation process at each site, clarifies the payment mechanism, and models the relationships between stakeholders in PFES implementation. Based on the current status of the PFES sites, the paper compares and analyzes the advantages and disadvantages of the two payment methods. Finally, it proposes recommendations to remedy the existing shortcomings of each payment mechanism.

Keywords: advantages and disadvantages, forest environmental services, forest protection, payment mechanism

Procedia PDF Downloads 129
1851 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography

Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz

Abstract:

Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and dermatological screening. Thermography for medical screening can be done in two different ways: observing the temperature response under steady-state conditions (passive or static thermography), or inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). Numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters from measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and is solved numerically by a subdomain boundary element method that treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing models with constant and temperature-dependent thermophysical properties; the comparison showed noticeable differences and highlighted the importance of using a local thermoregulation model.
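To make the governing equation concrete, here is a minimal explicit finite-difference sketch of the 1D Pennes bioheat equation during the rewarming phase. The tissue parameter values are illustrative assumptions, not the paper's, and the paper itself uses a subdomain boundary element method rather than finite differences; the sketch only shows how the perfusion term pulls tissue back toward arterial temperature after cooling.

```python
def pennes_step(T, dt, dx, k=0.5, rho_c=4e6, w_b=2e3, T_a=37.0, q_m=400.0):
    """One explicit finite-difference step of the 1D Pennes bioheat equation.

    rho_c: volumetric heat capacity of tissue [J/(m^3 K)]; k: thermal
    conductivity [W/(m K)]; w_b: blood perfusion term rho_b*c_b*omega_b
    [W/(m^3 K)]; T_a: arterial temperature [degC]; q_m: metabolic heat
    source [W/m^3]. Boundary temperatures are held fixed (Dirichlet).
    All parameter values above are assumed for illustration.
    """
    new = T[:]
    for i in range(1, len(T) - 1):
        diffusion = k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        perfusion = w_b * (T_a - T[i])  # drives recovery toward T_a
        new[i] = T[i] + dt * (diffusion + perfusion + q_m) / rho_c
    return new
```

Stability of the explicit scheme requires dt ≤ rho_c·dx²/(2k); with the values above and dx = 1 mm that allows steps of a few seconds.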

Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic

Procedia PDF Downloads 107
1850 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro-machining (LMM) of silicon carbide particle (SiCp) reinforced aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, because of the intense heat generated, a layer forms on the workpiece surface; this recast layer is detrimental to the surface quality of the component and needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered the significant manufacturing criteria governing the machining process in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of recast layer in relation to the machining parameters using response surface methodology (RSM), and the resulting mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of recast layer are contradictory, the problem was formulated as a multi-objective optimization problem. The evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM and to obtain the Pareto-optimal set of solutions, which provides a detailed basis for choosing among the optimal solutions. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.
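The core operation behind NSGA-II's Pareto-optimal set is the non-dominated filter, which can be sketched briefly. This is an illustrative sketch of that test only, not the authors' implementation; both objectives are assumed to be minimized (e.g. recast-layer height and the negated groove depth, since depth is to be maximized).

```python
def pareto_front(points):
    """Return the non-dominated points for simultaneous minimization.

    A point dominates another if it is no worse in every objective and
    strictly better in at least one -- the comparison at the heart of
    NSGA-II's non-dominated sorting.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

NSGA-II repeats this filter to rank the population into successive fronts and then uses crowding distance within a front to preserve diversity.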

Keywords: laser micro-machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 345
1849 The Effect of Accounting Conservatism on Cost of Capital: A Quantile Regression Approach for MENA Countries

Authors: Maha Zouaoui Khalifa, Hakim Ben Othman, Hussaney Khaled

Abstract:

Prior empirical studies have investigated the economic consequences of accounting conservatism by examining its impact on the cost of equity capital (COEC); however, findings are not conclusive. We posit that the inconsistent results for this association may be attributed to the regression models used in data analysis. To address this issue, we re-examine the effect of two dimensions of accounting conservatism, unconditional conservatism (U_CONS) and conditional conservatism (C_CONS), on the COEC for a sample of listed firms from Middle Eastern and North African (MENA) countries, applying the quantile regression (QR) approach developed by Koenker and Bassett (1978). While the classical ordinary least squares (OLS) method is widely used in empirical accounting research, it may produce inefficient and biased estimates under departures from normality or long-tailed error distributions. The QR method handles such problems better than OLS: it allows the coefficients on the independent variables to shift across the distribution of the dependent variable, whereas OLS estimates only the conditional mean effects on the response variable. As predicted, we find that U_CONS has a significant positive effect on the COEC, whereas C_CONS has a negative impact. Findings also suggest that the effects of the two dimensions of accounting conservatism differ considerably across COEC quantiles. By comparing results from the QR method with those of OLS, this study sheds more light on the association between accounting conservatism and COEC.
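The difference between QR and OLS comes down to the loss function: QR estimates the conditional τ-quantile by minimizing the asymmetric "check" (pinball) loss rather than squared error. A pure-Python sketch of that idea (illustrative only; it recovers an unconditional sample quantile by grid search, not the Koenker-Bassett regression estimator the paper uses):

```python
def pinball_loss(theta, sample, tau):
    """Average check (pinball) loss: the objective quantile estimation
    minimizes. Under-predictions are weighted tau, over-predictions 1-tau."""
    return sum((tau - (y < theta)) * (y - theta) for y in sample) / len(sample)

def sample_quantile(sample, tau, grid=None):
    """The minimizer of the pinball loss (here over a grid of candidate
    values) is the tau-quantile of the sample."""
    grid = sorted(sample) if grid is None else grid
    return min(grid, key=lambda t: pinball_loss(t, sample, tau))
```

Replacing the constant theta with a linear function of covariates and minimizing the same loss gives quantile regression, which is why the fitted coefficients may differ quantile by quantile.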

Keywords: unconditional conservatism, conditional conservatism, cost of equity capital, OLS, quantile regression, emerging markets, MENA countries

Procedia PDF Downloads 356
1848 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits

Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena

Abstract:

Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the yield and quality of the palm oil produced. The research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May 2010. The bunches were divided into three regions (top, middle and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16 and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on data related to ripening time, oil content and oil quality were performed using several computer software programs (MSTAT-C, SAS and Microsoft Excel), and nine nonlinear mathematical models were fitted to the collected data using MATLAB. The results showed that mean mesocarp oil percent increased from 1.24% at 8 weeks after anthesis to 29.6% at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content, 10.09%. The lowest kernel oil percent, 0.03%, was recorded at 12 weeks after anthesis. Palmitic and oleic acids comprised more than 73% of total mesocarp fatty acids at 8 weeks after anthesis and increased to more than 80% at fruit maturity at 20 weeks. The logistic model, with the highest R² and the lowest root mean square error, was found to be the best-fit model.
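The logistic form selected as the best fit, and the RMSE criterion used to compare the nine candidates, can be sketched as follows. The parameter values in the test are illustrative assumptions (an asymptote near the observed ~30% oil content), not the paper's fitted coefficients.

```python
import math

def logistic(t, K, r, t_mid):
    """Logistic growth model: oil percent approaching asymptote K over time t
    (weeks after anthesis), with growth rate r and inflection point t_mid."""
    return K / (1 + math.exp(-r * (t - t_mid)))

def rmse(model, params, data):
    """Root mean square error between model predictions and (t, y) data,
    the criterion used alongside R^2 to rank candidate models."""
    return (sum((y - model(t, *params)) ** 2 for t, y in data) / len(data)) ** 0.5
```

Fitting in practice means choosing (K, r, t_mid) to minimize this RMSE over the sampled weeks, e.g. with a nonlinear least-squares routine.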

Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling

Procedia PDF Downloads 313
1847 A Study on the Impact of Perceived Benefits and Switching Costs of Consumers When Shifting from Brick and Mortar Store to Online Shopping of Apparels

Authors: Havisha Banda

Abstract:

Recent advancements in technology have facilitated commerce around the globe, and the online medium of commerce has provided, and will continue to provide, great opportunities for consumers and businesses. Advancements in technology enable apparel stores, for instance, to improve their online services by using personalized virtual models, allowing consumers to visualize the product on the model to determine correct sizing and fit. Alongside the many advantages of online shopping, however, consumers also incur various switching costs when buying apparel online. This study identifies such switching costs and switching benefits in the move from traditional to online shopping, and examines what consumers value most. Its scope is to understand the types of switching costs, the factors that actually lead consumers to shift from brick-and-mortar to online shopping, and why a certain set of customers still prefers to purchase offline. Hence, the study examines the trade-off consumers draw between perceived costs and perceived benefits when purchasing garments online. This will help upcoming e-commerce sites and brick-and-mortar stores understand the various factors, formulate new policies, and implement strategies to attract and retain customers. A sample of 35 respondents was interviewed using the laddering technique. Even in the era of e-commerce, some people feel more comfortable shopping in a retail store than purchasing online. A few respondents who shop online do not prefer to buy apparel online, while a few said they shop online only for apparel. Most of the variables match in terms of switching costs and also in regard to benefits.

Keywords: e-commerce, switching costs, switching benefits, apparel shopping

Procedia PDF Downloads 318
1846 Analyzing Changes in Runoff Patterns Due to Urbanization Using SWAT Models

Authors: Asawari Ajay Avhad

Abstract:

The Soil and Water Assessment Tool (SWAT) is a hydrological model designed to predict the complex interactions within natural and human-altered watersheds. This research applies the SWAT model to the Ulhas River basin, a small watershed undergoing urbanization and characterized by bowl-like topography. Three simulation scenarios (LC17, LC22 and LC27) are investigated, each representing a different land use and land cover (LULC) configuration, to assess the impact of urbanization on runoff. The LULC for the year 2027 is generated using the MOLUSCE plugin of QGIS, incorporating spatial factors such as the DEM, distance from roads, distance from the river, slope, and distance from settlements. Future climate data are simulated within the SWAT model using 30 years of historical data. A runoff susceptibility map for the basin is created, classifying runoff into five susceptibility levels ranging from very low to very high. Sub-basins corresponding to major urban settlements are identified as highly susceptible to runoff, and under future climate projections a slight increase in runoff is forecast. The methodology was validated by checking that sub-basins with a track record of severe flood events were indeed classified as highly susceptible to runoff, reinforcing the credibility of the assessment. The study suggests that the methodology employed could serve as a valuable tool in flood management planning.
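The five-level susceptibility classification described above amounts to binning each sub-basin's simulated runoff against four class breaks. A minimal sketch (the break values below are hypothetical; the paper derives them from the SWAT output, e.g. by natural breaks or quantiles):

```python
def susceptibility_class(runoff, breaks):
    """Assign a runoff value to one of five susceptibility levels using
    four ascending class breaks derived from the simulated runoff."""
    labels = ["very low", "low", "moderate", "high", "very high"]
    for b, label in zip(breaks, labels):
        if runoff <= b:
            return label
    return labels[-1]  # above the last break
```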

Keywords: future land use impact, flood management, runoff prediction, ArcSWAT

Procedia PDF Downloads 47
1845 The Impact of Hospital Strikes on Patient Care: Evidence from 135 Strikes in the Portuguese National Health System

Authors: Eduardo Costa

Abstract:

Hospital strikes in the Portuguese National Health Service (NHS) are becoming increasingly frequent, raising concerns about patient safety. In fact, data show that mortality rates for patients admitted during strikes are up to 30% higher than for patients admitted on other days. This paper analyzes the effects of hospital strikes on patient outcomes; specifically, it analyzes the impact of different strikes (physicians, nurses and other health professionals) on in-hospital mortality rates, readmission rates and length of stay. The paper uses patient-level data containing all NHS hospital admissions in mainland Portugal from 2012 to 2017, together with a comprehensive strike dataset comprising over 250 strike days (19 physician-strike days, 150 nurse-strike days and 50 other health professional-strike days) from 135 different strikes. The paper uses a linear probability model and controls for hospital and regional characteristics, time trends, and changes in patient composition and diagnoses. Preliminary results suggest a 6-7% increase in in-hospital mortality rates for patients exposed to physicians' strikes; the effect is smaller for patients exposed to nurses' strikes (2-5%). Patients exposed to nurses' strikes during their stay have, on average, higher 30-day urgent readmission rates (4%), and length of stay also seems to increase for patients exposed to any strike. The results, conditional on further testing (namely with non-linear models), suggest that hospital operations and service levels are partially disrupted during strikes.
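The core of a linear probability model with a binary strike-exposure regressor can be sketched very simply: with a single 0/1 indicator and no other covariates, the OLS slope is exactly the difference in outcome means between exposed and unexposed admissions. This is an illustrative reduction; the paper's specification additionally controls for hospital, region, time trends and case mix.

```python
def lpm_binary(outcomes, exposed):
    """Linear probability model with one binary regressor.

    outcomes: 0/1 outcome (e.g. in-hospital death) per admission;
    exposed: 0/1 strike-exposure indicator per admission.
    The OLS intercept is the unexposed mean outcome and the slope is the
    exposed-minus-unexposed difference in means.
    """
    y1 = [y for y, d in zip(outcomes, exposed) if d]
    y0 = [y for y, d in zip(outcomes, exposed) if not d]
    intercept = sum(y0) / len(y0)
    slope = sum(y1) / len(y1) - intercept
    return intercept, slope
```

The slope reads directly as a percentage-point change in the probability of the outcome, which is why LPM coefficients are convenient for statements like "a 6-7% increase in mortality".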

Keywords: health sector strikes, in-hospital mortality rate, length of stay, readmission rate

Procedia PDF Downloads 135
1844 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří. J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events can be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities, or unexpected ("Black Swan" events) without a pre-prepared scenario, requiring operational handling of the crisis situation. The form, characteristics, behaviour and use of crisis scenarios vary in quality, depending on the prevention and training processes of the critical infrastructure organization; the aim is always better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations in critical infrastructure organizations. The DYVELOP (Dynamic Vector Logistics of Processes) method can identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. To realize this capability, it is first necessary to derive an identification algorithm for critical interfaces, whose locations flag the crisis situation in a real critical infrastructure organization. Finally, the model of the critical interface is demonstrated for a real organization in the Czech energy critical infrastructure under a blackout peril environment. The Blazons are best comprehended through a live PowerPoint presentation accompanying the paper.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 409
1843 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish

Authors: Gintarė Sauliutė, Gintaras Svecevičius

Abstract:

Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread through ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but due to their species-specific ecological and behavioural properties they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. It is therefore necessary to evaluate persistent pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g. river-pond-river), the distance from the pollution source can be a good indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighbouring the Kairiai landfill, located 5 km east of the city of Šiauliai. Metal concentrations were measured in the tissues (gills, liver and muscle) of two ecologically different types of fish, classified by their feeding habits: benthophagous (gibel carp, roach) and predatory (northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to find the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. Only one log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all the predatory fish tissues studied (gills, liver and muscle).
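The log-transformed distance-concentration relationship at the heart of such a model can be illustrated with a simple OLS slope on log-log data. This is a one-predictor sketch only; the paper's log-multiple regression involves several predictors and tissue-specific fits.

```python
import math

def loglog_slope(distances, concentrations):
    """OLS slope of log(concentration) on log(distance): the power-law
    exponent behind a log-transformed regression model."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(c) for c in concentrations]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A positive slope corresponds to the reported pattern for predatory fish: tissue metal concentration rising with distance from the pollution source.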

Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model

Procedia PDF Downloads 286
1842 Effect of Climate Change on Groundwater Recharge in a Sub-Humid Sub-Tropical Region of Eastern India

Authors: Suraj Jena, Rabindra Kumar Panda

Abstract:

The study region was in Eastern India, with a sub-humid sub-tropical climate and sandy loam soil. Rainfall in this region has wide temporal and spatial variation. Due to the lack of adequate surface water to meet irrigation and household demands, groundwater is being overexploited in the region, leading to continuous depletion of the groundwater level. There is therefore an obvious urgency in reversing the depleting groundwater level through induced recharge, which becomes even more critical under climate change scenarios. The major goal of the reported study was to investigate the effects of climate change on groundwater recharge and the consequent adaptation strategies. Groundwater recharge was modelled using HELP3, a quasi-two-dimensional, deterministic water-routing model, along with global climate models (GCMs) and three global warming scenarios, to examine the changes in groundwater recharge rates for a 2030 climate under a variety of soil and vegetation covers. The relationship between the change in mean annual recharge and mean annual rainfall was evaluated for every combination of soil and vegetation using sensitivity analysis, and was found to be statistically significant (p<0.05) with a coefficient of determination of 0.81. Vegetation dynamics and water use, affected by the increase in potential evapotranspiration under the large climate variability scenario, led to a significant decrease in recharge, from 49–658 mm to 18–179 mm. Therefore, appropriate conjunctive use, irrigation scheduling and enhanced recharge practices need to be properly understood for groundwater sustainability under the climate variability and land use/land cover change scenarios that impact groundwater recharge.
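The coefficient of determination quoted for the recharge-rainfall relationship is a standard quantity; for a simple linear fit it can be computed directly (a generic sketch, not the study's HELP3 workflow or data):

```python
def r_squared(xs, ys):
    """Coefficient of determination (R^2) of the simple linear fit of ys on
    xs, as used to judge a recharge-vs-rainfall relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```

An R² of 0.81 means that, under this linear model, about 81% of the variation in the change of mean annual recharge is accounted for by mean annual rainfall.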

Keywords: groundwater recharge, climate variability, land use/cover, GCM

Procedia PDF Downloads 282
1841 The Metabolite Profiling of Fulvestrant-3 Boronic Acid under Biological Oxidation

Authors: Changde Zhang, Qiang Zhang, Shilong Zheng, Jiawang Liu, Shanchun Guo, Qiu Zhong, Guangdi Wang

Abstract:

Fulvestrant was approved by the FDA to treat breast cancer as a selective estrogen receptor downregulator (SERD) administered by intramuscular injection. ZB716, a fulvestrant-3 boronic acid, is a SERD with anticancer effect comparable to fulvestrant, but it shows good pharmacokinetic properties under oral administration in mouse and rat models. To understand why ZB716 produces much better oral bioavailability, it was proposed that the boronic acid blocks phase II direct biotransformation at the hydroxyl group on position 3 of the aromatic ring of fulvestrant. In this study, ZB716 or fulvestrant was incubated in vitro with human liver microsomes and the oxidation cofactor NADPH, and the metabolites formed after oxidation were profiled with the Q-Exactive, a high-resolution mass spectrometer. The results showed that ZB716 blocked the formation of hydroxyl groups on its benzene ring, except for oxidation of the C-B bond forming fulvestrant among its metabolites, and the concentration of fulvestrant bearing one additional hydroxyl group found in the metabolites from incubation with fulvestrant was about 34-fold as high as that formed from incubation with ZB716. Compared to fulvestrant, ZB716 is thus expected to be much more difficult to biotransform further into more hydrophilic compounds, more difficult to excrete from the blood system, and likely to have a longer residence time in blood, which can lead to higher oral bioavailability. This study provides evidence explaining the high bioavailability of ZB716 after oral administration from the perspective of the difficulty of oxidation (a phase I biotransformation) at positions on its aromatic ring.

Keywords: biotransformation, fulvestrant, metabolite profiling, ZB716

Procedia PDF Downloads 259
1840 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage, yet early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2,341 slices from 895 CT scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images; from these, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the output, 400 generated images were chosen at random and shown to a medical student beside their corresponding original images. Of these, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe this generated dataset could be used as training data for neural networks that detect lung nodules at an early stage, or to improve the accuracy of such networks. This is particularly significant as datasets containing the growth of early-stage nodules are scarce. The project shows that the combination of deep image prior and generative models could open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 69
1839 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention

Authors: Paula A. Cruise, Carvell McLeary

Abstract:

Employee turnover leads to lowered productivity, decreased morale and work quality, and psychological effects associated with employee separation and replacement. Yet, it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies: a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among bankers and energy employees in Jamaica. In a comparative study of n = 369 [n_bank = 264, male = 57 (22.73%); n_energy = 105, male = 45 (42.86%)], it was hypothesized that organizational trust is a predictor of employee turnover intention and that the effect of individual, group, cognitive and socio-affective variables varies across industry. Findings from structural equation modelling confirmed the hypothesis, with a model including both cognitive and socio-affective variables being a better fit [CMIN (χ²) = 800.067, df = 364, p ≤ .000; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052 - 0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to socio-cognitive components of trust models and the prediction of negative employee behaviors across cultures and industries.
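The reported fit indices are internally consistent and can be cross-checked: the RMSEA point estimate is a standard function of the model chi-square, its degrees of freedom, and the sample size (the formula below is the conventional one, not taken from the paper):

```python
def rmsea(chi_sq, df, n):
    """RMSEA point estimate for a structural equation model:
    sqrt(max(chi_sq - df, 0) / (df * (n - 1))). Values near 0.05-0.06
    are conventionally read as close fit."""
    return (max(chi_sq - df, 0.0) / (df * (n - 1))) ** 0.5
```

Plugging in the reported values (χ² = 800.067, df = 364, n = 369) reproduces the stated RMSEA of 0.057.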

Keywords: context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention

Procedia PDF Downloads 248
1838 Effect of Atrial Flutter on Alcoholic Cardiomyopathy

Authors: Ibrahim Ahmed, Richard Amoateng, Akhil Jain, Mohamed Ahmed

Abstract:

Alcoholic cardiomyopathy (ACM) is a type of acquired cardiomyopathy caused by chronic alcohol consumption. Frequently ACM is associated with arrhythmias such as atrial flutter. Our aim was to characterize the patient demographics and investigate the effect of atrial flutter (AF) on ACM. This was a retrospective cohort study using the Nationwide Inpatient Sample database to identify admissions in adults with principal and secondary diagnoses of alcoholic cardiomyopathy and atrial flutter from 2019. Multivariate linear and logistic regression models were adjusted for age, gender, race, household income, insurance status, Elixhauser comorbidity score, hospital location, bed size, and teaching status. The primary outcome was all-cause mortality, and secondary outcomes were the length of stay (LOS) and total charge in USD. There was a total of 21,855 admissions with alcoholic cardiomyopathy, of which 1,635 had atrial flutter (AF-ACM). Compared to Non-AF-ACM cohort, AF-ACM cohort had fewer females (4.89% vs 14.54%, p<0.001), were older (58.66 vs 56.13 years, p<0.001), fewer Native Americans (0.61% vs2.67%, p<0.01), had fewer smaller (19.27% vs 22.45%, p<0.01) & medium-sized hospitals (23.24% vs28.98%, p<0.01), but more large-sized hospitals (57.49% vs 48.57%, p<0.01), more Medicare (40.37% vs 34.08%, p<0.05) and fewer Medicaid insured (23.55% vs 33.70%, p=<0.001), fewer hypertension (10.7% vs 15.01%, p<0.05), and more obesity (24.77% vs 16.35%, p<0.001). Compared to Non-AF-ACM cohort, there was no difference in AF-ACM cohort mortality rate (6.13% vs 4.20%, p=0.0998), unadjusted mortality OR 1.49 (95% CI 0.92-2.40, p=0.102), adjusted mortality OR 1.36 (95% CI 0.83-2.24, p=0.221), but there was a difference in LOS 1.23 days (95% CI 0.34-2.13, p<0.01), total charge $28,860.30 (95% CI 11,883.96-45,836.60, p<0.01). 
In patients admitted with ACM, the presence of AF was not associated with a higher all-cause mortality rate or higher odds of all-cause mortality; however, it was associated with a 1.23-day increase in LOS and a $28,860.30 increase in total hospitalization charge. Native American race, older age, and obesity were risk factors for the presence of AF in ACM.
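
As a side note on the statistics reported above, an unadjusted odds ratio and its Wald confidence interval can be computed directly from a 2×2 table. The sketch below uses made-up counts, not the study's data, purely to illustrate the calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return round(orr, 2), round(lo, 2), round(hi, 2)

# Toy counts only, not the study's data.
print(odds_ratio_ci(100, 1535, 850, 19370))
```

Adjusted odds ratios like those in the abstract additionally require fitting a multivariate logistic regression over the covariates.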

Keywords: alcoholic cardiomyopathy, atrial flutter, cardiomyopathy, arrhythmia

Procedia PDF Downloads 112
1837 An Image Processing Scheme for Skin Fungal Disease Identification

Authors: A. A. M. A. S. S. Perera, L. A. Ranasinghe, T. K. H. Nimeshika, D. M. Dhanushka Dissanayake, Namalie Walgampaya

Abstract:

Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by a fungus. These diseases have various harmful effects on the skin and keep spreading over time, so it is important to identify them at an early stage to prevent them from spreading. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired, and a comprehensive computer vision and image processing scheme is used to process the image for disease identification. This includes colour analysis using the RGB and HSV colour models; texture classification using the Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix and Local Binary Pattern; object detection; shape identification; and more. This paper presents the approach and its outcome for the identification of four of the most common skin fungal infections, namely Tinea Corporis, Sporotrichosis, Malassezia and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increases diagnostic quality, shortens the time to diagnosis and improves the efficiency of detection and successful treatment of skin fungal diseases.
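
Of the texture descriptors listed above, the Grey Level Co-Occurrence Matrix is perhaps the simplest to sketch. The following minimal NumPy version (a toy illustration, not the authors' implementation) builds the normalised co-occurrence matrix for a single pixel offset and derives the Haralick contrast feature from it:

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Grey Level Co-Occurrence Matrix for one pixel offset (dy, dx)."""
    dy, dx = offset
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to joint probabilities

def contrast(p):
    """Haralick contrast: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# Tiny quantised image with four grey levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 4))
```

A full system would compute several such features over multiple offsets and angles and feed them to a classifier.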

Keywords: circularity index, grey level run length matrix, grey level co-occurrence matrix, local binary pattern, object detection, ring detection, shape identification

Procedia PDF Downloads 232
1836 Interaction Between Gut Microorganisms and Endocrine Disruptors - Effects on Hyperglycaemia

Authors: Karthika Durairaj, Buvaneswari G., Gowdham M., Gilles M., Velmurugan G.

Abstract:

Background: Hyperglycaemia is a primary cause of metabolic illness. Recently, researchers have focused on the possibility that chemical exposure could promote metabolic disease. Hyperglycaemia causes a variety of metabolic diseases depending on its etiologic conditions. According to animal and population-based research, individual chemical exposure causes health problems by altering endocrine function under the influence of the gut microbiota. We were therefore interested in the role of gut microbiota variation in high-fat-diet-induced and chemically induced hyperglycaemia. Methodology: C57BL/6 mice were subjected to two different treatments to generate etiology-based diabetes models: I - a high-fat diet (45 kcal% fat), and II - an endocrine-disrupting chemicals (EDCs) cocktail. The mice were monitored periodically for changes in body weight and fasting glucose. After 120 days of the experiment, blood analyses, anthropometry, faecal metagenomics and metabolomics were performed, and the results were analyzed statistically using one-way ANOVA and Student's t-test. Results: After 120 days of exposure, we found hyperglycaemic changes in both experimental models. The treatment groups also differed in plasma lipid levels, creatinine, and hepatic markers. Microbial profiles and metabolite levels were significantly different between groups, pointing to an influence on glucose metabolism. The expression of genes associated with glucose metabolism varied between hosts and treatments. Conclusion: This research will result in the identification of biomarkers and molecular targets for better diabetes control and treatment.
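
The group comparisons described in the methodology rest on Student's t-test. As a minimal illustration (with invented glucose readings, not the study's measurements), the two-sample pooled-variance t statistic can be computed as:

```python
import numpy as np

def student_t(x, y):
    """Two-sample Student's t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    # Pooled variance from the two sample variances (ddof=1 for unbiased).
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return float((np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1/nx + 1/ny)))

# Toy fasting-glucose readings (mg/dL) for control vs treated mice.
control = np.array([95.0, 100.0, 105.0])
treated = np.array([115.0, 120.0, 125.0])
print(round(student_t(control, treated), 3))
```

In practice one would use a library routine (e.g. scipy.stats.ttest_ind) to also obtain the p-value against the t distribution with nx + ny - 2 degrees of freedom.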

Keywords: hyperglycaemia, endocrine-disrupting chemicals, gut microbiota, host metabolism

Procedia PDF Downloads 42
1835 Advancements in Laser Welding Process: A Comprehensive Model for Predictive Geometrical, Metallurgical, and Mechanical Characteristics

Authors: Seyedeh Fatemeh Nabavi, Hamid Dalir, Anooshiravan Farshidianfar

Abstract:

Laser welding is pivotal in modern manufacturing, offering unmatched precision, speed, and efficiency. Its versatility in minimizing heat-affected zones, seamlessly joining dissimilar materials, and working with various metals makes it indispensable for crafting intricate automotive components, and its integration into automated systems ensures the consistent delivery of high-quality welds, enhancing overall production efficiency. Noteworthy are the safety benefits of laser welding, including reduced fumes and consumable materials, which align with industry standards and environmental sustainability goals. As the automotive sector increasingly demands advanced materials and stringent safety and quality standards, laser welding emerges as a cornerstone technology. This work presents a comprehensive model, encompassing thermal-dynamic and characteristics sub-models, that accurately predicts the geometrical, metallurgical, and mechanical aspects of the laser beam welding process. Notably, Model 2 shows exceptional accuracy, achieving remarkably low error rates in predicting primary and secondary dendrite arm spacing (PDAS and SDAS). These findings underscore the model's reliability and effectiveness, providing invaluable insights and predictive capabilities for optimizing welding processes and ensuring superior productivity, efficiency, and quality in the automotive industry.

Keywords: laser welding process, geometrical characteristics, mechanical characteristics, metallurgical characteristics, comprehensive model, thermal dynamic

Procedia PDF Downloads 48
1834 The Effect of Fibre Orientation on the Mechanical Behaviour of Skeletal Muscle: A Finite Element Study

Authors: Christobel Gondwe, Yongtao Lu, Claudia Mazzà, Xinshan Li

Abstract:

Skeletal muscle plays an important role in the human body system and its function by generating voluntary forces and facilitating body motion. However, the mechanical properties and behaviour of skeletal muscle are still not comprehensively understood. As such, various robust engineering techniques have been applied to better elucidate the mechanical behaviour of skeletal muscle. Muscle mechanics are considered to be highly governed by the architecture of the fibre orientations. Therefore, the aim of this study was to investigate the effect of different fibre orientations on the mechanical behaviour of skeletal muscle. In this study, a continuum mechanics approach, finite element (FE) analysis, was applied to the left biceps femoris long head to determine the contractile mechanism of the muscle using Hill's three-element model. The geometry of the muscle was segmented from magnetic resonance images. The muscle was modelled as a quasi-incompressible hyperelastic (Mooney-Rivlin) material. Two types of fibre orientations were implemented: one with an idealised fibre arrangement, i.e. parallel single-direction fibres going from the muscle origin to the insertion sites, and the other with a curved fibre arrangement aligned with the muscle shape. The second fibre arrangement was implemented through the finite element method with non-uniform rational B-splines (FEM-NURBS) by means of user material (UMAT) subroutines. The stress-strain behaviour of the muscle was investigated under idealised exercise conditions and will be further analysed under physiological conditions. The results of the two different FE models were outputted and qualitatively compared.
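
For reference, the two-parameter quasi-incompressible Mooney-Rivlin strain-energy function mentioned above is commonly written as (the material constants C_10, C_01 and D_1 here are generic placeholders, not the values used in this study):

```latex
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}\,(J - 1)^2
```

where \bar{I}_1 and \bar{I}_2 are the isochoric invariants of the left Cauchy-Green deformation tensor and J = det F is the volume ratio; the (J - 1)^2 term penalises volume change, enforcing near-incompressibility.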

Keywords: FEM-NURBS, finite element analysis, Mooney-Rivlin hyperelastic, muscle architecture

Procedia PDF Downloads 479
1833 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps, deep-learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly applied to DL-based apps because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous-inner-class problems in the Android static analyzer. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace performs more robustly than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 179
1832 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to integrating such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to make more reliable and profitable engagements regarding their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature, giving promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and with probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility and capability of probabilistic methods in modeling progressively increasing uncertainty. Even though probabilistic approaches have unquestionably matured in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if they are accurate, ensuring a gain of 14% in final profits compared to the average performance of probabilistic models conditioned on the same forecasts. As the accuracy of the deterministic forecasts progressively decreases, probabilistic approaches become competitive options, until they completely outperform deterministic forecasts when the latter are very inaccurate, generating 73% more profit in the case considered.
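
One standard way to score the quantile forecasts produced by probabilistic methods like those compared above is the pinball (quantile) loss; the abstract does not specify the metric used, so the following NumPy sketch with toy PV production values is only illustrative:

```python
import numpy as np

def pinball_loss(y_true, y_pred_quantile, tau):
    """Pinball (quantile) loss for quantile level tau in (0, 1).
    Penalises under- and over-forecasts asymmetrically."""
    diff = y_true - y_pred_quantile
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))

# Toy PV production (MW) and a 90% quantile forecast.
y = np.array([10.0, 12.0, 8.0])
q90 = np.array([11.0, 13.0, 10.0])
print(round(pinball_loss(y, q90, 0.9), 4))
```

Averaging this loss over many quantile levels approximates the continuous ranked probability score, a common summary metric for a full predictive distribution.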

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 87
1831 A Modular and Reusable Bond Graph Model of Epithelial Transport in the Proximal Convoluted Tubule

Authors: Leyla Noroozbabaee, David Nickerson

Abstract:

We introduce a modular, consistent, reusable bond graph model of the renal nephron's proximal convoluted tubule (PCT) that can reproduce biological behaviour. In this work, we focus on ion and volume transport in the proximal convoluted tubule of the renal nephron. Modelling complex systems requires complex modelling problems to be broken down into manageable pieces. This can be enabled by developing models of subsystems that are subsequently coupled hierarchically; bond graphs facilitate this because they are based on a graph structure. In the current work, we define two modular subsystems: the resistive module representing the membrane and the capacitive module representing solution compartments. Each module is analyzed in terms of thermodynamic processes, and all the subsystems are reintegrated using circuit theory from network thermodynamics. The epithelial transport system we introduce in the current study consists of five transport membranes and four solution compartments. Coupled dissipations in the system occur in the membrane subsystems, while coupled free-energy-increasing or -decreasing processes appear in the solution compartment subsystems. These structural subsystems consist of elementary thermodynamic processes: dissipations, free-energy changes, and power conversions. We provide free and open access to the Python implementation to ensure our model is accessible, enabling readers to explore the model by setting up their own simulations and reproducibility tests.

Keywords: bond graph, epithelial transport, water transport, mathematical modeling

Procedia PDF Downloads 87
1830 Non-zero θ_13 and δ_CP Phase with A_4 Flavor Symmetry and Deviations from Tri-Bi-Maximal Mixing via Z_2 × Z_2 Invariant Perturbations in the Neutrino Sector

Authors: Gayatri Ghosh

Abstract:

In this work, a flavour theory of a neutrino mass model based on A_4 symmetry is considered to explain the phenomenology of neutrino mixing. The spontaneous breaking of the A_4 symmetry in this model leads to tribimaximal mixing in the neutrino sector at leading order. We consider the effect of Z_2 × Z_2 invariant perturbations in the neutrino sector and find the allowed region of correction terms in the perturbation matrix that is consistent with the 3σ ranges of the experimental values of the mixing angles. We study the implications of this formalism for other phenomenological observables, such as the δ_CP phase, the neutrino oscillation probability P(νµ → νe), the effective Majorana mass |m_ee| and the effective electron neutrino mass |m^eff_νe|. The Z_2 × Z_2 invariant perturbations introduced in the neutrino sector lead to testable predictions for θ_13 and CP violation. By changing the magnitudes of the perturbations in the neutrino sector, one can generate viable values of δ_CP and the neutrino oscillation parameters. Next, we investigate the feasibility of charged lepton flavour violation (CLFV) in type-I seesaw models with leptonic flavour symmetries at high energy that lead to tribimaximal neutrino mixing. We consider an effective theory with an A_4 × Z_2 × Z_2 symmetry which, after spontaneous symmetry breaking at a scale much higher than the electroweak scale, leads to charged lepton flavour violation processes once the heavy Majorana neutrino mass degeneracy is lifted, either by renormalization group effects or by a soft breaking of the A_4 symmetry. In this context, the implications for charged lepton flavour violation processes like µ → eγ, τ → eγ and τ → µγ are discussed.
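
The tribimaximal pattern that the unperturbed A_4 model predicts can be checked numerically. The short NumPy sketch below (in one common phase convention) verifies unitarity and reads off sin²θ_12 = 1/3, sin²θ_23 = 1/2 and θ_13 = 0, the leading-order values that the Z_2 × Z_2 perturbations then shift:

```python
import numpy as np

# Tribimaximal (TBM) mixing matrix, the leading-order prediction of the
# A_4 model before the perturbations are switched on (one phase convention).
U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0.0],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

# Unitarity check: U U^T = 1 for this real matrix.
assert np.allclose(U @ U.T, np.eye(3))

# Standard-parametrisation angles read off from the matrix elements.
sin2_th12 = U[0, 1]**2 / (1 - U[0, 2]**2)   # expected 1/3
sin2_th23 = U[1, 2]**2 / (1 - U[0, 2]**2)   # expected 1/2
sin_th13 = abs(U[0, 2])                     # expected 0 at leading order
print(round(sin2_th12, 3), round(sin2_th23, 3), sin_th13)
```

Adding a small perturbation to the third column would generate a non-zero θ_13, mirroring the mechanism the abstract describes.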

Keywords: Z2 × Z2 invariant perturbations, CLFV, delta CP phase, tribimaximal neutrino mixing

Procedia PDF Downloads 79
1829 Planning Fore Stress II: Study on Resiliency of New Architectural Patterns in Urban Scale

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Thoughtful and sequential design strategies for master planning and urban infrastructure play a major role in reducing the damage that natural disasters, war, and social or population-related conflicts inflict on cities. Defensive strategies have been revised throughout the history of mankind after damage from natural disasters, experiences of war, and terrorist attacks on cities: lessons were learnt from earthquakes, from the casualties of the two world wars of the 20th century, and from terrorist activities of all times. Particularly after Hurricane Sandy in New York in 2012 and the September 11th attack on New York's World Trade Center (WTC), there have been a series of serious collaborations between law-making authorities, urban planners, architects, and defence-related organizations to, firstly, prepare for and/or prevent such events and, secondly, reduce the human loss and economic damage to a minimum. This study develops a model of planning for New York City in which its citizens would suffer minimum impact in times of threat, with minimum economic damage to the city after the stress has passed. The main discussion in this proposal focuses on pre-hazard, hazard-time, and post-hazard transformative policies and strategies that reduce life casualties and ease economic recovery in post-hazard conditions. This proposal scrutinizes the idea that a key solution may lie in focusing on all overlapping possibilities across the architectural platforms of three fundamental infrastructures, transportation, power-related sources, and defensive capabilities, within a dynamic-transformative framework that provides maximum safety, a high level of flexibility, and the fastest action-reaction opportunities in stressful periods.
"Planning Fore Stress" is carried out in an analytical, qualitative, and quantitative framework that studies cases from all over the world. Technology, organic design, materiality, urban forms, city politics, and sustainability are discussed across different cases at an international scale: from the modern nature-friendly strategies of Copenhagen to the traditional urban planning patterns of old Indonesian cities, from the "Iron Dome" of Israel to the "Tunnels" in Gaza, from the "ultra-high-performance quartz-infused concrete" of Iran to the peaceful, nature-friendly strategies of Switzerland, and from "urban geopolitics" in cities, war, and terrorism to the "design of sustainable cities" worldwide. All are studied with references and a detailed look at each case in order to propose the most resourceful, practical, and realistic solutions to questions on "new city divisions", "new city planning and social activities", and "new strategic architecture for safe cities". This study is a developed version of a proposal that was announced as a winner at MoMA in 2013 in the call for ideas for Rockaway after Hurricane Sandy.

Keywords: urban scale, city safety, natural disaster, war and terrorism, city divisions, architecture for safe cities

Procedia PDF Downloads 484
1828 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with them properly. We perceive an action by observing the kinematics of the motions involved in its performance, and we use our experience and concepts to recognize the action correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions: experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture built from growing grid layers is proposed. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by concatenating the elicited activations of the learned map. An ordered-vector-representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance was evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, and Pick Up and Throw.
The growing grid architecture was trained on several random selections of the data, with generalization test data held out, taking on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task: on the same dataset, the SOM architecture takes around 150 epochs for each training of the first-layer SOM and 1,200 epochs for each training of the second-layer SOM, and achieves an average recognition accuracy of 90% on the generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
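
To give a flavour of the map dynamics underlying both architectures compared above, the following toy NumPy sketch implements a single self-organizing-map update step (fixed map size only; a growing grid adds its neuron-insertion heuristics on top of this, and all values here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One self-organizing-map update: move the best-matching unit and its
    grid neighbours toward the input vector x."""
    rows, cols, _ = weights.shape
    # Best-matching unit (BMU): the neuron closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighbourhood on the 2-D grid around the BMU.
    gy, gx = np.indices((rows, cols))
    g = np.exp(-((gy - bmu[0])**2 + (gx - bmu[1])**2) / (2 * sigma**2))
    weights += lr * g[..., None] * (x - weights)
    return bmu

w = rng.random((4, 4, 3))        # 4x4 map of 3-D feature vectors (toy stand-in)
x = np.array([0.2, 0.7, 0.1])    # one input vector
before = np.linalg.norm(w - x, axis=2).min()
som_step(w, x)
after = np.linalg.norm(w - x, axis=2).min()
print(after < before)
```

After the step, the map is strictly closer to the input, which is the mechanism that produces the topographic organization of neurons mentioned above.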

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
1827 Photoelastic Analysis and Finite Elements Analysis of a Stress Field Developed in a Double Edge Notched Specimen

Authors: A. Bilek, M. Beldi, T. Cherfi, S. Djebali, S. Larbi

Abstract:

Finite element analysis and photoelasticity are used to determine the stress field developed in a double-edge-notched specimen loaded in tension. The specimen is cut from a birefringent plate. Experimental isochromatic fringes are obtained with circularly polarized light on the analyzer of a regular polariscope; the fringes represent the loci of points of equal maximum shear stress. In order to obtain the stress values corresponding to the fringe orders recorded in the notched specimen, particularly in the neighborhood of the notches, a calibration disc made of the same material is loaded in compression along its diameter to determine the photoelastic fringe value. This fringe value is also used in the finite element solution to obtain the simulated photoelastic fringes, the isochromatics as well as the isoclinics. A color scale is used by the software to represent the simulated fringes on the whole model. The stress concentration factor can be readily obtained at the notches. Good agreement is obtained between the experimental and simulated fringe patterns, and between the graphs of the shear stress, particularly in the neighborhood of the notches. The purpose of this paper is to show that the isochromatic and isoclinic fringe patterns in a stressed model can be obtained rapidly and accurately by finite element analysis, as the experimental procedure can be time-consuming. Stress fields can therefore be analyzed in three-dimensional models as long as the meshing and the boundary conditions are properly set in the program.
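
The conversion from fringe order to stress described above follows the stress-optic law, σ₁ − σ₂ = N·f_σ/t, where N is the fringe order, f_σ the material fringe value obtained from the calibration disc, and t the plate thickness; the maximum in-plane shear stress is half the principal-stress difference. A minimal sketch with toy values (not the paper's material data):

```python
def max_shear_from_fringe(N, f_sigma, t):
    """Stress-optic law: sigma1 - sigma2 = N * f_sigma / t, so the maximum
    in-plane shear stress is half the principal-stress difference."""
    return N * f_sigma / t / 2.0

# Toy values: fringe order 3, material fringe value 7 N/mm per fringe order,
# plate thickness 5 mm -> shear stress in N/mm^2 (MPa).
print(max_shear_from_fringe(3, 7.0, 5.0))
```

This is why the isochromatic fringes are loci of constant maximum shear stress: each fringe order N maps to one value of σ₁ − σ₂.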

Keywords: isochromatic fringe, isoclinic fringe, photoelasticity, stress concentration factor

Procedia PDF Downloads 229
1826 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in the observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp t_{s+1}, we construct training and testing datasets consisting of feature vectors from the time intervals [t_1, t_{s-1}] and [t_2, t_s] respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
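
The concatenate-then-compress pipeline described above can be sketched in a few lines. The version below substitutes a truncated SVD for the paper's auto-encoder-like compressor and uses random arrays as stand-ins for real node2vec embeddings, so it only illustrates the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in: "node2vec" embeddings of 50 nodes at 5 snapshots, 16 dims each.
n_nodes, n_snapshots, dim = 50, 5, 16
snapshots = [rng.normal(size=(n_nodes, dim)) for _ in range(n_snapshots)]

# Step 1: concatenate each node's per-snapshot embeddings.
concat = np.hstack(snapshots)                 # shape (n_nodes, n_snapshots * dim)

# Step 2: compress to k dims. A truncated SVD is a linear stand-in for the
# auto-encoder-like compression used in the paper.
k = 8
u, s, vt = np.linalg.svd(concat - concat.mean(axis=0), full_matrices=False)
compressed = u[:, :k] * s[:k]                 # shape (n_nodes, k)

print(concat.shape, compressed.shape)
```

The compressed vectors would then be the per-node features fed to the downstream SVM or logistic regression classifier.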

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 315
1825 Neurosciences in Entrepreneurship: The Multitasking Case in Favor of Social Entrepreneurship Innovation

Authors: Berger Aida

Abstract:

Social entrepreneurship has emerged as an active area of practice and research within the last three decades and has called for a focus on social entrepreneurship innovation. Academics, practitioners, institutions, and governments have placed social entrepreneurship on the priority list for reflection and action. It is accepted that social entrepreneurship (SE) shares large similarities with its parent, traditional entrepreneurship (TE). Research on SE has grown over the past ten years, exploring entrepreneurial cognition and the ways entrepreneurs think. The research community believes there is value in grounding entrepreneurship in neuroscience, and notes that SE, like TE, needs efforts in clarification, definition, and differentiation; moreover, gaps in SE research call for an integrative, multistage, and multilevel framework for further research. The cognitive processes underpinning entrepreneurial action are similar for SE and TE, even if the social entrepreneurship orientation shows an increased value placed on empathy. Theoretically, there is a need to develop sound models of how entrepreneurs process information and work more effectively, and research on efficiency improvement calls for the analysis of the most common practices in entrepreneurship. Multitasking has been recognized as a daily and unavoidable habit of entrepreneurs. Hence, we believe in the need to analyze the multiple-task phenomenon as a methodology for skill acquisition. We treat social entrepreneurship within the wider spectrum of traditional entrepreneurship, for the purpose of simplifying the neuroscientific reading of entrepreneurial cognition. A question to be investigated is whether there is a way of developing multitasking habits in order to improve entrepreneurial skills such as speed of information processing, creativity, and adaptability.
Nevertheless, the direct link between the neuroscientific approach to multitasking and entrepreneurial effectiveness is yet to be uncovered, which is why an extensive literature review on multitasking is apropos.

Keywords: cognitive, entrepreneurial, empathy, multitasking

Procedia PDF Downloads 172