Search results for: 3D models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6743

1643 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time, and therefore cost, of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features combined with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
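The finding that classical methods such as Gaussian Processes on engineered features are strong baselines can be illustrated with a minimal sketch. The descriptors, targets, and hyperparameters below are synthetic placeholders, not data from the study; the GP posterior mean is computed directly from an RBF kernel with the noise variance acting as a ridge term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for engineered molecular descriptors: 80 "compounds" with
# 3 features and a smooth synthetic "activity" (not data from the study).
X = rng.normal(size=(80, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=80)

X_train, X_test = X[:60], X[60:]
y_train, y_test = y[:60], y[60:]

def rbf_kernel(A, B, length_scale=1.5):
    """Squared-exponential kernel, a default choice in GP regression."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

# GP posterior mean on centred targets; the noise variance acts as ridge.
y_mean = y_train.mean()
K = rbf_kernel(X_train, X_train) + 1e-2 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train - y_mean)
y_pred = rbf_kernel(X_test, X_train) @ alpha + y_mean

rmse_gp = np.sqrt(np.mean((y_pred - y_test) ** 2))
rmse_mean = np.sqrt(np.mean((y_mean - y_test) ** 2))
print(f"GP RMSE {rmse_gp:.3f} vs mean-only RMSE {rmse_mean:.3f}")
```

On real QSAR data the descriptors would be fingerprints or physicochemical features rather than random vectors, but the fitting step is the same.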

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 81
1642 Digitizing Masterpieces in Italian Museums: Techniques, Challenges and Consequences from Giotto to Caravaggio

Authors: Ginevra Addis

Abstract:

The possibility of reproducing physical artifacts in a digital format is one of the opportunities offered by the technological advancements in information and communication most frequently promoted by museums. Indeed, the study and conservation of our cultural heritage have seen significant advancement due to the three-dimensional acquisition and modeling technology. A variety of laser scanning systems has been developed, based either on optical triangulation or on time-of-flight measurement, capable of producing digital 3D images of complex structures with high resolution and accuracy. It is necessary, however, to explore the challenges and opportunities that this practice brings within museums. The purpose of this paper is to understand what change is introduced by digital techniques in those museums that are hosting digital masterpieces. The methodology used will investigate three distinguished Italian exhibitions, related to the territory of Milan, trying to analyze the following issues about museum practices: 1) how digitizing art masterpieces increases the number of visitors; 2) what the need that calls for the digitization of artworks; 3) which techniques are most used; 4) what the setting is; 5) the consequences of a non-publication of hard copies of catalogues; 6) envision of these practices in the future. Findings will show how interconnection plays an important role in rebuilding a collection spread all over the world. Secondly how digital artwork duplication and extension of reality entail new forms of accessibility. Thirdly, that collection and preservation through digitization of images have both a social and educational mission. Fourthly, that convergence of the properties of different media (such as web, radio) is key to encourage people to get actively involved in digital exhibitions. The present analysis will suggest further research that should create museum models and interaction spaces that act as catalysts for innovation.

Keywords: digital masterpieces, education, interconnection, Italian museums, preservation

Procedia PDF Downloads 175
1641 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client’s business automation model. Higher audit risk will consequently cause higher audit fees. Higher audit fees for clients with a high automation level are most pronounced in Big 4 auditors’ behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization moderates the effect of auditor quality on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client’s industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm with a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm-level fixed effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability and auditor quality. We chose the fixed effects model as it makes it possible to control for variables that have not been or cannot be measured.
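The appeal of the fixed effects ("within") estimator can be sketched in a few lines: demeaning each variable within a firm sweeps out any time-invariant firm effect, measured or not. The panel, coefficients, and noise below are invented purely for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic firm-year panel: 50 firms x 11 years (2005-2015), one
# regressor ("digitalization") plus an unobserved firm effect.
n_firms, n_years = 50, 11
firm = np.repeat(np.arange(n_firms), n_years)
digital = rng.normal(size=n_firms * n_years)
firm_effect = rng.normal(scale=2.0, size=n_firms)[firm]
beta_true = 0.8
log_fee = (beta_true * digital + firm_effect
           + 0.3 * rng.normal(size=n_firms * n_years))

def within_demean(v, groups):
    """Subtract each group's mean: the fixed-effects transformation."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

x_w = within_demean(digital, firm)
y_w = within_demean(log_fee, firm)
beta_hat = (x_w @ y_w) / (x_w @ x_w)   # OLS on demeaned data
print(f"true beta = {beta_true}, FE estimate = {beta_hat:.3f}")
```

A pooled OLS on the raw data would be biased whenever the firm effect correlates with the regressor; the within estimator is immune to that by construction.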

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 302
1640 Deuterium Effect on the Growth of the Fungi Aspergillus Fumigatus and Candida Albicans

Authors: Farzad Doostishoar, Abdolreza Hasanzadeh, Seyed Amin Ayatolahi Mousavi

Abstract:

Introduction and Goals: Deuterium behaves differently from its lighter isotope, hydrogen, in chemical reactions and biochemical processes. The behavioral difference between a heavier and a lighter isotope is negligible for heavy atoms but significant for the lightest ones. Since water accounts for most of the body mass of all living creatures, deviations from its natural deuterium level can be significant. In this article we study the effect of reduced deuterium on fungal cells; if cell growth depends on the deuterium concentration of the environment, the effect can then be tested in in vivo models as well. Methods: First, we measured the deuterium concentration of the distilled water; this analysis was performed by the Arak heavy water company. The deuterium was then diluted to 1/2, 1/4, 1/8 and 1/16 of its natural concentration by adding deuterium-depleted water used to prepare the media. In three of the samples the deuterium concentration was increased by adding D2O to 10, 50 and 100 times the natural concentration. Candida albicans was grown on Sabouraud medium and Aspergillus fumigatus on Sabouraud medium containing chloramphenicol. After culturing the fungal species, the media for each species were kept in a shaker incubator for 10 days at 25 °C. On different days and at different times the plates were examined morphologically, and selected microscopic characteristics were recorded as well. The experiments and cultures were repeated three times. Results: Statistical analysis with the paired-sample t-test showed that the growth of Aspergillus fumigatus decreased significantly at a concentration of 72 ppm (half the deuterium concentration of the negative control), and growth fell significantly relative to the negative control as the deuterium concentration was reduced. The project results showed that Candida albicans was sensitive to deuterium reduction and depletion at all concentrations.
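The paired-sample t-test used in the analysis is simple to compute directly. The colony measurements below are hypothetical numbers chosen for illustration, not the study's data.

```python
import math

def paired_t(a, b):
    """Paired-sample t statistic for two lists of matched measurements."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical colony diameters (mm) at natural vs. half deuterium level.
control = [41.0, 39.5, 42.3, 40.1, 41.8, 38.9]
depleted = [36.2, 35.0, 37.9, 35.5, 36.8, 34.1]
t = paired_t(control, depleted)
print(f"t = {t:.2f} on {len(control) - 1} df")
```

The statistic is then compared against the t distribution with n-1 degrees of freedom to obtain the significance level reported in the abstract.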

Keywords: deuterium, cancer cell, growth, candida albicans

Procedia PDF Downloads 401
1639 DNA Double-Strand Break–Capturing Nuclear Envelope Tubules Drive DNA Repair

Authors: Mitra Shokrollahi, Mia Stanic, Anisha Hundal, Janet N. Y. Chan, Defne Urman, Chris A. Jordan, Anne Hakem, Roderic Espin, Jun Hao, Rehna Krishnan, Philipp G. Maass, Brendan C. Dickson, Manoor P. Hande, Miquel A. Pujana, Razqallah Hakem, Karim Mekhail

Abstract:

Current models suggest that DNA double-strand breaks (DSBs) can move to the nuclear periphery for repair. It is unclear to what extent human DSBs display such repositioning. Here we show that the human nuclear envelope localizes to DSBs in a manner depending on DNA damage response (DDR) kinases and cytoplasmic microtubules acetylated by α-tubulin acetyltransferase-1 (ATAT1). These factors collaborate with the linker of nucleoskeleton and cytoskeleton complex (LINC), nuclear pore complex (NPC) protein NUP153, the nuclear lamina and kinesins KIF5B and KIF13B to generate DSB-capturing nuclear envelope tubules (dsbNETs). dsbNETs are partly supported by nuclear actin filaments and the circadian factor PER1 and reversed by kinesin KIFC3. Although dsbNETs promote repair and survival, they are also co-opted during poly (ADP-ribose) polymerase (PARP) inhibition to restrain BRCA1-deficient breast cancer cells and are hyper-induced in cells expressing the aging-linked lamin A mutant progerin. In summary, our results advance understanding of nuclear structure-function relationships, uncover a nuclear-cytoplasmic DDR and identify dsbNETs as critical factors in genome organization and stability.

Keywords: DNA damage response, genome stability, nuclear envelope, cancer, age-related disorders

Procedia PDF Downloads 16
1638 A Secreted Protein Can Attenuate High Fat Diet Induced Obesity and Metabolic Syndrome in Mice

Authors: Abdul Soofi, Katherine Wolf, Egon Ranghini, Gregory Dressler

Abstract:

Obesity and its associated complications, such as insulin resistance and non-alcoholic fatty liver disease, are reaching epidemic proportions. In mice, the TGF-β superfamily is implicated in the regulation of white and brown adipose tissues differentiation. The Kielin/Chordin-like Protein (KCP) is a secreted regulator of the TGF-β superfamily pathways that can inhibit both TGF-β and Activin signals while enhancing the Bone Morphogenetic protein (BMP) signaling. However, the effects of KCP on metabolism and obesity have not been studied in animal models. Thus, we examined the effects of KCP loss or gain of function in mice that were maintained on either a regular or a high fat diet. Loss of KCP sensitized mice to obesity and associated complications such as hepatic steatosis and glucose intolerance. In contrast, transgenic mice that expressed KCP in the kidney, liver and adipose tissues were resistant to developing high fat diet induced obesity and had significantly reduced white adipose tissue. KCP over-expression was able to shift the pattern of Smad signaling in vivo, to increase the levels of P-Smad1 and decrease P-Smad3, resulting in resistance to high fat diet induced hepatic steatosis and glucose intolerance. In aging mice, loss of KCP promoted liver pathology even when mice were fed a normal diet. The data demonstrate that shifting the TGF-β superfamily signaling with a secreted inhibitor or enhancer can alter the physiology of adipose tissue to reduce obesity and can inhibit the initiation and progression of hepatic steatosis to significantly reduce the effects of high fat diet induced metabolic disease.

Keywords: adipose tissue, KCP, obesity, TGF-β, BMP, hepatic steatosis, metabolic syndrome

Procedia PDF Downloads 353
1637 Drape Simulation by Commercial Software and Subjective Assessment of Virtual Drape

Authors: Evrim Buyukaslan, Simona Jevsnik, Fatma Kalaoglu

Abstract:

Simulation of fabrics is more difficult than any other simulation due to the complex mechanics of fabrics. Most virtual garment simulation software uses a mass-spring model and incorporates fabric mechanics into the simulation model. The accuracy and fidelity of such virtual garment simulation software remain open questions. Drape is a subjective phenomenon, and the evaluation of drape has been studied since the 1950s. On the other hand, fabric and garment simulation is relatively new. Understanding how subjects perceive drape when looking at fabric simulations is critical as virtual try-on becomes more important with growing online apparel sales. The projected future of online apparel retailing is that users may view their avatars and try the garment on their avatars in a virtual environment. It is a well-known fact that users will not be eager to accept this innovative technology unless it is realistic enough. Therefore, it is essential to understand what users see when fabrics are displayed in a virtual environment. Are they able to distinguish the differences between various fabrics in a virtual environment? The purpose of this study is to investigate human perception when looking at a virtual fabric and to determine the most visually noticeable drape parameter. To this end, five different fabrics were mechanically tested, and their drape simulations were generated by commercial garment simulation software (Optitex®). The simulation images were processed by image analysis software to calculate drape parameters, namely the drape coefficient, node severity, and peak angles. A questionnaire was developed to evaluate drape properties subjectively in a virtual environment. The drape simulation images were shown to 27 subjects, who were asked to rank the samples according to the questioned drape property. The answers were compared to the calculated drape parameters. The results show that subjects are quite sensitive to changes in the drape coefficient, while they are not very sensitive to changes in node dimensions and node distributions.
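The drape coefficient is conventionally computed from the projected (shadow) area of the draped specimen relative to the flat fabric and the support disk. A minimal sketch with hypothetical sample dimensions (the study's actual specimen sizes are not given in the abstract):

```python
import math

def drape_coefficient(r_fabric, r_disk, projected_area):
    """Cusick-style drape coefficient (%): the share of the annular
    area between disk and fabric edge still covered by the shadow."""
    a_fabric = math.pi * r_fabric ** 2
    a_disk = math.pi * r_disk ** 2
    return 100.0 * (projected_area - a_disk) / (a_fabric - a_disk)

# Hypothetical sample: 15 cm fabric radius, 9 cm support disk, and a
# measured shadow area of 450 cm^2 from image analysis.
dc = drape_coefficient(15.0, 9.0, 450.0)
print(f"drape coefficient = {dc:.1f}%")
```

A stiff fabric whose shadow nearly covers the annulus scores near 100%, while a limp fabric that collapses to the disk scores near 0%, which is why subjects track this parameter so readily.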

Keywords: drape simulation, drape evaluation, fabric mechanics, virtual fabric

Procedia PDF Downloads 338
1636 Numerical Investigation of the Needle Opening Process in a High Pressure Gas Injector

Authors: Matthias Banholzer, Hagen Müller, Michael Pfitzner

Abstract:

Gas internal combustion engines are widely used as propulsion systems or in power plants to generate heat and electricity. Among the different injection methods, including manifold port fuel injection and direct injection, the latter has more potential to increase the specific power by avoiding air displacement in the intake and to reduce combustion anomalies such as backfire or pre-ignition. During the opening process of the injector, multiple flow regimes occur: subsonic, transonic and supersonic. To cover this wide range of Mach numbers, a compressible pressure-based solver is used. While the standard Pressure Implicit with Splitting of Operators (PISO) method is used for the coupling between velocity and pressure, a high-resolution non-oscillatory central scheme established by Kurganov and Tadmor calculates the convective fluxes. A blending function based on the local Mach and CFL numbers switches between the compressible and incompressible regimes of the developed model. As the considered operating points are well above the critical state of the fluids used, the ideal gas assumption is no longer valid. For the real-gas thermodynamics, a model based on the Soave-Redlich-Kwong equation of state was implemented. The caloric properties are corrected using a departure formalism; for the viscosity and the thermal conductivity, the empirical correlation of Chung is used. For the injector geometry, the dimensions of a diesel injector were adapted. Simulations were performed using different nozzle and needle geometries and opening curves. It can be clearly seen that all three parameters have a significant influence.
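The abstract does not give the form of the Mach/CFL blending function, so the following is only a generic sketch of how such a switch between the incompressible and compressible branches might be ramped smoothly; the thresholds and the smoothstep shape are assumptions, not the authors' formulation.

```python
def blend_factor(mach, cfl, mach_lo=0.2, mach_hi=0.6, cfl_max=1.0):
    """Hypothetical blending function: 0 selects the incompressible
    branch, 1 the compressible branch. Ramped smoothly on the local
    Mach number and scaled down where the local CFL number exceeds
    what the explicit flux evaluation tolerates."""
    t = (mach - mach_lo) / (mach_hi - mach_lo)
    t = min(1.0, max(0.0, t))
    s = t * t * (3 - 2 * t)            # smoothstep ramp in Mach
    return s * min(1.0, cfl_max / max(cfl, 1e-12))

low = blend_factor(0.05, cfl=0.5)   # deep subsonic -> incompressible
mid = blend_factor(0.40, cfl=0.5)   # transonic transition region
high = blend_factor(0.90, cfl=0.5)  # supersonic -> fully compressible
print(low, mid, high)
```

The key property is a continuous, monotone transition so the solver does not flip discontinuously between pressure-velocity couplings from cell to cell.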

Keywords: high pressure gas injection, hybrid solver, hydrogen injection, needle opening process, real-gas thermodynamics

Procedia PDF Downloads 461
1635 Seismic Assessment of Passive Control Steel Structure with Modified Parameter of Oil Damper

Authors: Ahmad Naqi

Abstract:

Today, passively controlled buildings are becoming widely popular due to their excellent lateral load resistance. Typically, these buildings are enhanced with damping devices that are in high market demand. Some manufacturers falsify the damping device parameters during production to meet this demand. Therefore, this paper evaluates the seismic performance of buildings equipped with damping devices whose parameters are intentionally modified to simulate falsified devices. For this purpose, three benchmark buildings of 4, 10, and 20 stories were selected from the JSSI (Japan Society of Seismic Isolation) manual. The buildings are special moment-resisting steel frames with oil dampers in the longitudinal direction only. For each benchmark building, two types of structural elements were designed to resist the lateral load with and without damping devices (hereafter known as the Trimmed and Conventional Buildings). The target buildings were modeled using STERA-3D, finite-element-based software developed for research purposes. With the software, one can develop either a three-dimensional model (3DM) or a lumped mass model (LMM). First, the seismic performance of the 3DM and LMM models was evaluated, and excellent agreement was found for the target buildings. The simplified LMM was then used in this study to produce 66 cases for both building types. The device parameters were modified by ±40% and ±20% to represent many possible conditions of falsification. It is verified that buildings designed to sustain the lateral load with the support of damping devices (Trimmed Buildings) are much more threatened by device falsification than buildings merely strengthened by damping devices (Conventional Buildings).
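Oil dampers of the kind in the JSSI manual are often modeled with a bilinear force-velocity law. The sketch below uses hypothetical coefficients to show how reducing the initial damping coefficient by 40%, one of the modification levels studied, lowers the damper force at a given velocity; the numbers are not taken from the benchmark buildings.

```python
def oil_damper_force(v, c1=2.0, v_relief=0.3, c2=0.15):
    """Hypothetical bilinear oil damper: initial coefficient c1
    (kN*s/mm) up to the relief velocity, then a much smaller
    post-relief slope c2 beyond it."""
    sign = 1.0 if v >= 0 else -1.0
    v_abs = abs(v)
    if v_abs <= v_relief:
        return sign * c1 * v_abs
    return sign * (c1 * v_relief + c2 * (v_abs - v_relief))

nominal = oil_damper_force(0.5)                    # as designed
falsified = oil_damper_force(0.5, c1=2.0 * 0.6)    # c1 reduced by 40%
print(f"nominal {nominal:.2f} kN vs falsified {falsified:.2f} kN")
```

In a Trimmed Building the frame relies on this force for its lateral resistance, which is why the same parameter reduction is far more damaging there than in a Conventional Building.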

Keywords: passive control system, oil damper, seismic assessment, lumped mass model

Procedia PDF Downloads 114
1634 State and Determinant of Caregiver’s Mental Health in Thailand: A Household Level Analysis

Authors: Ruttana Phetsitong, Patama Vapattanawong, Malee Sunpuwan, Marc Voelker

Abstract:

The majority of care for older people at home in Thai society falls upon caregivers, resulting in mental health problems for caregivers. Beyond individual characteristics, household factors might have a profound effect on a caregiver’s mental health, but reliable data capturing this at the household level have been limited to date. The objectives of the present study were to explore the levels of Thai caregivers’ mental health and to investigate the factors affecting mental health at the household level. Data were obtained from the 2011 National Survey of Thai Older Persons conducted by the National Statistical Office of Thailand. Caregivers’ mental health was measured using the 15-item short version of the Thai Mental Health Indicator (TMHI-15) developed by the Department of Mental Health, Ministry of Public Health. Multivariate logistic regression models were used to explore the impact of potential factors on caregivers’ mental health. The TMHI-15 produced an overall average caregiver mental health score of 30.9 out of 45 (SD 5.3). The score can be categorized into good (34.02-45), fair (27.01-34), and poor (0-27). Duration of care for older people, household wealth, and functional dependency of the older person significantly predicted total caregiver mental health. The household economic factor was key in predicting better mental health: compared to the poorest households, the adjusted effect of the fifth quintile of household wealth was high (OR=2.34; 95%CI=1.47-3.73). The findings of this study provide a fuller picture and a better understanding of the level of, and the factors that shape, the mental health of Thai caregivers. Health care providers and policymakers should consider these factors when designing interventions aimed at alleviating the psychological burden of caregivers who provide care for older people at home.
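The reported odds ratio and confidence interval follow from a logistic-regression coefficient and its standard error in the usual way. In the sketch below the standard error is back-solved from the paper's reported CI rather than taken from the study, purely to show the arithmetic.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio with 95% CI from a logistic-regression coefficient:
    exponentiate the coefficient and its Wald interval endpoints."""
    return tuple(math.exp(beta + k * z * se) for k in (-1, 0, 1))

# Hypothetical coefficient reproducing the reported effect of the
# richest household-wealth quintile: OR = 2.34, 95% CI 1.47-3.73.
beta = math.log(2.34)
se = (math.log(3.73) - math.log(1.47)) / (2 * 1.96)
lo, or_, hi = odds_ratio_ci(beta, se)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the Wald interval is symmetric on the log-odds scale, the CI is asymmetric around the odds ratio itself, as in the paper's reported (1.47, 3.73).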

Keywords: caregiver’s mental health, household, older people, Thailand

Procedia PDF Downloads 144
1633 An Exploration of Cross-Cultural Consumer Behaviour - The Characteristics of Chinese Consumers’ Decision Making in Europe

Authors: Yongsheng Guo, Xiaoxian Zhu, Mandella Osei-Assibey Bonsu

Abstract:

This study explores the effects of national culture on consumer behaviour by identifying the characteristics of Chinese consumers’ decision making in Europe. It offers a better understanding of how cultural factors affect consumers’ behaviour and how consumers make decisions in other nations with different cultures. It adopted a grounded theory approach and conducted twenty-four in-depth interviews. Grounded theory models are developed to link the causal conditions, processes and consequences. Results reveal that cultural factors including conservatism, emotionality, acquaintance community, long-term orientation and principles affect Chinese consumers when making purchase decisions in Europe. Most Chinese consumers plan and prepare their expenditure, stay in Europe as cultural learners, purchase durable products or assets as investments, and share their experiences within a community. This study identified potential problems such as the political and social environment, complex procedures, and restrictions. This study found that external factors influence internal factors, and internal characteristics in turn determine consumer behaviour. This study proposes that cultural traits developed through convergent evolution via social selection, and that Chinese consumers retain most traits but adapt some perceptions and actions over time in other countries. This study suggests that cultural marketing could be adopted by companies to reflect consumers’ preferences. Agencies, shops, and the authorities could take actions to reduce the complexity and restrictions.

Keywords: national culture, consumer behaviour, decision making, cultural marketing

Procedia PDF Downloads 94
1632 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics

Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn–Sham equations yet requires significant computational resources, restricting the size of systems and time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1 M LiPF6 electrolyte in EC/EMC (3:7 v/v) for Li battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. This data was then used to train the NequIP model, which efficiently learns from the provided data. NequIP achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while improving the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP’s applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
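The AIMD-to-ML-potential workflow can be caricatured in a few lines: generate reference energies, fit a surrogate, and check it on held-out configurations. NequIP itself is a deep E(3)-equivariant network trained on positions, forces, and energies; the linear least-squares fit on synthetic descriptors below is only a stand-in to show the data flow, with every number invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Conceptual stand-in for the AIMD -> ML potential pipeline: reference
# "AIMD" energies for configurations described by a feature vector,
# and a surrogate fitted to them by least squares.
n_cfg, n_feat = 200, 8
descriptors = rng.normal(size=(n_cfg, n_feat))
w_true = rng.normal(size=n_feat)
e_aimd = descriptors @ w_true + 0.01 * rng.normal(size=n_cfg)

# Train on 150 configurations, validate on the remaining 50 -- the
# held-out check mirrors how an MLIP is validated before deployment.
train, test = slice(0, 150), slice(150, None)
w_fit, *_ = np.linalg.lstsq(descriptors[train], e_aimd[train], rcond=None)
err = np.abs(descriptors[test] @ w_fit - e_aimd[test]).mean()
print(f"mean abs energy error on held-out configs: {err:.4f}")
```

In the real workflow this validated model replaces the expensive quantum-mechanical force call inside the LAMMPS time-stepping loop, which is where the speed-up over AIMD comes from.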

Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network

Procedia PDF Downloads 18
1631 Effect of One-Period of SEAS Exercises on Some Spinal Biomechanical and Postural Parameters in the Students with Idiopathic Scoliosis

Authors: Zandi Ahmad, Sokhanguei Yahya, Saboonchi Reza

Abstract:

Objective: The new and modern lifestyle, especially in the twenty-first century, and the lack of movement of the spinal structure have made patients, physicians in the field of health, and insurance companies in developed and developing countries worry more than before about abnormalities of the spinal column, a great healthcare problem. The high prevalence of spinal column disorders in all age groups, from children to adults, and in all professions has led researchers to the idea of giving an opportunity to all those who worry about the dangers threatening the spinal column. One of the corrective methods for these patients is the use of SEAS exercises. Materials and Methods: This study aims to investigate the effect of one period of SEAS exercises on some spinal biomechanical and postural parameters in students with idiopathic scoliosis. Given the nature of the study, the research objectives, and the data collection methods, the current research is a semi-empirical survey. The research population comprises students with idiopathic scoliosis. A total of 30 students were selected using convenience sampling and divided into two groups: a control group and a SEAS exercise group. A scoliometer was used for data collection. Descriptive statistics were used to categorize the findings. The Kolmogorov-Smirnov test was used to confirm that the distribution of the data is normal, and the t-test was used to assess effectiveness. Hypothesis testing was done using SPSS 21. Conclusion: Results show that SEAS exercises have a significant effect in Adam’s test. Therefore, according to the obtained results, SEAS exercises can be used to treat idiopathic scoliosis among students. Further studies with larger samples and longer treatment periods, as well as more follow-up investigations, appear essential to prove these effects.

Keywords: SEAS exercises, idiopathic scoliosis, Adam’s test, exercise

Procedia PDF Downloads 290
1630 Human Metabolism of the Drug Candidate PBTZ169

Authors: Vadim Makarov, Stewart T. Cole

Abstract:

PBTZ169 is a novel drug candidate with high efficacy in animal models, and a combination treatment of PBTZ169 with BDQ and pyrazinamide was shown to be more efficacious than the standard treatment for tuberculosis in a mouse model. The target of PBTZ169 is the well-known DprE1, an essential enzyme in cell wall biosynthesis. The crystal structure of the DprE1-PBTZ169 complex reveals the formation of a semimercaptal adduct with Cys387 in the active site and explains the irreversible inactivation of the enzyme. Furthermore, this drug candidate demonstrated ‘drug-like’ properties during preclinical research, which made it an attractive candidate to treat tuberculosis in humans. During the first clinical trials, several cohorts of healthy volunteers were treated with single doses of PBTZ169, and a two-week repeated treatment was chosen for the two maximal doses. As expected, PBTZ169 was well tolerated, and no significant toxicity effects were observed during the trials. The metabolism study showed that human metabolism of PBTZ169 is very different from the microbial or animal transformation of the compound: the main pathway of microbial, mouse and, to a lesser extent, rat metabolism involves reduction processes, whereas human metabolism mainly involves oxidation processes. Due to this difference, we observed several metabolites of PBTZ169 with antitubercular activity in humans, and we can now conclude that the antituberculosis activity of PBTZ169 in animals results not only from the activity of the drug itself but from the combined activity of the drug and its metabolites. Direct antimicrobial plasma activity was studied, and such activity was observed for 24 hours after human treatment at some doses. These data suggest a good chance of PBTZ169 being efficacious in humans for the treatment of TB infection. The second phase of clinical trials started in the summer of 2017 and continues to the present day. Available data will be presented.

Keywords: clinical trials, DprE1, PBTZ169, metabolism

Procedia PDF Downloads 166
1629 Time Driven Activity Based Costing Capability to Improve Logistics Performance: Application in Manufacturing Context

Authors: Siham Rahoui, Amr Mahfouz, Amr Arisha

Abstract:

In a highly competitive environment characterised by uncertainty and disruptions, such as the recent COVID-19 outbreak, supply chains (SC) face the challenge of maintaining their costs at minimum levels while continuing to provide customers with high-quality products and services. More importantly, businesses in such an economic context strive to survive by keeping the cost of the activities they undertake (such as logistics) low and in-house. To do so, managers need to understand the costs associated with different products and services in order to have a clear vision of SC performance, maintain profitability levels, and make strategic decisions. In this context, the SC literature has explored different costing models that seek to determine the costs of undertaking supply chain-related activities. While some cost accounting techniques have been extensively explored in the SC context, more contributions are needed to explore the potential of time-driven activity-based costing (TDABC). More specifically, more applications are needed in the manufacturing context of the SC, where the debate is ongoing. The aim of the study is to evaluate the capability of the technique to assess the operational performance of the logistics function. Through a case study methodology applied to a manufacturing company operating in the automotive industry, TDABC evaluates the efficiency of the current configuration and its logistics processes. The study shows that monitoring process efficiency and cost efficiency leads to strategic decisions that contributed to improving the overall efficiency of the logistics processes.
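TDABC's core computation is simple enough to sketch: a single capacity cost rate for the resource pool, multiplied by the estimated unit time of each activity. The resource pool, activities, and figures below are hypothetical, not the case company's.

```python
def tdabc_cost(total_capacity_cost, practical_capacity_min, activities):
    """Time-driven ABC: one capacity cost rate (cost per minute of
    the resource pool) times each activity's unit time and volume."""
    rate = total_capacity_cost / practical_capacity_min
    return {name: rate * minutes * volume
            for name, (minutes, volume) in activities.items()}

# Hypothetical logistics resource pool: 120,000 EUR per quarter with
# 60,000 practical minutes; each activity is (minutes/unit, units).
costs = tdabc_cost(120_000, 60_000, {
    "receiving": (4.0, 3_000),
    "picking": (2.5, 8_000),
    "shipping": (6.0, 1_500),
})
used = sum(costs.values())
print(costs)
print(f"assigned {used:.0f} of 120000 -> "
      f"{120_000 - used:.0f} unused capacity cost")
```

The gap between total capacity cost and the cost assigned to activities is the unused-capacity figure that makes TDABC useful for monitoring both process efficiency and cost efficiency, as the study does.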

Keywords: efficiency, operational performance, supply chain costing, time driven activity based costing

Procedia PDF Downloads 165
1628 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, is consistent at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections is useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model, however, has no such computational problems. To improve replicability, data on 92 subjects were obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find the relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing the dimensionality by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module in other mathematical models.
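The hub-based variance-covariance framework can be sketched as a factor model: regress each ROI time series on a few hub time series, then rebuild the full covariance from the hub covariance plus idiosyncratic residual variances. The time series below are synthetic (the abstract's 92-subject data are not reproduced here), so the fit is better than the 0.59-0.66 reported on real data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic resting-state data: 3 hub time series drive 40 ROIs
# through loadings B, plus idiosyncratic noise.
n_t, n_hub, n_roi = 300, 3, 40
hubs = rng.normal(size=(n_t, n_hub))
B_true = rng.normal(size=(n_roi, n_hub))
rois = hubs @ B_true.T + 0.5 * rng.normal(size=(n_t, n_roi))

# Multivariate regression of every ROI on the hubs, then
#   Sigma_model = B' Sigma_hub B + diag(residual variances),
# separating systemic (hub-driven) structure from idiosyncratic noise.
B_hat, *_ = np.linalg.lstsq(hubs, rois, rcond=None)   # (n_hub, n_roi)
resid = rois - hubs @ B_hat
sigma_model = (B_hat.T @ np.cov(hubs, rowvar=False) @ B_hat
               + np.diag(resid.var(axis=0, ddof=1)))

# Compare model and empirical off-diagonal covariances.
iu = np.triu_indices(n_roi, 1)
corr = np.corrcoef(sigma_model[iu],
                   np.cov(rois, rowvar=False)[iu])[0, 1]
print(f"model vs empirical off-diagonal covariance match: {corr:.2f}")
```

The dimensionality reduction the abstract mentions falls out directly: the model stores n_roi x n_hub loadings plus n_roi residual variances instead of a full n_roi x n_roi matrix.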

Keywords: functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity

Procedia PDF Downloads 151
1627 Analysis of Career Support Programs for Olympic Athletes in Japan with Fifteen Conceptual Categories

Authors: Miyako Oulevey, Kaori Tsutsui, David Lavallee, Naohiko Kohtake

Abstract:

The Japan Sports Agency has made efforts to unify several career support programs for Olympic athletes ahead of the 2020 Tokyo Olympics. One of these programs, the Japan Olympic Committee Career Academy (JCA), was established in 2008 to support Olympic athletes at retirement. Research focusing on the service content of sport career support programs can help athletes experience a more positive transition. This study was designed to investigate the service content of the JCA program in relation to athletes’ career transition needs, including any differences in the reasons for retirement between Summer/Winter and male/female Olympic athletes, and to suggest directions for unifying the career support programs in Japan after hosting the Olympic Games, using sport career transition models. Semi-structured interviews were conducted with the JCA director, who started the program and has managed it since its inception, and a total of 15 conceptual categories were generated by the analysis: four conceptual categories concerning the “JCA situation”, four concerning “Athletes using JCA”, and seven concerning “JCA current difficulties”. The analysis revealed that the JCA provides occupational support for both current and retired Olympic athletes; that other supports, such as psychological support, remain unclear owing to the lack of psychological professionals in the JCA and the difficulty of collaborating with other sports organizations; and that there are differences in tendencies to visit the JCA, financial situations, and career choices depending on Summer/Winter and male/female athletes.

Keywords: career support programs, causes of career termination, Olympic athlete, Olympic committee

Procedia PDF Downloads 145
1626 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge

Authors: Yulan Wu

Abstract:

The spread of fake news on social media has caused significant societal harm to the public and the nation, with threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain; however, when these methods are applied to social platforms whose news spans multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data, but such methods overlook the fact that a news article typically belongs to multiple domains, leading to a loss of the domain knowledge contained within the news text. Research has also found that news in different domains often uses different vocabulary to describe its content. To address these issues, this paper proposes a fake news detection framework that combines domain knowledge and expert knowledge. First, it uses an unsupervised domain discovery module to generate a low-dimensional vector for each news article, a domain embedding, which retains multi-domain knowledge of the news content. Then, a feature extraction module uses the discovered domain embeddings to guide multiple experts in extracting news knowledge for the overall feature representation. Finally, a classifier determines whether the news is fake. Experiments show that this approach improves multi-domain fake news detection performance while reducing the cost of manually labeling domain labels.

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 73
1625 Lipase-Catalyzed Synthesis of Novel Nutraceutical Structured Lipids in Non-Conventional Media

Authors: Selim Kermasha

Abstract:

A process for the synthesis of structured lipids (SLs) by the lipase-catalyzed interesterification of selected endogenous edible oils, such as flaxseed oil (FO), with medium-chain triacylglycerols, such as tricaprylin (TC), in non-conventional media (NCM), including organic solvent media (OSM) and solvent-free medium (SFM), was developed. The bioconversion yield of the medium-long-medium-type SLs (MLM-SLs) was monitored as the response with the use of selected commercial lipases. In order to optimize the interesterification reaction and to establish a model system, a wide range of reaction parameters, including the TC-to-FO molar ratio, reaction temperature, enzyme concentration, reaction time, agitation speed and initial water activity, were investigated. Multiple response surface methodology (RSM) was used to obtain significant models for the responses and to optimize the interesterification reaction, on the basis of a fractional factorial design (FFD) with centre points at selected variable levels. Based on the objective of each response, the appropriate level combination of the process parameters, along with solutions that met the defined criteria, was obtained by means of a desirability function. The synthesized novel molecules were structurally characterized using silver-ion reversed-phase high-performance liquid chromatography (RP-HPLC) and atmospheric pressure chemical ionization-mass spectrometry (APCI-MS) analyses. The overall experimental findings confirmed the formation of dicaprylyl-linolenyl glycerol, dicaprylyl-oleyl glycerol and dicaprylyl-linoleyl glycerol resulting from the lipase-catalyzed interesterification of FO and TC.
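The RSM fitting step can be illustrated with the standard second-order model on made-up data. The factor names, levels and yields below are purely illustrative, not the study's experimental values.

```python
import numpy as np

# RSM sketch on made-up data: fit the usual second-order model
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# for a response (e.g. SL yield) against two coded factors
# (e.g. molar ratio, temperature).
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1], dtype=float)  # coded levels
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0], dtype=float)
y = np.array([52, 60, 58, 70, 75, 68, 73, 63, 69], dtype=float)  # yields

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(p1, p2):
    """Predicted response at coded factor settings (p1, p2)."""
    return b @ np.array([1, p1, p2, p1**2, p2**2, p1 * p2])

print(np.round(b, 2))           # fitted coefficients
print(round(predict(0, 0), 1))  # predicted centre-point response
```

In practice the fitted surface is then searched (or a desirability function applied across several such responses) to pick the optimal factor combination, as the abstract describes.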

Keywords: enzymatic interesterification, non-conventional media, nutraceuticals, structured lipids

Procedia PDF Downloads 294
1624 International Financial Reporting Standards and the Quality of Banks Financial Statement Information: Evidence from an Emerging Market-Nigeria

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri, Otache Innocent

Abstract:

Given the paucity of studies on IFRS adoption and the quality of banks' accounting information, particularly in emerging economies, this study investigates whether Nigeria's decision to adopt IFRS, beginning from 1 January 2012, is associated with high-quality accounting measures. Consistent with the prior literature, this study measures the quality of financial statement information using earnings management, timeliness of loss recognition and value relevance. A total of twenty Nigerian banks covering a period of six years (2008-2013), divided equally into a pre-adoption period (2008, 2009, 2010) and a post-adoption period (2011, 2012, 2013), were investigated. Following prior studies, eight models in all were employed to investigate the earnings management, timeliness of loss recognition and value relevance of Nigerian banks' accounting quality under the different reporting regimes. The results suggest that IFRS adoption is associated with minimal earnings management, timely recognition of losses and high value relevance of accounting information. In sum, IFRS adoption engenders higher-quality bank financial statement information than local GAAP. This study therefore recommends the global adoption of IFRS and that Nigerian banks embrace good corporate governance practices.
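The Jones model named in the keywords, a standard tool in earnings-management tests, can be sketched as an ordinary least-squares regression whose residuals are read as discretionary accruals. All numbers below are simulated, not the Nigerian bank data.

```python
import numpy as np

# Sketch of the Jones (1991) accruals regression used in earnings-management
# tests, on made-up numbers. Scaled total accruals are regressed on the
# inverse of lagged assets, the change in revenues, and gross PPE; the
# residuals are the "discretionary" accruals.
rng = np.random.default_rng(1)
n = 40
inv_assets = rng.uniform(1e-4, 1e-3, n)       # 1 / lagged total assets
d_rev = rng.normal(0.05, 0.02, n)             # delta revenue / lagged assets
ppe = rng.uniform(0.2, 0.6, n)                # gross PPE / lagged assets

# Simulate total accruals from known coefficients plus noise.
ta = 0.5 * inv_assets + 0.08 * d_rev - 0.04 * ppe + rng.normal(0, 0.01, n)

X = np.column_stack([inv_assets, d_rev, ppe])
coef, *_ = np.linalg.lstsq(X, ta, rcond=None)

nondiscretionary = X @ coef
discretionary = ta - nondiscretionary          # the residual component
print(np.round(coef, 3))
print(round(float(discretionary.std()), 4))    # near the simulated 0.01 noise
```

In IFRS-adoption studies such as this one, the dispersion of these discretionary accruals is then compared between the pre- and post-adoption periods.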

Keywords: IFRS, SAS, quality of accounting information, earnings measurement, discretionary accruals, non-discretionary accruals, total accruals, Jones model, timeliness of loss recognition, value relevance

Procedia PDF Downloads 465
1623 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load Forecasting plays a key role in making today's and tomorrow's Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, enabling features such as higher sustainability, better quality of service, and affordable electricity tariffs. While Load Forecasting is comparatively easy to apply effectively at larger geographic scales, in Smart Micro Grids the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on Load Forecasting is also evaluated. A new definition of Gain is introduced, which innovatively serves as an indicator of short-term prediction capability over consistent time spans. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
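The comparison workflow can be sketched on synthetic household data (the GreenCom datasets are not public). Two of the three techniques are shown; a radial basis function network would slot into the same loop. Mean absolute error (MAE) stands in for the paper's absolute forecast errors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Sketch of a 24-hour-ahead short-term load forecast comparison on
# synthetic hourly data with a daily cycle plus a weather (temperature)
# effect; all signal parameters are illustrative.
rng = np.random.default_rng(7)
hours = np.arange(24 * 60)                       # 60 days of hourly data
temp = 10 + 8 * np.sin(2 * np.pi * hours / (24 * 365)) \
       + rng.normal(0, 1, hours.size)
load = (1.5 + 0.8 * np.sin(2 * np.pi * hours / 24)      # daily cycle
        + 0.05 * temp + rng.normal(0, 0.1, hours.size)) # weather + noise

X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24), temp])
split = 24 * 59                                   # last day held out
Xtr, Xte, ytr, yte = X[:split], X[split:], load[:split], load[split:]

results = {}
for name, model in [("linear", LinearRegression()),
                    ("ann", MLPRegressor(hidden_layer_sizes=(16,),
                                         max_iter=2000, random_state=0))]:
    model.fit(Xtr, ytr)
    results[name] = float(np.mean(np.abs(model.predict(Xte) - yte)))

print({k: round(v, 3) for k, v in results.items()})  # MAE per model
```

Training time, the paper's second criterion, can be added to the loop with a simple timer around `model.fit`.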

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 468
1622 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technology, both in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge motivates the present study, which examines the factors affecting computing students’ awareness of the latest Information and Communication Technologies (ICTs). The aim of the study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model supported by them, followed by the empirical testing of the designed model. The first objective is achieved by a review of the existing literature on technology adoption theories and models. The second objective is achieved through a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey are analyzed using the Statistical Package for the Social Sciences (SPSS) with descriptive statistics, ANOVA and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and prior conditions of computing students and their awareness of general ICT trends and of the Digital Switch Over (DSO), a new technology involving the change from analogue to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions considered in this study are students’ perceived exposure to career guidance and students’ perceived curriculum currency. The results confirm that gender, ethnicity, and having taken a high school computing course affect students’ perceived curriculum currency, while high school location affects students’ awareness of the DSO. The results also confirm that there is a relationship between students’ prior conditions and their awareness of general ICT trends and of the DSO in particular.
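The two inferential tests the survey analysis relies on can be illustrated on made-up Likert-style scores (the SPSS data themselves are not public; variable names below are hypothetical).

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

# Illustrative versions of the Pearson-correlation and one-way ANOVA
# tests used in the survey analysis, on simulated 1-5 scale responses.
rng = np.random.default_rng(3)

# Pearson correlation: perceived curriculum currency vs. ICT-trend awareness.
currency = rng.integers(1, 6, 50).astype(float)
awareness = currency + rng.normal(0, 1.0, 50)   # built-in positive relation
r, p = pearsonr(currency, awareness)
print(round(r, 2), p < 0.05)

# One-way ANOVA: does awareness differ across three school-location groups?
urban = rng.normal(4, 1, 30)
town = rng.normal(4, 1, 30)
rural = rng.normal(3, 1, 30)
f, p_anova = f_oneway(urban, town, rural)
print(round(f, 2))
```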

Keywords: education, information technologies, IDT, awareness

Procedia PDF Downloads 357
1621 The Effects of Orientation on Energy and Plasticity of Metallic Crystalline-Amorphous Interface

Authors: Ehsan Alishahi, Chuang Deng

Abstract:

Commercial applications of bulk metallic glasses (BMGs) have been restricted by the sudden brittle failure mode that is the main drawback of this new class of materials. Crystalline-amorphous (C-A) composites were therefore introduced as a toughening strategy for BMGs. In spite of numerous studies on metallic C-A composites, the fundamental structure-property relations in these composites are not exactly known yet. In this study, we aim to investigate the fundamental properties of the crystalline-amorphous interface in a model Cu/CuZr system using molecular dynamics simulations. Several parameters, including interface energy and mechanical properties, were investigated by means of atomistic models employing an Embedded Atom Method (EAM) potential. It is found that the crystalline-amorphous interfacial energy depends only weakly on the orientation of the crystalline layer, in stark contrast to a regular crystalline grain boundary. Additionally, the results show that the interface controls the yielding of crystalline-amorphous composites during uniaxial tension, either by serving as a source for dislocation nucleation in the crystalline layer or by triggering local shear transformation zones in the amorphous layer. The critical resolved shear stress required to nucleate the first dislocation is also found to depend strongly on the crystalline orientation. Furthermore, the interaction between dislocations and shear localization at crystalline-amorphous interfaces oriented in different directions can lead to a change in the deformation mode. For instance, while dislocations and shear banding are aligned with each other for the {0 0 1} interface plane, the misorientation angle between these failure mechanisms causes more homogeneous deformation for the {1 1 0} and {1 1 1} crystalline-amorphous interfaces. These results should help clarify the failure mechanism of crystalline-amorphous composites under various loading conditions.
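The orientation dependence of the resolved shear stress can be illustrated with Schmid's law, tau = sigma * cos(phi) * cos(lambda). This is an illustration only: the paper's critical resolved shear stresses come from molecular dynamics, not from Schmid's law, but the maximum Schmid factor over the FCC slip systems shows why the three loading axes behave differently.

```python
import numpy as np

# Maximum Schmid factor over the FCC {111}<110> slip systems for the
# three loading axes discussed in the abstract.

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

planes = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]   # {111} normals
dirs = [(0, 1, -1), (1, 0, -1), (1, -1, 0),
        (0, 1, 1), (1, 0, 1), (1, 1, 0)]                   # <110> family

def max_schmid(axis):
    a = unit(axis)
    best = 0.0
    for n in planes:
        for d in dirs:
            if np.dot(n, d) != 0:      # slip direction must lie in the plane
                continue
            m = abs(np.dot(a, unit(n))) * abs(np.dot(a, unit(d)))
            best = max(best, m)
    return best

for axis in [(0, 0, 1), (1, 1, 0), (1, 1, 1)]:
    # [001] and [110] give 0.408; [111] gives the "hard" value 0.272.
    print(axis, round(max_schmid(axis), 3))
```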

Keywords: crystalline-amorphous, composites, orientation, plasticity

Procedia PDF Downloads 293
1620 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation

Authors: Jonah Kenei, Elisha Opiyo

Abstract:

Electronic health records (EHRs), as repositories of patient information, are nowadays the most commonly used technology to record, store and review patient clinical records and to perform other clinical tasks. However, the accurate identification and retrieval of relevant information from clinical records is difficult because clinical documents are unstructured and, in particular, lack a clear organisation. Medical practice therefore faces a challenge from the rapid growth of health information in EHRs, most of it in narrative text form, and effectively managing the growing amount of data for a single patient is becoming important. There is thus a requirement to visualize EHRs in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision-making in scenarios where the data generated continue to grow. Clinical datasets in EHRs offer a lot of potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes. However, in many clinical note datasets, the unstructured nature of the texts remains a common problem. This paper examines the issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals working with narrative text. Our work is the result of a collaborative design process aided by empirical data collected through formal usability testing.
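The "raw text to structure" step can be sketched as a simple header-based segmenter. The header names and the sample note below are illustrative, not taken from the paper; real notes need far more robust handling.

```python
import re

# Minimal sketch: split a narrative clinical note into labelled sections
# by matching common header lines, producing a structure a visualization
# layer could then render.
HEADERS = ["chief complaint", "history of present illness",
           "medications", "assessment and plan"]
pattern = re.compile(r"^(%s):\s*$" % "|".join(HEADERS), re.I | re.M)

note = """Chief Complaint:
Shortness of breath for two days.
Medications:
Lisinopril 10 mg daily.
Assessment and Plan:
Likely CHF exacerbation; start diuresis.
"""

def to_sections(text):
    parts = pattern.split(text)
    # split() with a capturing group yields [preamble, header, body, ...]
    it = iter(parts[1:])
    return {h.lower(): b.strip() for h, b in zip(it, it)}

sections = to_sections(note)
print(sorted(sections))           # section labels found
print(sections["medications"])    # -> Lisinopril 10 mg daily.
```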

Keywords: classification, electronic health records, narrative texts, visualization

Procedia PDF Downloads 118
1619 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models, and academic and industrial circles are taking a keen interest in the field. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. Analysis based on process capability indices (PCIs) has traditionally been conducted assuming that the process under study is in statistical control and generates independent observations over time. However, in practice it is very common to come across processes which, by their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts; even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, the classical control charts exhibit nonrandom patterns and an apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples demonstrate the results.
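The core indices, and the reason autocorrelation widens their confidence intervals, can be sketched with simulated data. The AR(1) parameter, sample sizes and specification limits below are illustrative, not the paper's.

```python
import numpy as np

# For an AR(1) process x_t = phi * x_{t-1} + e_t, observations are
# dependent, which inflates the sampling variability of estimated Cp/Cpk
# relative to independent data with the same marginal variance.

def cp(sd, lsl, usl):
    return (usl - lsl) / (6 * sd)

def cpk(mean, sd, lsl, usl):
    return min(usl - mean, mean - lsl) / (3 * sd)

rng = np.random.default_rng(42)
phi, n, reps = 0.7, 100, 2000
lsl, usl = -4.0, 4.0

def ar1_sample(n):
    x = np.empty(n)
    x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

iid_sd = 1 / np.sqrt(1 - phi**2)                   # same marginal sd
cpk_ar = [cpk(s.mean(), s.std(ddof=1), lsl, usl)
          for s in (ar1_sample(n) for _ in range(reps))]
cpk_iid = [cpk(s.mean(), s.std(ddof=1), lsl, usl)
           for s in (rng.normal(0, iid_sd, n) for _ in range(reps))]

print(round(cp(iid_sd, lsl, usl), 2))              # true Cp of the process
# Estimated Cpk scatters far more under AR(1) than under independence,
# so intervals built from the i.i.d. theory are too narrow.
print(round(np.std(cpk_ar) / np.std(cpk_iid), 1))
```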

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 388
1618 Assessment of the Impacts of Climate Change on Climatic Zones over the Korean Peninsula for Natural Disaster Management Information

Authors: Sejin Jung, Dongho Kang, Byungsik Kim

Abstract:

Assessing the impact of climate change requires the use of a multi-model ensemble (MME) to quantify uncertainties between scenarios and to produce downscaled outlines for simulating climate under the influence of different factors, including topography. This study downscales climate change scenarios from 13 global climate models (GCMs) to assess the impacts of future climate change. Unlike South Korea, North Korea lacks studies using climate change scenarios of the Coupled Model Intercomparison Project (CMIP5), and only recently did the country start projecting extreme precipitation episodes. One of the main purposes of this study is therefore to predict changes in the average climatic conditions of North Korea in the future. Comparing the downscaled climate change scenarios with observation data for a reference period indicates high applicability of the MME. Furthermore, the study classifies climatic zones by applying the Köppen-Geiger climate classification system to the MME, which is validated for future precipitation and temperature. The results suggest that the continental climate (D) covering the inland area in the reference climate is expected to shift into the temperate climate (C). The coefficient of variation (CV) in the temperature ensemble is particularly low for the southern coast of the Korean Peninsula, and accordingly a high possibility of a shifting climatic zone along the coast is predicted. This research was supported by a grant (MOIS-DP-2015-05) of the Disaster Prediction and Mitigation Technology Development Program funded by the Ministry of the Interior and Safety (MOIS, Korea).
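The temperature rule behind the projected D-to-C shift can be sketched with the commonly used 0 degC coldest-month threshold (the full Köppen-Geiger scheme also uses precipitation rules, omitted here). The monthly temperature profile below is illustrative, not an MME output.

```python
# Simplified Köppen-Geiger main groups from monthly mean temperatures:
# E (polar) if the warmest month is below 10 degC, A (tropical) if the
# coldest month is at least 18 degC, otherwise C (temperate) when the
# coldest month is at or above 0 degC and D (continental) when below.

def koppen_main_group(monthly_temps_c):
    coldest, warmest = min(monthly_temps_c), max(monthly_temps_c)
    if warmest < 10:
        return "E"
    if coldest >= 18:
        return "A"
    return "C" if coldest >= 0 else "D"

# Illustrative inland profile whose coldest month sits just below 0 degC:
# a hypothetical 2 degC warming flips the classification from D to C.
reference = [-1, 1, 5, 10, 15, 19, 22, 21, 16, 10, 4, 0]
future = [t + 2 for t in reference]
print(koppen_main_group(reference), koppen_main_group(future))   # D C
```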

Keywords: MME, North Korea, Köppen–Geiger, climatic zones, coefficient of variation (CV)

Procedia PDF Downloads 111
1617 Development of National Scale Hydropower Resource Assessment Scheme Using SWAT and Geospatial Techniques

Authors: Rowane May A. Fesalbon, Greyland C. Agno, Jodel L. Cuasay, Dindo A. Malonzo, Ma. Rosario Concepcion O. Ang

Abstract:

The Department of Energy of the Republic of the Philippines estimates that the country’s energy reserves for 2015 are dwindling, as observed in the rotating power outages in several localities. To help address the energy crisis, a national hydropower resource assessment scheme is developed. Hydropower is a resource derived from flowing water and a difference in elevation. It is a renewable energy resource deemed abundant in the Philippines, an archipelagic country rich in bodies of water and water resources. The objective of this study is to develop a methodology for a national hydropower resource assessment using hydrologic modeling and geospatial techniques, in order to generate resource maps for future reference and use by the government and other stakeholders. The methodology developed for this purpose is focused on two components: the implementation of the Soil and Water Assessment Tool (SWAT) for river discharge, and the use of geospatial techniques to analyze the topography, obtain the head, and generate the theoretical hydropower potential sites. The methodology is tightly coupled with Geographic Information Systems (GIS) to maximize the use of geodatabases and the spatial significance of the identified sites. The hydrologic model used in this workflow is SWAT integrated into the GIS software ArcGIS. The head is determined by a developed algorithm that utilizes a Synthetic Aperture Radar (SAR)-derived digital elevation model (DEM) with a resolution of 10 meters. The initial results of the developed workflow indicate hydropower potential in the river reaches ranging from pico (less than 5 kW) to mini (1-3 MW) theoretical potential.
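The theoretical potential behind the resource maps follows the standard relation P = rho * g * Q * H, with discharge Q from the hydrologic model and head H from the DEM; efficiency is omitted because the study maps theoretical potential. The discharge and head values below are illustrative.

```python
# Theoretical hydropower potential in watts from discharge and head.
RHO, G = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)

def hydropower_watts(q_m3s, head_m):
    return RHO * G * q_m3s * head_m

def classify(p_watts):
    # Only the two size bands named in the abstract; other band
    # definitions vary by agency.
    if p_watts < 5e3:
        return "pico"
    if 1e6 <= p_watts <= 3e6:
        return "mini"
    return "other"

p = hydropower_watts(q_m3s=8.0, head_m=25.0)   # 8 m^3/s over a 25 m head
print(round(p / 1e6, 2), classify(p))          # -> 1.96 mini
```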

Keywords: ArcSWAT, renewable energy, hydrologic model, hydropower, GIS

Procedia PDF Downloads 313
1616 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning

Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis

Abstract:

Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. Their findings support the theory of the 'movement economy' and demonstrate the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities; however, the effects vary between urban contexts. In this paper, the relationship between the distribution of economic activity and the configurational characteristics of the urban network is examined at the segment level. The study area includes three neighbourhood types, urban, suburban, and rural, and among all neighbourhoods three kinds of urban network form are recognised: 'tree-like', grid, and organic. To investigate the nested effects of urban configuration, measured by the space syntax approach, and urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was included in the model as an independent variable. The random-effect ZINB model outperforms the single-level ZINB model and the multilevel linear (ML) model in explaining how the pattern of economic activities is shaped over the urban environment. After adjusting for neighbourhood type and network form, connectivity and syntactic centrality significantly affect the clustering of economic activities. A comparison between established and newly established economic activities illustrates their different preferences in location choice.
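The ZINB distribution at the heart of the model can be sketched directly: a point mass at zero (probability pi, e.g. segments that structurally host no economic activity) mixed with a negative binomial count for the rest. Parameter values below are illustrative, not estimates from the Nanning data, and the multilevel structure is omitted.

```python
import numpy as np
from scipy.stats import nbinom

# Zero-inflated negative binomial probability mass function:
#   P(0) = pi + (1 - pi) * NB(0; n, p)
#   P(k) = (1 - pi) * NB(k; n, p)  for k >= 1
pi, n_param, p_param = 0.3, 2.0, 0.4     # inflation prob., NB(n, p)

def zinb_pmf(k):
    base = nbinom.pmf(k, n_param, p_param)
    return pi * (k == 0) + (1 - pi) * base

k = np.arange(0, 200)
pmf = zinb_pmf(k)
print(round(float(pmf.sum()), 6))        # sums to 1 over all counts
print(round(float(zinb_pmf(0)), 3))      # inflated zero mass: 0.412
```

The regression version ties pi and the NB mean to segment-level covariates (connectivity, centrality, spatial lag), with neighbourhood-level random effects on top.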

Keywords: space syntax, economic activities, multilevel model, Chinese city

Procedia PDF Downloads 124
1615 Biophysical Modeling of Anisotropic Brain Tumor Growth

Authors: Mutaz Dwairy

Abstract:

Solid tumors have high interstitial fluid pressure (IFP), high mechanical stress, and low oxygen levels. Solid stresses may induce apoptosis, stimulate the invasiveness and metastasis of cancer cells, and lower their proliferation rate, while oxygen concentration may affect the response of cancer cells to treatment. Although tumors grow in a nonhomogeneous environment, many existing theoretical models assume homogeneous growth and uniform tissue mechanical properties. The brain, for example, consists of three primary materials: white matter, gray matter, and cerebrospinal fluid (CSF). Tissue inhomogeneity should therefore be considered in the analysis. This study establishes a physical model based on convection-diffusion equations and continuum mechanics principles. The model captures the geometrical inhomogeneity of the brain by including the three different matters in the analysis: white matter, gray matter, and CSF. It also considers fluid-solid interaction and explicitly describes the effects of mechanical factors (e.g., solid stresses and IFP), chemical factors (e.g., oxygen concentration), and biological factors (e.g., cancer cell concentration) on growing tumors. In this article, we apply the model to a brain tumor positioned within the white matter, accounting for the brain's inhomogeneity, to estimate solid stresses, IFP, cancer cell concentration, oxygen concentration, and the deformation of the tissues within the neoplasm and its surroundings. Tumor size was estimated at different time points. This model might be clinically valuable for cancer detection and treatment planning by estimating mechanical stresses, IFP, and oxygen levels in the tissue.
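One ingredient of such models, oxygen transport by diffusion with cellular uptake, can be sketched in one dimension. The geometry and coefficients below are toy values for illustration, not the paper's brain-tissue parameters, and convection and mechanics are omitted.

```python
import numpy as np

# Steady 1-D diffusion with first-order uptake, D * c'' = k * c, on a
# tissue slab supplied from both boundaries, solved by Jacobi relaxation.
L, nx = 200e-6, 101                    # 200 um slab, grid points
D, k = 2e-9, 2.0                       # diffusivity (m^2/s), uptake (1/s)
x = np.linspace(0, L, nx)
dx = x[1] - x[0]

c = np.ones(nx)                        # normalised concentration
c[0] = c[-1] = 1.0                     # vessels at both boundaries
for _ in range(20000):                 # relax to the steady state
    c[1:-1] = (c[:-2] + c[2:]) / (2 + k * dx**2 / D)

# The steady profile is cosh-shaped, with the oxygen minimum mid-slab,
# the qualitative picture behind hypoxic tumor cores.
print(round(float(c.min()), 3), int(np.argmin(c)) == nx // 2)
```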

Keywords: biomechanical model, interstitial fluid pressure, solid stress, tumor microenvironment

Procedia PDF Downloads 46
1614 Probabilistic Crash Prediction and Prevention of Vehicle Crash

Authors: Lavanya Annadi, Fahimeh Jafari

Abstract:

Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel and equipment, but also the loss of life and property in traffic accidents on the road, delays in travel due to traffic congestion, and various indirect costs. Much research has been done to identify the factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the crash probability of vehicles in the United States using machine learning, focusing on natural and structural causes and excluding behavioural causes such as speeding. The factors considered range from weather factors, such as weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to road structure factors such as bumps, roundabouts, no-exit roads, turning loops, and give-ways. Probabilities are divided into ten classes, and all predictions are based on supervised multiclass classification techniques. This study considers all crashes across all states as collected by the US government. To calculate the probability, the multinomial expected value was used and assigned as the classification label. We applied three different classification models: multiclass Logistic Regression, Random Forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the part played by natural and structural causes in crashes. The paper also provides in-depth insights through exploratory data analysis.
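The shape of the multiclass pipeline can be sketched on synthetic stand-in data (the US accident dataset is large and not bundled here, and only three classes are used for brevity where the paper uses ten). A logistic-regression baseline is shown; the paper's best-performing XGBoost model is a drop-in replacement for the classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic crash-probability classification from weather and
# road-structure features; all data-generating choices are illustrative.
rng = np.random.default_rng(0)
n = 3000
visibility = rng.uniform(0, 10, n)       # miles
wind = rng.uniform(0, 40, n)             # mph
bump = rng.integers(0, 2, n)             # road-structure flag

# Latent risk score thresholded into classes 0 (low) / 1 (medium) / 2 (high).
score = 0.25 * wind - 0.8 * visibility + 4 * bump + rng.normal(0, 2, n)
y = np.digitize(score, [-2.0, 4.0])

X = np.column_stack([visibility, wind, bump])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(round(acc, 2))                     # held-out multiclass accuracy
```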

Keywords: road safety, crash prediction, exploratory analysis, machine learning

Procedia PDF Downloads 111