Search results for: logistic model tree
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17550

10530 The Cost of Non-Communicable Diseases in the European Union: A Projection towards the Future

Authors: Desiree Vandenberghe, Johan Albrecht

Abstract:

Non-communicable diseases (NCDs) are responsible for the vast majority of deaths in the European Union (EU) and represent a large share of total health care spending. A future increase in this health and financial burden is likely to be driven by population ageing, lifestyle changes and technological advances in medicine. Without adequate prevention measures, this burden can severely threaten population health and economic development. To tackle this challenge, a correct assessment of the current burden of NCDs is required, as well as a projection of potential increases of this burden. The contribution of this paper is to offer perspective on the evolution of the NCD burden towards the future and to give an indication of the potential of prevention policy. A non-homogeneous semi-Markov model for the EU was constructed, which allowed for a projection of the cost burden for the four main NCDs (cancer, cardiovascular disease, chronic respiratory disease and diabetes mellitus) towards 2030 and 2050. The simulation is based on multiple baseline scenarios that vary in demand and supply factors such as health status, population structure, and technological advances. Finally, in order to assess the potential of preventive measures to curb the cost explosion of NCDs, an additional simulation is executed that includes increased efforts for preventive health care measures. According to the Markov model, by 2030 and 2050, total costs (direct and indirect) in the EU could increase by 30.1% and 44.1% respectively, compared to 2015 levels. An ambitious prevention policy framework for NCDs will be required if the EU wants to meet this challenge of rising costs. To conclude, significant cost increases due to non-communicable diseases are likely to occur due to demographic and lifestyle changes. Nevertheless, an ambitious prevention program throughout the EU can aid in making this cost burden manageable for future generations.
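
The projection logic can be sketched as a discrete-time Markov cohort simulation. The states, transition probabilities, and per-state costs below are hypothetical placeholders for illustration only, not the paper's calibrated non-homogeneous semi-Markov model:

```python
# Minimal Markov cohort sketch: propagate state occupancy year by year
# and accumulate per-state costs. All numbers are hypothetical.
def project_costs(cohort, transitions, annual_cost, years):
    """Return final state occupancy and cumulative cost over `years`."""
    occupancy = dict(cohort)
    total_cost = 0.0
    for _ in range(years):
        nxt = {s: 0.0 for s in occupancy}
        for s, p in occupancy.items():
            for t, pr in transitions[s].items():
                nxt[t] += p * pr          # flow of cohort mass s -> t
        occupancy = nxt
        total_cost += sum(occupancy[s] * annual_cost[s] for s in occupancy)
    return occupancy, total_cost

# Hypothetical three-state model: healthy -> NCD -> dead.
transitions = {
    "healthy": {"healthy": 0.96, "ncd": 0.03, "dead": 0.01},
    "ncd":     {"healthy": 0.00, "ncd": 0.93, "dead": 0.07},
    "dead":    {"healthy": 0.00, "ncd": 0.00, "dead": 1.00},
}
annual_cost = {"healthy": 0.0, "ncd": 10_000.0, "dead": 0.0}  # per person-year
occ, cost_15y = project_costs({"healthy": 1.0, "ncd": 0.0, "dead": 0.0},
                              transitions, annual_cost, 15)
```

A prevention scenario would simply lower the healthy-to-NCD transition probability and rerun the projection, which is the comparison the abstract describes.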

Keywords: non-communicable diseases, preventive health care, health policy, Markov model, scenario analysis

Procedia PDF Downloads 121
10529 Factors of Adoption of the International Financial Reporting Standard for Small and Medium Sized Entities

Authors: Uyanga Jadamba

Abstract:

Globalisation of the world economy has necessitated the development and implementation of a comparable and understandable reporting language suitable for use by all reporting entities. The International Accounting Standards Board (IASB) provides an international reporting language that lets all users understand the financial information of their business and potentially allows them to have access to finance at an international level. The study uses logistic regression analysis to investigate the factors behind adoption of the International Financial Reporting Standard for Small and Medium-sized Entities (IFRS for SMEs). The study started with a list of 217 countries from World Bank data. Due to the lack of availability of data, the final sample consisted of 136 countries, including 60 countries that have adopted the IFRS for SMEs and 76 countries that have not adopted it yet. The study covered the period from 2010 to 2020 and obtained 1,360 observations. The findings confirm that the adoption of the IFRS for SMEs is significantly related to the existence of national reporting standards, law enforcement quality, common law (legal system), and extent of disclosure. The likelihood of adoption of the IFRS for SMEs decreases if the country already has a national reporting standard for SMEs, which suggests that the implementation and transitional costs of changing reporting standards are relatively high. The result further suggests that new standard adoption is easier in countries with constructive law enforcement and effective application of laws. The finding also shows that adoption increases if countries have a common law system, which suggests that efficient reporting regulations are more widespread in these countries. Countries with a high extent of disclosure of their financial information are more likely to adopt the standard than others.
The findings lastly show that audit quality and primary education level have no significant impact on the adoption. One possible explanation for this could be that accounting professionals in developing countries lacked complete knowledge of the international reporting standards even though there was a requirement to comply with them. The study contributes to the literature by providing factors that impact the adoption of the IFRS for SMEs. It helps policymakers to better understand and apply the standard to improve the transparency of financial statements. The benefit of adopting the IFRS for SMEs is significant due to the relaxed and tailored reporting requirements for SMEs, the reduced burden on professionals to comply with the standard, and the transparent financial information provided to gain access to finance. The results of the study are useful to emerging economies in which SMEs dominate the economy, informing their evaluation of whether to adopt the IFRS for SMEs.
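
The shape of the fitted model can be sketched as a logistic function of the significant predictors. The coefficient values below are hypothetical, chosen only to mirror the signs the abstract reports (an existing national standard lowers the adoption odds; law enforcement quality, a common-law system, and disclosure raise them):

```python
import math

# Illustrative logistic model of IFRS-for-SMEs adoption.
# Coefficients are hypothetical; only their signs follow the findings.
def adoption_probability(has_national_standard, law_enforcement,
                         common_law, disclosure, b0=-1.0):
    z = (b0
         - 1.2 * has_national_standard   # existing standard lowers odds
         + 0.8 * law_enforcement         # stronger enforcement raises odds
         + 0.6 * common_law              # common-law system raises odds
         + 0.5 * disclosure)             # more disclosure raises odds
    return 1.0 / (1.0 + math.exp(-z))    # logistic link

p_with_standard = adoption_probability(1, 1, 1, 1)
p_without_standard = adoption_probability(0, 1, 1, 1)
```

Holding the other predictors fixed, the country without a pre-existing national standard gets the higher predicted adoption probability, matching the reported direction of the effect.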

Keywords: IFRS for SMEs, international financial reporting standard, adoption, institutional factors

Procedia PDF Downloads 61
10528 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells under treatment, to minimize side effects such as toxicity-induced tumorigenesis in normal cells, and to keep metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. A nested hybrid differential evolution algorithm was then applied to solve the trilevel MDM problem under each of two nutrient media to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for the DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM.
Two additional media, i.e., DMEM with a cholesterol uptake reaction added and Ham’s medium with that reaction removed, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis proved identifiable whenever no cholesterol uptake reaction was included in the culture medium, and became unidentifiable whenever such a reaction was included.
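
The fuzzy conversion step can be illustrated in miniature: each objective is mapped to a membership degree in [0, 1] between its worst and best attainable values, and a candidate intervention is scored by its least-satisfied objective (the max-min rule). The objective names, bounds, and candidate values below are hypothetical:

```python
# Sketch of fuzzy max-min aggregation of multiple objectives.
def membership(value, worst, best):
    """Linear membership: 0 at the worst value, 1 at the best (maximization)."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def maxmin_score(objective_values, bounds):
    """Score a candidate by its least-satisfied objective."""
    return min(membership(v, *bounds[k]) for k, v in objective_values.items())

# Hypothetical objectives: kill cancer cells, preserve normal growth,
# keep normal-cell fluxes close to the untreated state.
bounds = {"cancer_kill": (0.0, 1.0), "normal_growth": (0.0, 1.0),
          "flux_similarity": (0.0, 1.0)}
candidate_a = {"cancer_kill": 0.9, "normal_growth": 0.8, "flux_similarity": 0.7}
candidate_b = {"cancer_kill": 1.0, "normal_growth": 0.2, "flux_similarity": 0.9}
```

Under max-min scoring, candidate A beats candidate B despite B's perfect cancer-kill value, because B's weak normal-growth objective dominates its score; this is the trade-off balancing that the trilevel MDM formulation then optimizes with differential evolution.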

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 58
10527 Neonatal Seizure Detection and Severity Identification Using Deep Convolutional Neural Networks

Authors: Biniam Seifu Debelo, Bheema Lingaiah Thamineni, Hanumesh Kumar Dasari, Ahmed Ali Dawud

Abstract:

Background: One of the most frequent neurological conditions in newborns is neonatal seizures, which may indicate severe neurological dysfunction. They may be caused by a broad range of problems with the central nervous system during or after pregnancy, infections, brain injuries, and/or other health conditions. These seizures may present very subtle clinical signs, because patterns such as oscillatory (spike) trains begin at relatively low amplitude and gradually increase over time; identification therefore becomes challenging and error-prone when clinical observation is the primary diagnostic basis. Objectives: In this study, a diagnosis system using deep convolutional neural networks is proposed to determine and classify the severity level of neonatal seizures using multichannel neonatal EEG data. Methods: Clinical multichannel EEG datasets were compiled from publicly accessible online sources. Various preprocessing steps were taken, including converting the 2D time series data to equivalent waveform images. The proposed models underwent training, and their performance was evaluated. Results: The proposed CNN performed binary classification with an accuracy of 92.6%, F1-score of 92.7%, specificity of 92.8%, and precision of 92.6%; this model is used to detect newborn seizures. Using the proposed CNN model, multiclass classification was performed with an accuracy of 88.6%, specificity of 92.18%, F1-score of 85.61%, and precision of 88.9%; this model is used to classify the severity level of neonatal seizures. The results demonstrated that the suggested strategy can assist medical professionals in making accurate diagnoses close to healthcare institutions. Conclusion: The developed system was capable of detecting neonatal seizures and has the potential to be used as a decision-making tool in resource-limited areas with a scarcity of expert neurologists.
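
The reported scores are standard confusion-matrix metrics. The sketch below shows how each one is derived; the counts are hypothetical, since the abstract reports only the resulting percentages:

```python
# Binary classification metrics from a confusion matrix
# (tp = true positives, fp = false positives, etc.).
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "specificity": specificity, "f1": f1}

# Hypothetical counts for a 200-recording evaluation set.
m = binary_metrics(tp=90, fp=7, tn=93, fn=10)
```

For the multiclass severity model, the same quantities are typically computed per class from a one-vs-rest confusion matrix and then averaged.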

Keywords: CNN, multichannel EEG, neonatal seizure, severity identification

Procedia PDF Downloads 8
10526 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. Most of our architectures utilize two transformer models, one with a graph attention mechanism for the drug data and the other with a multiheaded self-attention mechanism for the omics data, to generate latent representations of each. Our model architectures apply an attention mechanism to both drug and multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four of these architectures and report results for each. Our study contributes to the future of drug discovery and precision medicine by seeking to optimize the time and accuracy of drug response prediction.
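
The fusion step described, concatenating the drug and omics latent vectors and passing the result through a fully connected layer to a single IC-50 prediction, can be sketched as follows. The latent sizes and weights are random placeholders, not trained values:

```python
import random

# Sketch of the latent-fusion head: concatenate two latent vectors and
# apply one fully connected (linear) layer to produce an IC-50 estimate.
def linear(x, w, b):
    """Single fully connected unit: dot product plus bias."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def predict_ic50(drug_latent, omics_latent, w, b):
    fused = drug_latent + omics_latent   # concatenation of the two latents
    return linear(fused, w, b)

random.seed(0)
drug_latent = [random.random() for _ in range(4)]    # from the graph transformer
omics_latent = [random.random() for _ in range(4)]   # from the self-attention transformer
w = [random.uniform(-1, 1) for _ in range(8)]        # placeholder weights
pred = predict_ic50(drug_latent, omics_latent, w, 0.1)
```

In the full architectures, the single linear unit would be a multi-layer fully connected network trained end-to-end with the two transformer encoders.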

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 124
10525 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis

Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon

Abstract:

Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify those variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups.
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p <0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) to be associated with development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified to be associated with developing clinically significant pericardial effusions. High clinical suspicion and low threshold for transthoracic echo are pertinent to ensure this potentially lethal condition is not missed.
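
For a dichotomous predictor such as preoperative atrial arrhythmia, the univariate association can be expressed as an odds ratio from the 2x2 table. The counts below are back-calculated approximately from the reported percentages (3/8 = 37.5% of effusion patients, and roughly 35/400 = 8.8% of the remaining patients, had the exposure):

```python
# Odds ratio for a 2x2 exposure/outcome table.
def odds_ratio(exposed_cases, cases, exposed_controls, controls):
    a = exposed_cases                 # exposed, with outcome
    b = cases - exposed_cases         # unexposed, with outcome
    c = exposed_controls              # exposed, without outcome
    d = controls - exposed_controls   # unexposed, without outcome
    return (a / b) / (c / d)

# Approximate counts reconstructed from the reported 37.5% vs 8.8%.
or_arrhythmia = odds_ratio(exposed_cases=3, cases=8,
                           exposed_controls=35, controls=400)
```

An odds ratio well above 1 is consistent with the significant univariate association the abstract reports; the exact value from the paper's logistic model would also carry a confidence interval.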

Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage

Procedia PDF Downloads 150
10524 Competitive Advantage Challenges in the Apparel Manufacturing Industries of South Africa: Application of Porter’s Factor Conditions

Authors: Sipho Mbatha, Anne Mastament-Mason

Abstract:

South Africa's manufacturing global competitiveness was ranked 22nd (out of 38 countries), dropped to 24th in 2013, and is expected to drop further to 25th by 2018. This impacts negatively on South Africa's industrialisation project. For industrialization to be achieved through labour-intensive industries like the Apparel Manufacturing Industries of South Africa (AMISA), South Africa needs to identify and respond to factors negatively impacting the development of competitive advantage. This paper applied factor conditions from Porter’s Diamond Model (1990) to understand the various challenges facing the AMISA. Factor conditions highlighted in Porter’s model fall into two groups, namely basic and advanced factors. Two AMISA associations representing over 10,000 employees were interviewed, as was the largest Clothing, Textiles and Leather (CTL) apparel retail group, along with the government department implementing the industrialisation policy. The paper points out that while the AMISA have the basic factor conditions necessary for competitive advantage in the clothing and textiles industries, advanced factor coordination has proven to be a challenging task for the AMISA, Higher Education Institutions (HEIs), and government. Poor infrastructural maintenance has contributed to high manufacturing costs and poor quick-response capability, owing to a lack of advanced technologies. The use of Porter’s factor conditions as a tool to analyse the sector’s competitive advantage challenges and opportunities has increased knowledge regarding the factors that limit the AMISA’s competitiveness. It is therefore argued that further studies on the other Porter’s Diamond Model factors (demand conditions; firm strategy, structure and rivalry; and related and supporting industries) can be used to analyse the situation of the AMISA for the purposes of improving competitive advantage.

Keywords: compliance rule, apparel manufacturing industry, factor conditions, advanced skills, South African industrial policy

Procedia PDF Downloads 340
10523 Multi-Spectral Deep Learning Models for Forest Fire Detection

Authors: Smitha Haridasan, Zelalem Demissie, Atri Dutta, Ajita Rattani

Abstract:

Aided by the wind, all it takes is one ember and a few minutes to create a wildfire. Wildfires are growing in frequency and size due to climate change, and wildfires and their consequences are among the major environmental concerns. Every year, millions of hectares of forests are destroyed around the world, causing mass destruction and human casualties. Thus, early detection of wildfire becomes a critical component in mitigating this threat. Many computer vision-based techniques have been proposed for the early detection of forest fire using video surveillance, with several methods predicting and detecting forest fires in various spectrums, namely RGB, HSV, and YCbCr. The aim of this paper is to propose a multi-spectral deep learning model that combines information from different spectrums at intermediate layers for accurate fire detection. A heterogeneous dataset assembled from publicly available datasets is used for model training and evaluation in this study. The experimental results show that multi-spectral deep learning models can obtain an improvement of about 4.68% over those based on a single spectrum for fire detection.
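
The spectrums named are deterministic transforms of the RGB input, so a multi-spectral model can derive its extra input branches on the fly. For instance, the standard ITU-R BT.601 conversion from 8-bit RGB to YCbCr:

```python
# ITU-R BT.601 conversion from 8-bit RGB to YCbCr, the color space
# commonly used in fire-detection work to isolate luminance (Y) from
# chrominance (Cb, Cr), where flame pixels cluster distinctively.
def rgb_to_ycbcr(r, g, b):
    y  =       0.299    * r + 0.587    * g + 0.114    * b   # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b   # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(128, 128, 128)   # mid-gray: no chroma offset
```

In a multi-spectral network, each such transformed copy of the frame would feed its own convolutional branch before the intermediate-layer fusion the abstract describes.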

Keywords: deep learning, forest fire detection, multi-spectral learning, natural hazard detection

Procedia PDF Downloads 214
10522 Prediction of the Crustal Deformation of Volcán Nevado del Ruíz in the Year 2020 Using TROPOMI Tropospheric Information, DInSAR Technique, and Neural Networks

Authors: Juan Sebastián Hernández

Abstract:

The Nevado del Ruíz volcano, located on the boundary between the Departments of Caldas and Tolima in Colombia, exhibited unstable behaviour over the course of 2020. This volcanic activity led to secondary effects on the crust, making the prediction of deformations an important task for geoscientists. In this article, tropospheric variables such as evapotranspiration, UV aerosol index, carbon monoxide, nitrogen dioxide, methane, and surface temperature, among others, are used to train a set of neural networks that predict the behaviour of the resulting phase of an unwrapped interferogram obtained with the DInSAR technique, with the main objective of identifying and characterising the behaviour of the crust based on environmental conditions. For this purpose, the variables were collected, a generalised linear model was created, and a set of neural networks was trained. After training the network, validation was carried out with the test data, giving an MSE of 0.17598 and an associated r-squared of approximately 0.88454. The resulting model provided a dataset with good thematic accuracy, reflecting the behaviour of the volcano in 2020, given a set of environmental characteristics.
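
The validation scores quoted (MSE and r-squared) are computed from predicted versus observed phase values. A minimal sketch of both metrics, with toy data standing in for the interferogram phase:

```python
# Mean squared error and coefficient of determination (R-squared)
# between observed and predicted values.
def mse(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))   # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)          # total sum of squares
    return 1 - ss_res / ss_tot

# Toy phase values, not real interferogram data.
obs  = [0.10, 0.40, 0.35, 0.80, 0.90]
pred = [0.15, 0.38, 0.30, 0.75, 0.95]
```

An R-squared near 0.88, as reported, means the environmental variables explain most but not all of the variance in the DInSAR phase.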

Keywords: crustal deformation, Tropomi, neural networks (ANN), volcanic activity, DInSAR

Procedia PDF Downloads 76
10521 Across-Breed Genetic Evaluation of New Zealand Dairy Goats

Authors: Nicolas Lopez-Villalobos, Dorian J. Garrick, Hugh T. Blair

Abstract:

Many dairy goat farmers of New Zealand milk herds of mixed breed does. Simultaneous evaluation of sires and does across breed is required to select the best animals for breeding on a common basis. Across-breed estimated breeding values (EBV) and estimated producing values for 208-day lactation yields of milk (MY), fat (FY), protein (PY) and somatic cell score (SCS; LOG2(SCC) of Saanen, Nubian, Alpine, Toggenburg and crossbred dairy goats from 75 herds were estimated using a test day model. Evaluations were based on 248,734 herd-test records representing 125,374 lactations from 65,514 does sired by 930 sires over 9 generations. Averages of MY, FY and PY were 642 kg, 21.6 kg and 19.8 kg, respectively. Average SCC and SCS were 936,518 cells/ml milk and 9.12. Pure-bred Saanen does out-produced other breeds in MY, FY and PY. Average EBV for MY, FY and PY compared to a Saanen base were Nubian -98 kg, 0.1 kg and -1.2 kg; Alpine -64 kg, -1.0 kg and -1.7 kg; and Toggenburg -42 kg, -1.0 kg and -0.5 kg. First-cross heterosis estimates were 29 kg MY, 1.1 kg FY and 1.2 kg PY. Average EBV for SCS compared to a Saanen base were Nubian 0.041, Alpine -0.083 and Toggenburg 0.094. Heterosis for SCS was 0.03. Breeding values are combined with respective economic values to calculate an economic index used for ranking sires and does to reflect farm profit.
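
A first-cross prediction can be sketched by combining the across-breed effects with the heterosis estimate (midparent mean plus heterosis, a standard approximation). The milk-yield figures are those reported in the abstract, expressed as deviations from the Saanen base:

```python
# Predicting first-cross (F1) milk yield deviation from across-breed
# EBVs: midparent breed mean plus the first-cross heterosis estimate.
# Breed effects and heterosis (kg milk) are the values reported above.
breed_effect_my = {"Saanen": 0, "Nubian": -98, "Alpine": -64, "Toggenburg": -42}
heterosis_my = 29  # kg milk yield, first cross

def f1_milk_deviation(breed_a, breed_b):
    midparent = (breed_effect_my[breed_a] + breed_effect_my[breed_b]) / 2
    return midparent + heterosis_my

saanen_x_nubian = f1_milk_deviation("Saanen", "Nubian")
```

So a Saanen x Nubian first cross is predicted to yield about 20 kg less milk than a purebred Saanen, since heterosis recovers only part of the midparent deficit; the published economic index would weight such deviations by their economic values.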

Keywords: breed effects, dairy goats, milk traits, test-day model

Procedia PDF Downloads 306
10520 How Cultural Tourists Perceive Authenticity in World Heritage Historic Centers: An Empirical Research

Authors: Odete Paiva, Cláudia Seabra, José Luís Abrantes, Fernanda Cravidão

Abstract:

There is a clear ‘cult of authenticity’, at least in modern Western society, so there is a need to analyze tourists’ perception of authenticity, bearing in mind the destination, its attractions, motivations, cultural distance, and contact with other tourists. Our study seeks to investigate the relationship among cultural values, image, sense of place, perception of authenticity, and behavior intentions at World Heritage Historic Centers. From a theoretical perspective, few studies focus on the impact of cultural values, image, and sense of place on tourists’ authenticity perception and behavior intentions. The intention of this study is to help close this gap. A survey was applied to collect data from tourists visiting two World Heritage Historic Centers: Guimarães in Portugal and Cordoba in Spain. Data was analyzed in order to establish a structural equation model (SEM). Discussion centers on the implications of the model for theory and for the managerial development of tourism strategies. Recommendations for destination managers and promoters and tourist organization administrators are addressed.

Keywords: authenticity perception, behavior intentions, cultural tourism, cultural values, world heritage historic centers

Procedia PDF Downloads 287
10519 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 421
10518 Hydrological Analysis for Urban Water Management

Authors: Ranjit Kumar Sahu, Ramakar Jha

Abstract:

Urban water management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl has been done for the period 1980-2014, focusing on the years after 2000. This work presents the following: (i) time series analysis of hydrological data (ground water and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of ground water levels in the area, which is discussed briefly. Other related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed. Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.

Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change

Procedia PDF Downloads 408
10517 Designing Sustainable Building Based on Iranian's Windmills

Authors: Negar Sartipzadeh

Abstract:

Energy-conscious design, which coordinates with the Earth's ecological systems over its life cycle, has the least negative impact on the environment with the least waste of resources. Due to the increase in world population as well as the consumption of fossil fuels that causes the production of greenhouse gases and environmental pollution, mankind is looking for renewable and sustainable energies. Iranian native construction is clear evidence of energy-aware design. Our predecessors were forced to rely on natural resources and sustainable energies, as well as to confront the environmental issues that have come to prominence in the recent world. One of these endless energies is wind energy. The foundations of Iranian traditional architecture are an appropriate model for solving the contemporary environmental and energy crises. What follows in this paper is an effort to recognize and introduce the unique characteristics of Iranian architecture in the application of aerodynamic and hydraulic energies derived from the wind, which are the most common and major types of sustainable energy use in the traditional architecture of Iran. Therefore, this research attempts to offer hybrid system suggestions for the design of new constructions in a region such as Nashtifan, which has potential, by reviewing windmills and how they deal with sustainable energy sources as a model of Iranian native construction.

Keywords: renewable energy, sustainable building, windmill, Iranian architecture

Procedia PDF Downloads 398
10516 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
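
The detection rule the abstract describes amounts to thresholding the reconstruction error: inputs the autoencoder reconstructs poorly are flagged as adversarial. In this sketch a trivial stand-in reconstructor replaces the trained network, and the threshold value is arbitrary:

```python
# Reconstruction-error detector sketch. A trained autoencoder would
# replace the trivial stand-in reconstructor used here.
def mse(x, x_hat):
    """Mean squared reconstruction error between input and output."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_adversarial(x, reconstruct, threshold):
    """Flag inputs whose reconstruction error exceeds the threshold."""
    return mse(x, reconstruct(x)) > threshold

# Stand-in "autoencoder": maps everything onto the benign template [0.5, ...],
# mimicking a model that only reconstructs benign-like inputs well.
reconstruct = lambda x: [0.5] * len(x)
benign    = [0.50, 0.52, 0.48, 0.50]   # close to the learned manifold
perturbed = [0.90, 0.10, 0.95, 0.05]   # far off it -> high reconstruction error
```

In practice the threshold would be calibrated on held-out benign data, e.g. as a high percentile of the benign reconstruction errors.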

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 58
10515 Differences in Patient Satisfaction Observed between Female Japanese Breast Cancer Patients Who Receive Breast-Conserving Surgery or Total Mastectomy

Authors: Keiko Yamauchi, Motoyuki Nakao, Yoko Ishihara

Abstract:

The increase in the number of women with breast cancer in Japan has required hospitals to provide a higher quality of medicine so that patients are satisfied with the treatment they receive. However, patients’ satisfaction following breast cancer treatment has not been sufficiently studied. Hence, we investigated the factors influencing patient satisfaction following breast cancer treatment among Japanese women who underwent either breast-conserving surgery (BCS) (n = 380) or total mastectomy (TM) (n = 247). In March 2016, we conducted a cross-sectional internet survey of Japanese women with breast cancer in Japan. We assessed the following factors: socioeconomic status, cancer-related information, the role played in medical decision-making, the degree of satisfaction with the treatments received, and regret arising from the medical decision-making process. We performed logistic regression analyses with the following dependent variables: extreme satisfaction with the treatments received, and regret regarding the medical decision-making process. For both types of surgery, the odds ratio (OR) of being extremely satisfied with the cancer treatment was significantly higher among patients who did not have any regrets compared to patients who had. The OR also tended to be higher among patients who got to play their wanted role in the medical decision-making process, compared with patients who did not. In the BCS group, the OR of being extremely satisfied with the treatment was higher if, at diagnosis, the patient’s youngest child was older than 19 years, compared with patients with no children. The OR was also higher if the patient considered the stage and characteristics of their cancer significant. The OR of being extremely satisfied with the treatments was lower among patients who were not employed on a full-time basis, and among patients who considered second medical opinions and medical expenses to be significant.
These associations were not observed in the TM group. The OR of having regrets regarding the medical decision-making process was higher among patients who could not play the role they preferred in the decision-making process, and was also higher among patients who were employed on either a part-time or contractual basis. For both types of surgery, the OR was higher among patients who considered a second medical opinion to be important. Regardless of surgical type, regret regarding the medical decision-making process decreased treatment satisfaction. Patients who received breast-conserving surgery were more likely to have regrets concerning the medical decision-making process if they could not play the role they preferred in that process. In addition, factors associated with treatment satisfaction in the BCS group, but not in the TM group, included second medical opinions, medical expenses, employment status, and the age of the youngest child at diagnosis.
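As a sketch of the logistic-regression machinery behind these odds ratios, the following minimal Python example fits a single binary predictor by Newton-Raphson; the counts are invented for illustration and are not the study's data. For one binary predictor, the fitted exp(b1) reproduces the cross-product odds ratio of the underlying 2x2 table.

```python
import math

def fit_logistic(xs, ys, iters=25):
    """Fit y ~ b0 + b1*x by Newton-Raphson maximum likelihood."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0              # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0      # Fisher information entries
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
            w = p * (1.0 - p)
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01       # invert the 2x2 information matrix
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Invented counts: x = 1 means "no regret"; y = 1 means "extremely satisfied".
xs = [0] * 50 + [1] * 45
ys = [1] * 20 + [0] * 30 + [1] * 30 + [0] * 15
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)   # 3.0: the no-regret group has 3x the odds
```

With a single binary covariate the maximum-likelihood exp(b1) equals the cross-product ratio (30/15)/(20/30) = 3, which the Newton iteration recovers to machine precision.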

Keywords: medical decision making, breast-conserving surgery, total mastectomy, Japanese

Procedia PDF Downloads 127
10514 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study aims to evaluate the efficacy of the U.S. Army Corps of Engineers’ River Analysis System (HEC-RAS) for modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications, but it has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady flow hydraulic model, built in HEC-RAS, for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and were compared for both accuracy and the computational resources required. In general, the Diffusion Wave Equations results were comparable to those of the two Shallow Water Equation sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow area.
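The validation metric reported above, R², can be computed directly from paired gauge and model series as the coefficient of determination. The elevations below are invented for illustration and are not CO-OPS data.

```python
def r_squared(observed, simulated):
    """Coefficient of determination between gauge data and model output."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical hourly water-surface elevations (m) at a tide gauge.
gauge = [0.10, 0.45, 0.80, 0.95, 0.70, 0.30, -0.05, -0.20]
model = [0.12, 0.40, 0.78, 0.99, 0.66, 0.33, -0.02, -0.25]
r2 = r_squared(gauge, model)   # close to 1 when the model tracks the gauge
```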

Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 182
10513 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes

Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun

Abstract:

The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving measurement accuracy and saving calibration time. To reduce the time consumed for calibration, proper calibration gauges should be chosen and a reasonable calibration method developed. In addition, the exact optimal solution should be obtained by accurately removing the gross errors within the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove the gross errors within the calibration data. Finally, we find the optimal solution that identifies the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
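The full kinematic error model depends on the AACMM's structure, but the damped least-squares (Levenberg-Marquardt) step used for parameter identification can be sketched on a toy one-parameter fit; the model function and data here are purely illustrative, not the arm's kinematics.

```python
import math

def levenberg_marquardt(xs, ys, k=0.0, lam=1e-3, iters=50):
    """Fit y = exp(k*x) by a damped Gauss-Newton (Levenberg-Marquardt) loop."""
    def sse(p):
        return sum((y - math.exp(p * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        jtj = jtr = 0.0
        for x, y in zip(xs, ys):
            f = math.exp(k * x)
            j = x * f                  # Jacobian entry d f / d k
            jtj += j * j
            jtr += j * (y - f)
        step = jtr / (jtj + lam)       # damped normal-equation step
        if sse(k + step) < sse(k):     # accept the step: relax the damping
            k += step
            lam *= 0.5
        else:                          # reject the step: increase the damping
            lam *= 2.0
    return k

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]   # synthetic data with true k = 0.5
k_hat = levenberg_marquardt(xs, ys)
```

The accept/reject logic on the damping factor is what lets the same loop behave like gradient descent far from the optimum and like Gauss-Newton near it.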

Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration

Procedia PDF Downloads 52
10512 Sustainable Concepts Applied in the Pre-Columbian Andean Architecture in Southern Ecuador

Authors: Diego Espinoza-Piedra, David Duran

Abstract:

All architectural and land-use processes are framed in a cultural, social and geographical context. The present study analyzes the Andean cultures before the Spanish conquest in southern Ecuador, in the province of Azuay. This area has been inhabited for more than 10,000 years. The Canari and Inca cultures occupied Azuay shortly before the arrival of the Spanish conquerors. The Inca culture was settled throughout the Andes Mountains, while the Canari culture was established in the south of Ecuador, in the present-day provinces of Azuay and Canar. In contrast with its history and archeology, to the best of our knowledge, the architecture of this area has not yet been studied, because of the scarcity of surviving architectural structures. Consequently, the present research reviewed land use and culture to support architectural interpretations. The two main architectural objects in these cultures were dwellings and public buildings. In the first case, housing was conceived as temporary: a house had to stand only as long as its inhabitants lived. Houses were therefore built when a couple got married, with the whole community starting the construction through the so-called ‘minga’, or collective work. The construction materials were tree branches, reeds, agave, earth, and straw, so that when the owners aged and died, the house could easily be dismantled and demolished, and its materials became part of the land for agriculture. This cycle was repeated indefinitely. In the second case, the buildings that we can call public have been subject to erroneous interpretations. They have been defined as temples, but according to our conclusions, they were places for temporary accommodation, storage of objects and products, and, in some special cases, even astronomical observatories. These public buildings were settled along the important road system called ‘Capac-Nam’, currently declared a World Cultural Heritage site by UNESCO. The buildings had different scales at regular distances.
They were also established in special or strategic places, which together constituted a system of observatories. These observatories allowed the determination of the cycles or calendars (solar or lunar) necessary for agricultural production, as well as the observation of other natural phenomena. The few physical structures that survive are mostly reduced to foundations or fragments of walls. This study was therefore carried out after identifying the history and culture of the inhabitants of this Andean region.

Keywords: Andean, pre-Columbian architecture, Southern Ecuador, sustainable

Procedia PDF Downloads 99
10511 Unravelling the Relationship Between Maternal and Fetal ACE2 Gene Polymorphism and Preeclampsia Risk

Authors: Sonia Tamanna, Akramul Hassan, Mohammad Shakil Mahmood, Farzana Ansari, Gowhar Rashid, Mir Fahim Faisal, M. Zakir Hossain Howlader

Abstract:

Background: Preeclampsia (PE), a pregnancy-specific hypertensive disorder, significantly impacts maternal and fetal health. It is particularly prevalent in underdeveloped countries and is linked to preterm delivery and impaired fetal growth. The renin-angiotensin system (RAS) plays a crucial role in ensuring a successful pregnancy outcome, with Angiotensin-Converting Enzyme 2 (ACE2) being a key component. ACE2 converts ANG II to Ang-(1-7), offering protection against ANG II-induced stress and inflammation while regulating blood pressure and osmotic balance during pregnancy. The reduced maternal plasma ACE2 seen in preeclampsia might contribute to its pathogenesis. However, there has been a dearth of comprehensive research into the association between ACE2 gene polymorphism and preeclampsia. In the South Asian population, hypertension is strongly linked to two SNPs: rs2285666 and rs879922. These SNPs were therefore genotyped, and the possible association of maternal and fetal ACE2 gene polymorphism with preeclampsia within the Bangladeshi population was evaluated. Method: DNA was extracted from peripheral white blood cells (WBCs) using the organic method, and SNP genotyping was done via PCR-RFLP. Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated using logistic regression to determine relative risk. Result: A comprehensive case-control study was conducted on 51 PE patients and their infants, along with 56 control subjects and their infants. Maternal single nucleotide polymorphism (SNP) rs2285666 analysis revealed a strong association between the TT genotype and preeclampsia, with a four-fold increased risk in mothers (P=0.024, OR=4.00, 95% CI=1.36-11.37) compared to the ancestral CC genotype. However, the CT genotype (rs2285666) showed no significant difference (P=0.46, OR=1.54, 95% CI=0.57-4.14). Notably, no significant correlation was found in infants, regardless of their gender.
For rs879922, no significant association was observed in either mothers or infants. This pioneering study suggests that mothers carrying the ACE2 gene variant rs2285666 (TT genotype) may be at higher risk for preeclampsia, potentially influencing hypertension characteristics, whereas rs879922 does not appear to be associated with developing preeclampsia. Conclusion: This study sheds light on the role of ACE2 gene polymorphism, particularly the rs2285666 TT genotype, in maternal susceptibility to preeclampsia, while rs879922 does not appear to be linked to the risk of PE. This research contributes to our understanding of the genetic underpinnings of preeclampsia, offering insights into potential avenues for prevention and management.
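A sketch of the odds-ratio arithmetic underlying results like OR=4.00 (95% CI 1.36-11.37): from a 2x2 genotype-by-outcome table, the OR and a Woolf-type confidence interval follow directly. The counts below are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts: TT carriers vs CC carriers, PE cases vs controls.
or_, lo, hi = odds_ratio_ci(16, 8, 10, 20)   # OR = (16*20)/(8*10) = 4.0
```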

Keywords: ACE2, PCR-RFLP, preeclampsia, single nucleotide polymorphisms (SNPs)

Procedia PDF Downloads 45
10510 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent

Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi

Abstract:

An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for the accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for the accurate prediction of power consumption at large scales. The model incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Polycarbonate and Polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes, both with an MWCO of 100,000, were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and sphericity of the particles had a significant effect on membrane fouling.
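The paper's model incorporates fouling attachments and chemical and physical factors; as a much simpler point of reference, a generic resistance-in-series (Darcy-type) flux expression is often used as a baseline in fouling studies. The form and the values below are illustrative assumptions, not the authors' model.

```python
def permeate_flux(delta_p, mu, r_membrane, r_cake):
    """Darcy-type resistance-in-series flux: J = dP / (mu * (Rm + Rc))."""
    return delta_p / (mu * (r_membrane + r_cake))

# Illustrative SI values: 1 bar transmembrane pressure, water viscosity,
# clean-membrane resistance 1e12 1/m, fouling cake resistance 3e12 1/m.
flux_clean = permeate_flux(1e5, 1e-3, 1e12, 0.0)
flux_fouled = permeate_flux(1e5, 1e-3, 1e12, 3e12)   # quartered by fouling
```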

Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration

Procedia PDF Downloads 453
10509 Enhanced COVID-19 Pharmaceuticals and Microplastics Removal from Wastewater Using Hybrid Reactor System

Authors: Reda Dzingelevičienė, Vytautas Abromaitis, Nerijus Dzingelevičius, Kęstutis Baranauskis, Saulius Raugelė, Malgorzata Mlynska-Szultka, Sergej Suzdalev, Reza Pashaei, Sajjad Abbasi, Boguslaw Buszewski

Abstract:

A unique hybrid technology was developed for the removal of COVID-19-specific contaminants from wastewater. Reactor testing was performed using model water samples contaminated with COVID-19 pharmaceuticals and microplastics. Different hydraulic retention times and different concentrations of pollutants and dissolved ozone were tested. Liquid chromatography-mass spectrometry, solid-phase extraction, and surface area and porosity analyses were used to monitor the treatment efficiency and the remaining sorption capacity of the spent adsorbent. The combination of advanced oxidation and adsorption processes was found to be the most effective, removing 90-99% of molnupiravir and 89-95% of microplastic contaminants from the model wastewater. The research received funding from the European Regional Development Fund (project No. 13.1.1-LMT-K-718-05-0014) under a grant agreement with the Research Council of Lithuania (LMTLT), and was funded as part of the European Union’s measures in response to the COVID-19 pandemic.

Keywords: adsorption, hybrid reactor system, pharmaceuticals-microplastics, wastewater

Procedia PDF Downloads 63
10508 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China

Authors: Yiyuan Tao

Abstract:

Under the impact of global warming and the intensification of human activities, hydrological regimes may be altered, so that the traditional stationarity assumption is no longer satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. It is therefore of great significance to identify non-stationarity in precipitation extremes and to model precipitation extremes in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which examines the occurrence of abrupt changes, are adopted in this study. The extreme precipitation index series are fitted with non-stationary distributions selected from six widely used distribution functions (Gumbel, lognormal, Weibull, gamma, generalized gamma and exponential) by means of the time-varying moments framework of generalized additive models for location, scale and shape (GAMLSS), in which the distribution parameters are defined as functions of time.
The results indicate that: (1) trends were not significant for the WRB as a whole, but significant positive or negative trends were still observed at some stations; abrupt changes in consecutive wet days (CWD) mainly occurred in 1985, and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive or negative trends, the GAMLSS models capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
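As a sketch of what a time-varying location parameter implies for design quantiles, assume a Gumbel fit (one of the six candidate distributions above) with a linear trend in the location parameter; the parameter values below are invented for illustration.

```python
import math

def gumbel_quantile(p, mu, beta):
    """Gumbel quantile: q_p = mu - beta * ln(-ln p)."""
    return mu - beta * math.log(-math.log(p))

def nonstationary_quantile(p, t, mu0, mu1, beta):
    """GAMLSS-style linear location trend: mu(t) = mu0 + mu1 * t."""
    return gumbel_quantile(p, mu0 + mu1 * t, beta)

# Illustrative parameters: the p = 0.99 daily precipitation quantile (mm)
# with the location parameter drifting upward by 0.2 mm per year.
q_start = nonstationary_quantile(0.99, 0, mu0=40.0, mu1=0.2, beta=8.0)
q_end = nonstationary_quantile(0.99, 50, mu0=40.0, mu1=0.2, beta=8.0)
```

Under a stationary fit the design quantile is a single number; here it drifts by mu1 millimetres per year, which is exactly the gap between stationary and nonstationary designs that the abstract highlights.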

Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin

Procedia PDF Downloads 101
10507 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India

Authors: Richa Raje, Anumol Antony

Abstract:

Play is the foremost stepping stone of childhood development. It is an essential aspect of a child’s development and learning because it creates meaningful, enduring connections to the environment and improves children’s performance. Children’s proficiencies keep changing as they grow. Novelty in activities kindles the senses, fuels the love of exploration, helps overcome linguistic barriers, and supports physiological development, allowing children to discover their own capabilities, spontaneity, curiosity, cognitive skills, and creativity while learning through play. This paper aims to understand learning through play, the essential underpinning of the outdoor play area. It also assesses the current trend of playground design, in which play areas are merely filled with equipment. It attempts to derive a relation between the natural environment, children’s activities, and the emotions and senses that can be evoked in the process. A major concern with outdoor play areas is that they are limited to a site with similar kinds of equipment, making play highly regimented and monotonous. This problem is often compounded by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected both in terms of design that supports learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social and psychological development of the young. Currently, the play space has been condensed to an enclosed playground, driveway or backyard, which confines children’s capability to push beyond the boundaries set for them. The paper emphasizes a study of children aged 5 to 11 years, in which their behaviors during interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand children’s environments and how variedly they perceive and use them.
A higher degree of affordance shall form the basis for designing the activities suitable for play spaces. It was observed during play that children chose certain spaces of interest, the majority of them natural, over artificial equipment. Activities like rolling on the ground, jumping from a height, molding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity for nature. Therefore, we as designers need to take a cue from their behavior and practices to be able to design meaningful spaces for them, so that children get the freedom to test their limits.

Keywords: children, landscape design, learning environment, nature and play, outdoor play

Procedia PDF Downloads 105
10506 In silico Repopulation Model of Various Tumour Cells during Treatment Breaks in Head and Neck Cancer Radiotherapy

Authors: Loredana G. Marcu, David Marcu, Sanda M. Filip

Abstract:

Advanced head and neck cancers are aggressive tumours, which require aggressive treatment. Treatment efficiency is often hindered by cancer cell repopulation during radiotherapy, which is due to various mechanisms triggered by the loss of tumour cells and involves both stem and differentiated cells. The aim of the current paper is to present in silico simulations of radiotherapy schedules on a virtual head and neck tumour grown with biologically realistic kinetic parameters. Using the linear quadratic formalism of cell survival after radiotherapy, altered fractionation schedules employing various treatment breaks for normal tissue recovery are simulated, and repopulation mechanisms are implemented in order to evaluate the impact of the various cancer cell contributions on tumour behaviour during irradiation. The model has shown that the timing of treatment breaks is an important factor influencing tumour control in rapidly proliferating tissues such as squamous cell carcinomas of the head and neck. Furthermore, not only stem cells but also differentiated cells, via the mechanism of abortive division, can contribute to malignant cell repopulation during treatment.
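The linear quadratic formalism mentioned above, combined with a simple exponential-regrowth repopulation term, can be sketched as follows; the radiobiological parameters are assumed typical values for illustration, not the paper's.

```python
import math

def surviving_fraction(n, d, alpha, beta):
    """Linear-quadratic cell survival after n fractions of dose d (Gy):
    SF = exp(-n * (alpha*d + beta*d^2))."""
    return math.exp(-n * (alpha * d + beta * d * d))

def with_repopulation(sf, t_total, t_kick, t_double):
    """Multiply survival by exponential regrowth after the kick-off time."""
    if t_total <= t_kick:
        return sf
    return sf * 2.0 ** ((t_total - t_kick) / t_double)

# Assumed illustrative parameters: alpha = 0.3 /Gy, alpha/beta = 10 Gy,
# 35 x 2 Gy over 46 days, repopulation kick-off at day 21,
# effective doubling time 5 days.
sf = surviving_fraction(35, 2.0, 0.3, 0.03)
sf_repop = with_repopulation(sf, 46, 21, 5)   # 2^5 = 32x more cells survive
```

Lengthening the overall treatment time (for example through treatment breaks) increases only the repopulation factor, which is why the timing of breaks matters so much for tumour control.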

Keywords: radiation, tumour repopulation, squamous cell carcinoma, stem cell

Procedia PDF Downloads 252
10505 Comparison between the Efficiency of Heterojunction Thin Film InGaP/GaAs/Ge and InGaP/GaAs Solar Cells

Authors: F. Djaafar, B. Hadri, G. Bachir

Abstract:

This paper presents the design parameters for a thin film 3J InGaP/GaAs/Ge solar cell with a simulated maximum efficiency of 32.11%, obtained using Tcad Silvaco. Design parameters include the doping concentration, molar fraction, layer thicknesses and tunnel junction characteristics. An initial dual-junction InGaP/GaAs model of a previously published heterojunction cell was simulated in Tcad Silvaco to accurately predict solar cell performance. To improve the solar cell’s performance, we fixed the meshing, material properties, models and numerical methods, while layer thickness and doping concentration were taken as variables. We first simulated the InGaP/GaAs dual-junction cell, where changing the doping concentrations and thicknesses showed an increase in efficiency. Next, a triple-junction InGaP/GaAs/Ge cell was modeled by adding a Ge layer to the previous dual-junction InGaP/GaAs model, with an InGaP/GaAs tunnel junction.

Keywords: heterojunction, modeling, simulation, thin film, Tcad Silvaco

Procedia PDF Downloads 347
10504 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading

Authors: Peter Shi

Abstract:

Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy low and sell high. The thresholds for “low” and “high” are precisely derived using a max-min operation on Bayes’ error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the buy-low, sell-high algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.
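A minimal sketch of a buy-low, sell-high rule: in the paper the thresholds come from a max-min operation on Bayes' error, whereas here they are fixed by hand and the price path is invented, purely to show the mechanics.

```python
def buy_low_sell_high(prices, low, high):
    """Trade one share: buy when price <= low, sell when price >= high."""
    cash, shares = 0.0, 0
    for p in prices:
        if shares == 0 and p <= low:
            cash -= p          # buy one share
            shares = 1
        elif shares == 1 and p >= high:
            cash += p          # sell the share
            shares = 0
    return cash + shares * prices[-1]   # mark any open position to market

# Invented mean-reverting price path; thresholds chosen by hand.
prices = [100, 92, 97, 108, 103, 90, 95, 107, 102]
profit = buy_low_sell_high(prices, low=95, high=105)   # two round trips
```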

Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market

Procedia PDF Downloads 52
10503 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach

Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo

Abstract:

The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined and the necessary conditions are derived. In addition, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, a stochastic approximation approach is employed to update the optimal control policy. Once a given tolerance is met, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The results show the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
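The stochastic approximation update described above is of Robbins-Monro type; a minimal sketch on a scalar output-error problem (with an invented noisy gradient, not the LQG system itself) looks like this.

```python
import random

def robbins_monro(observe_grad, theta=0.0, steps=2000, c=0.5):
    """Stochastic approximation: theta <- theta - a_k * g_k,
    with decreasing gains a_k = c / (k + 1) and g_k a noisy gradient."""
    for k in range(steps):
        a_k = c / (k + 1)
        theta -= a_k * observe_grad(theta)
    return theta

# Minimise J(theta) = E[(y - theta)^2] with y = 3 + noise; each call
# returns one noisy observation of the gradient -2*(y - theta).
random.seed(0)
def noisy_grad(theta):
    y = 3.0 + random.gauss(0.0, 1.0)
    return -2.0 * (y - theta)

theta_hat = robbins_monro(noisy_grad)   # converges near the target 3.0
```

The decreasing gain sequence is what averages out the gradient noise; with a constant gain the iterate would keep fluctuating around the optimum instead of settling.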

Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation

Procedia PDF Downloads 159
10502 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to effectively dispose of excess waste. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line, and balancing optimization can make the line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) considering robotic efficiency differences is established to minimize cycle time, energy consumption and hazard index and to calculate their optimal global values. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, the MIPM and HNSGA-II are applied to an actual mixed disassembly case involving two types of computers. The comparison of the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.
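At the core of NSGA-II's sorting is the Pareto-dominance test over the three objectives named above (cycle time, energy consumption, hazard index); a minimal sketch, with invented candidate solutions, is:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimised here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated solutions, as NSGA-II's first front does."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Invented (cycle time, energy, hazard) triples for candidate line balances.
candidates = [(40, 120, 5), (42, 110, 6), (40, 130, 5), (45, 140, 7)]
front = pareto_front(candidates)   # the trade-off set offered to the decision-maker
```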

Keywords: waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II

Procedia PDF Downloads 207
10501 Modelling of Organic Rankine Cycle for Waste Heat Recovery Process in Supercritical Condition

Authors: Jahedul Islam Chowdhury, Bao Kha Nguyen, David Thornhill, Roy Douglas, Stephen Glover

Abstract:

The Organic Rankine Cycle (ORC) is the most commonly used method for recovering energy from small heat sources. The investigation of the ORC in supercritical conditions is a new research area, as it has the potential to generate high power and thermal efficiency in a waste heat recovery system. This paper presents a steady-state ORC model in supercritical conditions and its simulation with a real engine’s exhaust data. The key component of the ORC, the evaporator, is modelled using the finite volume method; models of all the other components of the waste heat recovery system, such as the pump, expander and condenser, are also presented. The aim of this paper is to investigate the effects of mass flow rate and evaporator outlet temperature on the efficiency of the waste heat recovery process. Additionally, the necessity of maintaining an optimum evaporator outlet temperature is investigated. Simulation results show that modification of the mass flow rate is the key to changing the operating temperature at the evaporator outlet.

Keywords: Organic Rankine cycle, supercritical condition, steady state model, waste heat recovery

Procedia PDF Downloads 386