Search results for: performance prism model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25977

15267 Thermomechanical Simulation of Equipment Subjected to an Oxygen Pressure and Heated Locally by the Ignition of Small Particles

Authors: Khaled Ayfi

Abstract:

In industrial oxygen systems at high temperature and high pressure, contamination by solid particles is one of the principal causes of ignition hazards. Indeed, the gas can sweep away particles generated by corrosion inside the pipes or during maintenance operations (welding residues, careless disassembly, etc.) and produce accumulations at places where the gas velocity decreases. Moreover, in such an oxygen-rich (oxidizing) environment, particles are highly reactive and can ignite system walls more readily and at higher temperatures. Oxidation-based thermal effects are responsible for a loss of mechanical properties, leading to the destruction of the pressure equipment wall. To deal with this problem, a numerical analysis is performed on a sample representative of a wall subjected to pressure and temperature. Validation and analysis are carried out by comparing the numerical simulation results to experimental measurements. More precisely, in this work, we propose a numerical model that describes the thermomechanical behavior of thin metal disks under pressure and subjected to laser heating. The model takes geometric and material nonlinearity into account and has been validated by comparison of simulation results with experimental measurements.

Keywords: ignition, oxygen, numerical simulation, thermomechanical behavior

Procedia PDF Downloads 95
15266 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, namely AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch value, batch size, and convolutional filter size against input image size. A set of experiments was conducted to quantify the effectiveness of the selected parameters using two implementation approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch sizes (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was then evaluated against the image sizes under test (64, 96, 128, 180 and 224), which gave us insight into the relationship between convolutional filter size and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments and showed efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. The batch size evaluation shows that a larger batch size slightly decreases classification accuracy compared to a small one.
For example, a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while reducing the number of epochs to one makes the accuracy drop to 74%. At the other extreme, increasing the batch size to 200 lowers the accuracy at the 11th epoch to 86.5%, and to 63% when using only one epoch. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produced an accuracy of 70.43%. The final experiment, on image size, shows that accuracy improves with image size, although the gain comes at a considerable computational cost. These conclusions open opportunities for better classification performance in various applications such as planetary remote sensing.
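The epoch/batch-size sweep described above amounts to a plain grid search. The following is a minimal sketch, not the authors' code: `train_and_evaluate` is a hypothetical response surface shaped to echo the reported trends (accuracy rising with epochs, saturating around epoch 11, and degrading slightly with very large batches), standing in for actual AlexNet training on the four datasets.

```python
from itertools import product

# Hypothetical stand-in for training the network and returning validation
# accuracy; in the real study each call would train AlexNet on one dataset.
def train_and_evaluate(batch_size, epochs):
    base = 0.60 + 0.03 * min(epochs, 11)   # accuracy rises, saturates at epoch 11
    penalty = 0.0002 * batch_size          # large batches cost a little accuracy
    return round(min(base - penalty, 0.99), 4)

def grid_search(batch_sizes, epoch_values):
    results = {}
    for bs, ep in product(batch_sizes, epoch_values):
        results[(bs, ep)] = train_and_evaluate(bs, ep)
    best = max(results, key=results.get)   # configuration with top accuracy
    return results, best

results, best = grid_search([32, 64, 128, 200], [1, 5, 11])
```

Swapping the stub for a real training loop turns this into the experiment protocol the paper describes.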

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 156
15265 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model

Authors: Gholba Niranjan Dilip, Anil Kumar

Abstract:

Accurate and reliable mapping of oil palm plantations, together with a census of individual palm trees, is a huge challenge. This study addresses that challenge and develops an optimized solution by implementing deep learning techniques on remote sensing data. The oil palm is a very important tropical crop; to improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximation, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate bounding boxes for each oil palm and to count the palms with good accuracy on the panchromatic images. The detection yielded an F-score of 83.16% on seven different images. The detections were buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations and also gave maps of their location, thereby completing the automated census with fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage, and it helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.
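The counting and accuracy-evaluation steps can be sketched in a few lines; the detections below are invented for illustration (the actual study obtained them from the SSD model), and the F-score computation matches the standard precision/recall definition.

```python
def count_palms(detections, score_threshold=0.5):
    """Count detections (box, score) whose confidence clears the threshold."""
    return sum(1 for _box, score in detections if score >= score_threshold)

def f_score(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical detections: ((xmin, ymin, xmax, ymax), confidence)
detections = [((10, 10, 30, 30), 0.92),
              ((40, 12, 60, 32), 0.81),
              ((70, 15, 90, 35), 0.40)]  # below threshold, not counted
n_palms = count_palms(detections)
```

In the study, counts like `n_palms` per tile, compared against ground truth, feed the reported F-score.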

Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector

Procedia PDF Downloads 152
15264 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop, whereas the commonly used training with quantization is done on the full labeled dataset and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures such as MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize training with quantization by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned in several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
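The per-filter threshold idea can be sketched in a few lines of NumPy. This is an illustrative reimplementation under assumed shapes, not the authors' released code: the paper trains the thresholds by gradient descent, whereas here they are simply initialized from the per-filter maximum absolute weight.

```python
import numpy as np

def quantize_per_filter(weights, thresholds, bits=8):
    """Symmetric uniform quantization with one clipping threshold per filter.

    weights:    (n_filters, n_inputs) array
    thresholds: (n_filters,) clipping thresholds (trainable in the paper)
    """
    qmax = 2 ** (bits - 1) - 1
    scale = thresholds[:, None] / qmax              # step size per filter
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                                # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 16))
# Simple starting point: per-filter max |w|; the paper trains these instead.
thresholds = np.abs(w).max(axis=1)
w_q = quantize_per_filter(w, thresholds)
err = np.abs(w - w_q).max()                         # worst-case rounding error
```

Making `thresholds` a trainable tensor and minimizing the task loss on the ∼10% subset recovers the spirit of the proposed fine-tuning.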

Keywords: distillation, machine learning, neural networks, quantization

Procedia PDF Downloads 305
15263 Investigation of the Mechanical Performance of Hot Mix Asphalt Modified with Crushed Waste Glass

Authors: Ayman Othman, Tallat Ali

Abstract:

The continual increase in generated waste materials such as glass has led to many environmental problems. Using crushed waste glass in hot mix asphalt paving has been thought of as an alternative to landfill disposal and recycling. This paper discusses the possibility of utilizing crushed waste glass as part of the fine aggregate in hot mix asphalt in Egypt. This is done through an evaluation of the mechanical properties of asphalt concrete mixtures mixed with waste glass, and by determining the appropriate glass content that can be adopted in asphalt pavement. Four asphalt concrete mixtures with various glass contents, namely 0%, 4%, 8% and 12% by weight of the total mixture, were studied. The evaluation of mechanical properties includes Marshall stability, indirect tensile strength, fracture energy and unconfined compressive strength tests. Laboratory testing revealed an enhancement in both compressive strength and the Marshall stability test parameters when crushed glass was added to the asphalt concrete mixtures. This enhancement was accompanied by a very slight reduction in both indirect tensile strength and fracture energy when glass contents up to 8% were used, while adding more than 8% glass caused a sharp reduction in both. The test results also showed a reduction in the optimum asphalt content when waste glass was used. Measurements of the heat loss rate of asphalt concrete mixtures mixed with glass revealed their ability to hold heat longer than conventional mixtures. This can have useful application in asphalt paving during cold weather or when a long period of post-mix transportation is needed.

Keywords: waste glass, hot mix asphalt, mechanical performance, indirect tensile strength, fracture energy, compressive strength

Procedia PDF Downloads 299
15262 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a time-consuming and labor-intensive manual analysis process. In our approach, the sentences of a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that an automatic analytical tool for move structures will help non-native speakers and novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a set of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, we process each document of the corpus one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches 56%.
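A minimal multinomial naive Bayes tagger over a toy five-sentence corpus illustrates the kind of Bayesian model involved; the corpus, word features, and class priors here are hypothetical stand-ins for the richer features and CiteSeerX subset used in the study.

```python
from collections import Counter, defaultdict
import math

class NaiveBayesMoveTagger:
    """Tiny multinomial naive Bayes for B-P-M-R-C move tagging."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.tag_counts = Counter()
        self.vocab = set()

    def train(self, sentences):            # sentences: [(text, tag), ...]
        for text, tag in sentences:
            self.tag_counts[tag] += 1
            for w in text.lower().split():
                self.word_counts[tag][w] += 1
                self.vocab.add(w)

    def tag(self, text):
        words = text.lower().split()
        best_tag, best_lp = None, float("-inf")
        total = sum(self.tag_counts.values())
        for t in self.tag_counts:
            lp = math.log(self.tag_counts[t] / total)       # log prior
            denom = sum(self.word_counts[t].values()) + len(self.vocab)
            for w in words:                                  # Laplace smoothing
                lp += math.log((self.word_counts[t][w] + 1) / denom)
            if lp > best_lp:
                best_tag, best_lp = t, lp
        return best_tag

corpus = [("previous studies have examined", "B"),
          ("this paper aims to propose", "P"),
          ("we collected samples and measured", "M"),
          ("the accuracy reached a high value", "R"),
          ("we conclude the approach is promising", "C")]
tagger = NaiveBayesMoveTagger()
tagger.train(corpus)
```

The iterative update described in the paper corresponds to repeatedly calling `train` with newly self-labeled documents.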

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 317
15261 Development of a Microfluidic Device for Low-Volume Sample Lysis

Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman

Abstract:

We developed a microchip device that uses surface acoustic waves for the rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focused angle that generates high-intensity acoustic beams converging 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area: the high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming, varying the electrical power, droplet volume, glass particle size, particle concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: an electrical power of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. The integration of a surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
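The regression step for ranking parameter impacts can be sketched with ordinary least squares on standardized features. The table below is invented for illustration, and the authors' actual model may be nonlinear; the point is only that standardizing the columns makes the coefficient magnitudes comparable across units.

```python
import numpy as np

# Invented pilot-experiment table: power (W), droplet volume (µl),
# particle size (µm), lysis time (min) -> lysis efficiency (%)
X = np.array([[1.0, 10, 5, 2],
              [1.5, 10, 10, 3],
              [2.0, 20, 5, 4],
              [2.5, 20, 10, 5],
              [3.0, 30, 20, 5],
              [2.5, 20, 5, 5]], dtype=float)
y = np.array([40.0, 52.0, 68.0, 85.0, 70.0, 90.0])

# Standardize so coefficient magnitudes are comparable across units
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(len(y)), Xs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
impact = dict(zip(["power", "volume", "size", "time"], np.abs(coef[1:])))
```

Ranking `impact` by magnitude mirrors the paper's per-parameter importance analysis.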

Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet

Procedia PDF Downloads 65
15260 Consumers’ Perceptions of Non-Communicable Diseases and Perceived Product Value Impacts on Healthy Food Purchasing Decisions

Authors: Khatesiree Sripoothon, Usanee Sengpanich, Rattana Sittioum

Abstract:

The objective of this study is to examine the factors influencing consumer purchasing decisions about healthy food. The model consists of two latent variables: consumer perception relating to NCDs and consumer perceived product value. The study was conducted in the northern provinces of Thailand, which are popular with tourists and have received government support for health tourism. A survey was used as the data collection method, and the questionnaire was administered to 385 tourists. An accidental sampling method was used to identify the sample. Frequency, percentage, mean, and a structural equation model were used to analyze the data. All factors had a significant positive influence on healthy food purchasing decisions (p<0.01) and together explained 46.2% of the variance in healthy food purchasing decisions (R²=0.462). These findings underline the supposition that consumer perceptions of NCDs and perceived product value are key variables that strengthen the competitive position of health-friendly business entrepreneurs. Moreover, they may help reduce the country's public health costs for treating patients with NCDs in Thailand.

Keywords: healthy food, perceived product value, perception of non-communicable diseases, purchasing decisions

Procedia PDF Downloads 146
15259 A Training Perspective for Sustainability and Partnership to Achieve Sustainable Development Goals in Sub-Saharan Africa

Authors: Nwachukwu M. A., Nwachukwu J. I., Anyanwu J., Emeka U., Okorondu J., Acholonu C.

Abstract:

The actualization of the 17 Sustainable Development Goals (SDGs) conceived by the United Nations in 2015 is a global challenge that may not be feasible in sub-Saharan Africa by the year 2030 unless universities play a committed role. This is because there is a need to educate people in the region about the concepts of sustainability and sustainable development in order to effect the desired change. This sensitization paper presents a model of intervention and curricular planning to advance understanding and knowledge of the SDGs. The proposed Model Center for Sustainability Studies (MCSS) will enable partnerships with institutions in Africa and in advanced nations, thereby creating a global network for sustainability studies not currently found in sub-Saharan Africa. The MCSS will train and certify public servants, government agencies, policymakers, entrepreneurs, personnel from organizations, and students on aspects of the SDGs and sustainability science. There is also a need to add sustainability knowledge to environmental education, to make environmental education a compulsory course in higher institutions, and to make it a secondary school certificate examination subject in sub-Saharan Africa. The MCSS has 11 training modules that can be replicated anywhere in the world.

Keywords: sustainability, higher institutions, training, SDGs, collaboration, sub-Saharan Africa

Procedia PDF Downloads 75
15258 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics

Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki

Abstract:

A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with a heat generation of approximately 500 W/cm². The uni-directional porous copper, with the pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to then discharge through the grooves. In order to minimize the cold plate, a double-flow-channel concept is introduced for its design. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharge, and is attached onto the top surface of the porous copper plate, which is 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is then discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate described above and a heat transfer copper block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface, 10 mm in diameter, to which the porous copper is soldered. The grooves are fabricated like latticework; their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated in a steady state. In this experiment, the flow rate is 0.5 L/min, the flow velocity at each nozzle is 0.27 m/s, and the liquid inlet temperature is 60 °C.
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without it, though the pressure loss with the porous copper is also higher. In the two-phase heat transfer regime, the critical heat flux increases by approximately 35% with the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow alone. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data demonstrate the high potential of the cold plate with uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving.
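The thermocouple extrapolation mentioned above is, in essence, a linear fit combined with Fourier's law for 1-D steady conduction. The sketch below uses assumed depths, readings, and conductivity, not the authors' measured values:

```python
import numpy as np

def surface_temperature_and_flux(depths_mm, temps_c, k_w_mk):
    """Fit T(x) linearly over thermocouple depths (mm below the surface)
    and extrapolate to x = 0; with 1-D steady conduction, Fourier's law
    gives the conducted heat flux magnitude as k * dT/dx."""
    x_m = np.asarray(depths_mm, dtype=float) / 1000.0
    gradient, intercept = np.polyfit(x_m, temps_c, 1)   # dT/dx, T(0)
    return intercept, k_w_mk * gradient                 # °C, W/m^2

# Hypothetical readings at 2, 5 and 8 mm below the heat transfer surface
T_surface, q = surface_temperature_and_flux(
    [2, 5, 8], [110.0, 140.0, 170.0], k_w_mk=390.0)     # copper
```

With these invented numbers the extrapolated surface temperature is 90 °C and the flux 3.9 MW/m² (390 W/cm²), the same order of magnitude as the inverter heat load quoted above.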

Keywords: cooling, cold plate, uni-directional porous media, heat transfer

Procedia PDF Downloads 285
15257 Model Based Fault Diagnostic Approach for Limit Switches

Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak

Abstract:

The degree of freedom relates to our capability to observe or model the energy paths within a system. Modeling a higher number of energy paths gives a higher degree of freedom, but it increases the time and modeling complexity, rendering the model useless given today's need for a minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, more sensors are required. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of multi-sensor systems were carried out, and actual sensor data is used to test this generic framework. We investigate how the proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.

Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space

Procedia PDF Downloads 593
15256 The Impact of Voluntary Disclosure Level on the Cost of Equity Capital in Tunisian's Listed Firms

Authors: Nouha Ben Salah, Mohamed Ali Omri

Abstract:

This paper examines the association between disclosure level and the cost of equity capital in Tunisian listed firms. The relation is tested using two models. The first tests the relation directly by regressing firm-specific estimates of the cost of equity capital on market beta, firm size, and a measure of disclosure level. The second tests the relation by introducing information asymmetry as a mediator variable; this model follows Baron and Kenny (1986), who demonstrate the role of mediator variables in general. Based on a sample of 21 non-financial Tunisian listed firms over the period from 2000 to 2004, the results show that greater disclosure is associated with a lower cost of equity capital. However, the results for the indirect relationship indicate a significant positive association between the level of voluntary disclosure and information asymmetry, and a significant negative association between information asymmetry and the cost of equity capital, in contradiction with our predictions. Perhaps this result is due to bias in the measure of information asymmetry.
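The Baron and Kenny mediation logic can be sketched on synthetic data; the data-generating process and all coefficients below are hypothetical and chosen only to show the sequence of regressions (total effect, X on mediator, then outcome on mediator and X together).

```python
import numpy as np

def ols(y, *xs):
    """OLS slope coefficients (the intercept is dropped from the return value)."""
    A = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

rng = np.random.default_rng(1)
n = 500
disclosure = rng.normal(size=n)
# Hypothetical process: more disclosure lowers information asymmetry;
# higher asymmetry raises the cost of equity capital.
asymmetry = -0.8 * disclosure + rng.normal(scale=0.3, size=n)
cost_of_equity = 0.6 * asymmetry + rng.normal(scale=0.3, size=n)

c = ols(cost_of_equity, disclosure)[0]                   # step 1: total effect
a = ols(asymmetry, disclosure)[0]                        # step 2: X -> mediator
b, c_prime = ols(cost_of_equity, asymmetry, disclosure)  # steps 3-4
```

Mediation is indicated when `a` and `b` are significant and the direct effect `c_prime` shrinks relative to the total effect `c`.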

Keywords: cost of equity capital, voluntary disclosure, information asymmetry, Tunisian listed non-financial firms

Procedia PDF Downloads 502
15255 Glorification Trap in Combating Human Trafficking in Indonesia: An Application of Three-Dimensional Model of Anti-Trafficking Policy

Authors: M. Kosandi, V. Susanti, N. I. Subono, E. Kartini

Abstract:

This paper discusses the risk of a glorification trap in combating human trafficking, as shown in the case of Indonesia. Based on research into Indonesia's combat against trafficking in 2017-2018, this paper shows a tendency for misinterpretation and misapplication of the Indonesian anti-trafficking law to slide into misuse of the law for glorification, creating an image of a certain level of achievement in combating human trafficking. The objective of this paper is to explain the persistent occurrence of human trafficking crimes despite the significant progress of the Indonesian government's anti-trafficking efforts. The research was conducted in 2017-2018 using a qualitative approach through observation, in-depth interviews, discourse analysis, and document study, applying the three-dimensional model for analyzing human trafficking in the source country. This paper argues that the drive for glorification of achievement in the combat against trafficking has trapped the Indonesian government in a loop of misinterpretation, misapplication, and misuse of the anti-trafficking law. In turn, this crime against humanity remains at a high level and tends to increase in Indonesia.

Keywords: human trafficking, anti-trafficking policy, transnational crime, source country, glorification trap

Procedia PDF Downloads 154
15254 Energy Efficiency and Sustainability Analytics for Reducing Carbon Emissions in Oil Refineries

Authors: Gaurav Kumar Sinha

Abstract:

The oil refining industry, significant in its energy consumption and carbon emissions, faces increasing pressure to reduce its environmental footprint. This article explores the application of energy efficiency and sustainability analytics as crucial tools for reducing carbon emissions in oil refineries. Through a comprehensive review of current practices and technologies, this study highlights innovative analytical approaches that can significantly enhance energy efficiency. We focus on the integration of advanced data analytics, including machine learning and predictive modeling, to optimize process controls and energy use. These technologies are examined for their potential to not only lower energy consumption but also reduce greenhouse gas emissions. Additionally, the article discusses the implementation of sustainability analytics to monitor and improve environmental performance across various operational facets of oil refineries. We explore case studies where predictive analytics have successfully identified opportunities for reducing energy use and emissions, providing a template for industry-wide application. The challenges associated with deploying these analytics, such as data integration and the need for skilled personnel, are also addressed. The paper concludes with strategic recommendations for oil refineries aiming to enhance their sustainability practices through the adoption of targeted analytics. By implementing these measures, refineries can achieve significant reductions in carbon emissions, aligning with global environmental goals and regulatory requirements.

Keywords: energy efficiency, sustainability analytics, carbon emissions, oil refineries, data analytics, machine learning, predictive modeling, process optimization, greenhouse gas reduction, environmental performance

Procedia PDF Downloads 15
15253 Monitoring Prospective Sites for Water Harvesting Structures Using Remote Sensing and Geographic Information Systems-Based Modeling in Egypt

Authors: Shereif. H. Mahmoud

Abstract:

Egypt has limited water resources and will be under water stress by the year 2030. Egypt should therefore consider natural and non-conventional water resources to overcome this problem; rainwater harvesting (RWH) is one solution. This paper presents a geographic information system (GIS)-based decision support system (DSS) methodology that uses remote sensing data, field surveys, and GIS to identify potential RWH areas. The inputs to the DSS include maps of rainfall surplus, slope, potential runoff coefficient (PRC), land cover/use, and soil texture; the output is a map showing potential sites for RWH. The identification of suitable RWH sites was implemented in the ArcGIS model environment using the Model Builder of ArcGIS 10.1. Based on an analytical hierarchy process (AHP) analysis taking five layers into account, the spatial extents of RWH suitability areas were identified using multi-criteria evaluation (MCE). The suitability model generated a suitability map for RWH with four suitability classes: excellent, moderate, poor, and unsuitable. The spatial distribution of the suitability map showed that the areas with excellent suitability for RWH are concentrated in the northern part of Egypt. On average, 3.24% of the total area has excellent or good suitability for RWH, while 45.04% and 51.48% of the total area are moderately suitable and unsuitable, respectively. The majority of the areas with excellent suitability have slopes between 2 and 8% and are intensively cultivated. The major soil type in the excellently suitable areas is loam, and rainfall ranges from 100 up to 200 mm. The technique was validated by comparing the locations of existing RWH structures with the generated suitability map using the proximity analysis tool of ArcGIS 10.1. The result shows that most existing RWH structures are categorized as successful.
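The AHP weighting step can be sketched as a principal-eigenvector computation on a pairwise-comparison matrix; the comparison values below are hypothetical, not those used in the study, and the consistency ratio follows Saaty's usual CR < 0.1 rule of thumb.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights via the principal eigenvector of a pairwise
    comparison matrix, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                   # Perron (principal) eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                               # normalize to priority weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise judgements over the five layers:
# rainfall, slope, runoff coefficient (PRC), land cover/use, soil texture
M = [[1,     3,     3,     5,     7],
     [1 / 3, 1,     1,     3,     5],
     [1 / 3, 1,     1,     3,     5],
     [1 / 5, 1 / 3, 1 / 3, 1,     3],
     [1 / 7, 1 / 5, 1 / 5, 1 / 3, 1]]
weights, cr = ahp_weights(M)
```

The resulting `weights` would multiply the reclassified layer rasters in the MCE overlay.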

Keywords: rainwater harvesting (RWH), geographic information system (GIS), analytical hierarchy process (AHP), multi-criteria evaluation (MCE), decision support system (DSS)

Procedia PDF Downloads 345
15252 The Cost of Non-Communicable Diseases in the European Union: A Projection towards the Future

Authors: Desiree Vandenberghe, Johan Albrecht

Abstract:

Non-communicable diseases (NCDs) are responsible for the vast majority of deaths in the European Union (EU) and represent a large share of total health care spending. A future increase in this health and financial burden is likely to be driven by population ageing, lifestyle changes, and technological advances in medicine. Without adequate prevention measures, this burden can severely threaten population health and economic development. To tackle this challenge, a correct assessment of the current burden of NCDs is required, as well as a projection of potential increases in this burden. The contribution of this paper is to offer perspective on the evolution of the NCD burden towards the future and to give an indication of the potential of prevention policy. A non-homogeneous semi-Markov model for the EU was constructed, which allowed a projection of the cost burden for the four main NCDs (cancer, cardiovascular disease, chronic respiratory disease and diabetes mellitus) towards 2030 and 2050. This simulation is based on multiple baseline scenarios that vary demand and supply factors such as health status, population structure, and technological advances. Finally, to assess the potential of preventive measures to curb the cost explosion of NCDs, a simulation is executed that includes increased efforts in preventive health care. According to the Markov model, by 2030 and 2050, total costs (direct and indirect) in the EU could increase by 30.1% and 44.1% respectively, compared to 2015 levels. An ambitious prevention policy framework for NCDs will be required if the EU wants to meet this challenge of rising costs. To conclude, significant cost increases due to non-communicable diseases are likely to occur due to demographic and lifestyle changes. Nevertheless, an ambitious prevention program throughout the EU can help make this cost burden manageable for future generations.
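A drastically simplified, homogeneous three-state Markov cost projection conveys the mechanics of such a model; all transition probabilities and costs below are invented for illustration and are far coarser than the paper's non-homogeneous semi-Markov model, whose rates vary with time and scenario.

```python
import numpy as np

# Hypothetical yearly transition matrix over three states
# (healthy, living with an NCD, dead); dead is absorbing.
P = np.array([[0.96, 0.03, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
annual_cost = np.array([500.0, 8000.0, 0.0])   # per-person cost by state

def project_costs(P, cost, start, years):
    """Accumulate expected per-person cost over a horizon by iterating
    the state distribution through the transition matrix."""
    state = np.asarray(start, dtype=float)
    total = 0.0
    for _ in range(years):
        total += state @ cost      # cost incurred in the current year
        state = state @ P          # advance one year
    return total, state

total_15, dist_15 = project_costs(P, annual_cost, [1.0, 0.0, 0.0], 15)
```

A prevention scenario would lower the healthy-to-NCD entry in `P` and rerun the projection, which is the comparison the paper performs at EU scale.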

Keywords: non-communicable diseases, preventive health care, health policy, Markov model, scenario analysis

Procedia PDF Downloads 124
15251 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and evaluate metabolic flux perturbations in normal cells that have been caused by cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells for treatment and to minimize side effects causing toxicity-induced tumorigenesis on normal cells and smaller metabolic perturbations. Through fuzzy set theory, a multiobjective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. The applied nested hybrid differential evolution was applied to solve the trilevel MDM problem using two nutrient media to identify anticancer targets in the genome-scale metabolic model of colorectal cancer, respectively. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolisms, glycerophospholipid biosynthetic pathway and sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for the DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM. 
Two additional media, i.e., DMEM with a cholesterol uptake reaction added and Ham’s medium with it removed, were then used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis again proved identifiable when the culture medium included no cholesterol uptake reaction, but became unidentifiable when such a reaction was included.
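The fuzzy conversion described above can be illustrated with the standard max-min decision-making rule: each objective is mapped to a linear membership degree between its worst and best attainable values, and a candidate is scored by its weakest objective. A minimal sketch, assuming illustrative objective names and bounds not taken from the paper:

```python
# Hedged sketch of max-min fuzzy decision making: candidate values,
# objective names and bounds below are illustrative assumptions, not
# the paper's actual model.

def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst value, 1 at the best."""
    if best == worst:
        return 1.0
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def overall_satisfaction(objectives):
    """Aggregate objectives with the min operator (max-min rule)."""
    return min(membership(v, w, b) for v, w, b in objectives)

# Each tuple: (achieved value, worst acceptable, best achievable).
candidate_a = [(0.8, 0.0, 1.0),   # cancer-cell mortality (maximize)
               (0.9, 0.0, 1.0),   # normal-cell viability (maximize)
               (0.6, 0.0, 1.0)]   # low metabolic perturbation (maximize)
candidate_b = [(1.0, 0.0, 1.0),
               (0.4, 0.0, 1.0),
               (0.9, 0.0, 1.0)]

# The max-min rule prefers the candidate whose weakest objective is strongest.
best = max([candidate_a, candidate_b], key=overall_satisfaction)
```

Candidate B excels on one objective but fails another badly, so the min operator ranks candidate A higher; this is the behaviour that discourages targets with severe side effects.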

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 64
15250 The Effect of Perceived Environmental Uncertainty on Corporate Entrepreneurship Performance: A Field Study in a Large Industrial Zone in Turkey

Authors: Adem Öğüt, M. Tahir Demirsel

Abstract:

Rapid changes and developments today, besides the opportunities and facilities they offer organizations, may also be a source of danger and difficulty due to uncertainty. In order to take advantage of opportunities and to take the necessary measures against possible uncertainties, organizations must continually follow the changes and developments that occur in the business environment and develop flexible structures and strategies for alternative scenarios. Perceived environmental uncertainty is an outcome of managers’ perceptions of the combined complexity, instability and unpredictability of the organizational environment. An environment that is perceived to be complex, changing rapidly, and difficult to predict creates high levels of uncertainty about the appropriate organizational responses to external circumstances. In an uncertain and complex environment, organizations facing cutthroat competition may succeed by developing their corporate entrepreneurial ability. Corporate entrepreneurship is a process that includes many elements, such as innovation, new business creation, renewal, risk-taking and proactiveness. Successful corporate entrepreneurship is a critical factor that contributes significantly to gaining a sustainable competitive advantage, renewing the organization and adapting to the environment. In this context, the objective of this study is to investigate the effect of managers’ perceived environmental uncertainty on corporate entrepreneurship performance. The research was conducted on 222 business executives in one of the major industrial zones of Turkey, the Konya Organized Industrial Zone (KOS). According to the results, a statistically significant positive relationship was observed between perceived environmental uncertainty and corporate entrepreneurial activities.

Keywords: corporate entrepreneurship, entrepreneurship, industrial zone, perceived environmental uncertainty, uncertainty

Procedia PDF Downloads 303
15249 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in drug discovery. To address this issue, we propose four model architectures that combine graphical representation with multiheaded self-attention mechanisms placed at varying positions. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize two transformer models, one with a graph attention mechanism for the drug data and the other with a multiheaded self-attention mechanism for the omics data, to generate latent representations of each. Our model architectures apply an attention mechanism to both drug and multi-omics data, with the goal of procuring more comprehensive latent representations. These latent representations are then concatenated and fed into a fully connected network to predict the IC50 score, a measure of cell drug response. We evaluate all four architectures and report results for each. Our study contributes to the future of drug discovery and precision medicine by seeking to optimize both the time and the accuracy of drug response prediction.
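The multiheaded self-attention mechanisms mentioned above reduce, per head, to scaled dot-product attention. A minimal single-head sketch in plain Python, where the toy vectors merely stand in for drug/omics embeddings and are not taken from the study:

```python
import math

# Single-head scaled dot-product attention, written out explicitly.
# The 2-D "embeddings" below are illustrative stand-ins only.

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention(queries, keys, values):
    """Return one attended output vector per query."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)     # weights sum to 1 per query
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Self-attention: queries, keys and values all come from the same sequence.
x = [[1.0, 0.0], [0.0, 1.0]]
attended = attention(x, x, x)
```

Each query attends most strongly to the key it is most similar to, which is what lets the transformer weigh informative features of the drug or omics sequence when building the latent representation.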

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 129
15248 Competitive Advantage Challenges in the Apparel Manufacturing Industries of South Africa: Application of Porter’s Factor Conditions

Authors: Sipho Mbatha, Anne Mastament-Mason

Abstract:

South African manufacturing global competitiveness was ranked 22nd (out of 38 countries), dropped to 24th in 2013 and is expected to drop further to 25th by 2018. This impacts negatively on the industrialisation project of South Africa. For industrialisation to be achieved through labour-intensive industries like the Apparel Manufacturing Industries of South Africa (AMISA), South Africa needs to identify and respond to factors negatively impacting the development of competitive advantage. This paper applied factor conditions from Porter’s Diamond Model (1990) to understand the various challenges facing the AMISA. The factor conditions highlighted in Porter’s model fall into two groups, namely basic and advanced factors. Two AMISA associations representing over 10 000 employees were interviewed. The largest Clothing, Textiles and Leather (CTL) apparel retail group and a government department implementing the industrialisation policy were also interviewed. The paper points out that while the AMISA have the basic factor conditions necessary for competitive advantage in the clothing and textiles industries, advanced factor coordination has proven to be a challenging task for the AMISA, Higher Education Institutions (HEIs) and government. Poor infrastructural maintenance has contributed to high manufacturing costs, and the lack of advanced technologies has undermined quick-response capability. The use of Porter’s factor conditions as a tool to analyse the sector’s competitive advantage challenges and opportunities has increased knowledge regarding the factors that limit the AMISA’s competitiveness. It is therefore argued that studies of the other Porter’s Diamond Model factors, namely demand conditions; firm strategy, structure and rivalry; and related and supporting industries, could be used to analyse the situation of the AMISA for the purposes of improving competitive advantage.

Keywords: compliance rule, apparel manufacturing industry, factor conditions, advanced skills, South African industrial policy

Procedia PDF Downloads 348
15247 Multi-Spectral Deep Learning Models for Forest Fire Detection

Authors: Smitha Haridasan, Zelalem Demissie, Atri Dutta, Ajita Rattani

Abstract:

Aided by the wind, all it takes is one ember and a few minutes to create a wildfire. Wildfires are growing in frequency and size due to climate change, and they and their consequences are among the major environmental concerns. Every year, millions of hectares of forest are destroyed around the world, causing mass destruction and human casualties. Early detection of wildfire is thus a critical component of mitigating this threat. Many computer vision-based techniques have been proposed for early forest fire detection using video surveillance, operating in various color spaces, namely RGB, HSV, and YCbCr. The aim of this paper is to propose a multi-spectral deep learning model that combines information from these different color spaces at intermediate layers for accurate fire detection. A heterogeneous dataset assembled from publicly available datasets is used for model training and evaluation in this study. The experimental results show that multi-spectral deep learning models obtain an improvement of about 4.68% over those based on a single color space for fire detection.
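The color spaces named above are per-pixel transforms of RGB. As one concrete example, a sketch of the standard ITU-R BT.601 RGB-to-YCbCr conversion often used in fire-detection pipelines (the "flame-like" test pixel is an illustrative assumption; fire pixels typically show high Cr relative to Cb):

```python
# Full-range BT.601 RGB -> YCbCr conversion, per 8-bit pixel.

def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to full-range YCbCr (ITU-R BT.601 coefficients)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b   # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b   # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return y, cb, cr

# A saturated, red-dominant pixel such as a flame region might produce.
y, cb, cr = rgb_to_ycbcr(255, 160, 40)
```

A common YCbCr fire-candidate rule checks Cr > Cb, which holds for the red-dominant pixel above; for any gray pixel both chroma channels sit at the neutral value 128.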

Keywords: deep learning, forest fire detection, multi-spectral learning, natural hazard detection

Procedia PDF Downloads 223
15246 Prediction of the Crustal Deformation of Volcán - Nevado Del RUíz in the Year 2020 Using Tropomi Tropospheric Information, Dinsar Technique, and Neural Networks

Authors: Juan Sebastián Hernández

Abstract:

The Nevado del Ruíz volcano, located on the boundary between the Departments of Caldas and Tolima in Colombia, exhibited unstable behaviour in the course of the year 2020. This volcanic activity led to secondary effects on the crust, making the prediction of deformations a key task for geoscientists. In this article, tropospheric variables such as evapotranspiration, UV aerosol index, carbon monoxide, nitrogen dioxide, methane and surface temperature, among others, are used to train a set of neural networks that predict the behaviour of the resulting phase of an interferogram unwrapped with the DInSAR technique, with the main objective of identifying and characterising the behaviour of the crust based on environmental conditions. For this purpose, the variables were collected, a generalised linear model was fitted, and a set of neural networks was trained. After training, the network was validated with the test data, giving an MSE of 0.17598 and an associated R² of approximately 0.88454. The resulting model provided a dataset with good thematic accuracy, reflecting the behaviour of the volcano in 2020 given a set of environmental characteristics.
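The two validation metrics reported above, MSE and R², can be computed directly from predicted and observed values; a minimal sketch on made-up numbers (the study's actual data are not reproduced here):

```python
# Mean squared error and coefficient of determination, from first principles.

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot   # 1.0 means a perfect fit

# Illustrative targets/predictions standing in for unwrapped-phase values.
phase_true = [0.1, 0.4, 0.35, 0.8, 0.7]
phase_pred = [0.12, 0.37, 0.40, 0.75, 0.68]
```

R² compares residual error against the variance of the observations, so a model can have a small MSE yet a poor R² when the signal itself barely varies; reporting both, as the study does, guards against that.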

Keywords: crustal deformation, Tropomi, neural networks (ANN), volcanic activity, DInSAR

Procedia PDF Downloads 86
15245 Across-Breed Genetic Evaluation of New Zealand Dairy Goats

Authors: Nicolas Lopez-Villalobos, Dorian J. Garrick, Hugh T. Blair

Abstract:

Many dairy goat farmers in New Zealand milk herds of mixed-breed does. Simultaneous evaluation of sires and does across breeds is required to select the best animals for breeding on a common basis. Across-breed estimated breeding values (EBV) and estimated producing values for 208-day lactation yields of milk (MY), fat (FY) and protein (PY), and for somatic cell score (SCS; log2 of somatic cell count, SCC), of Saanen, Nubian, Alpine, Toggenburg and crossbred dairy goats from 75 herds were estimated using a test-day model. Evaluations were based on 248,734 herd-test records representing 125,374 lactations from 65,514 does sired by 930 sires over 9 generations. Averages of MY, FY and PY were 642 kg, 21.6 kg and 19.8 kg, respectively. Average SCC and SCS were 936,518 cells/ml of milk and 9.12. Purebred Saanen does out-produced the other breeds in MY, FY and PY. Average EBV for MY, FY and PY relative to a Saanen base were: Nubian -98 kg, 0.1 kg and -1.2 kg; Alpine -64 kg, -1.0 kg and -1.7 kg; and Toggenburg -42 kg, -1.0 kg and -0.5 kg. First-cross heterosis estimates were 29 kg MY, 1.1 kg FY and 1.2 kg PY. Average EBV for SCS relative to a Saanen base were Nubian 0.041, Alpine -0.083 and Toggenburg 0.094. Heterosis for SCS was 0.03. Breeding values are combined with their respective economic values to calculate an economic index used for ranking sires and does to reflect farm profit.
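The way breed effects and heterosis combine can be sketched with the milk-yield figures reported above (Saanen base, Nubian -98 kg, first-cross heterosis 29 kg). The simple additive-plus-heterosis model below is a standard textbook simplification for illustration, not the paper's full test-day model:

```python
# Expected F1 deviation from the Saanen base, milk yield (MY, kg),
# using the breed effects and heterosis estimate quoted in the abstract.

breed_effect_my = {"Saanen": 0.0, "Nubian": -98.0,
                   "Alpine": -64.0, "Toggenburg": -42.0}
HETEROSIS_MY = 29.0  # kg, first cross only

def expected_f1_deviation(sire_breed, dam_breed):
    """Midparent breed effect plus heterosis for a first cross (kg MY)."""
    additive = (breed_effect_my[sire_breed] + breed_effect_my[dam_breed]) / 2
    heterosis = HETEROSIS_MY if sire_breed != dam_breed else 0.0
    return additive + heterosis

# A Saanen x Nubian first cross: halfway between the parents, plus heterosis.
f1 = expected_f1_deviation("Saanen", "Nubian")
```

Under this simplification a Saanen x Nubian F1 is expected to sit 20 kg below the Saanen base: halfway to the Nubian effect (-49 kg), partly recovered by 29 kg of heterosis.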

Keywords: breed effects, dairy goats, milk traits, test-day model

Procedia PDF Downloads 315
15244 How Cultural Tourists Perceive Authenticity in World Heritage Historic Centers: An Empirical Research

Authors: Odete Paiva, Cláudia Seabra, José Luís Abrantes, Fernanda Cravidão

Abstract:

There is a clear ‘cult of authenticity’, at least in modern Western society. There is thus a need to analyze tourists’ perception of authenticity, bearing in mind the destination, its attractions, motivations, cultural distance, and contact with other tourists. Our study investigates the relationships among cultural values, image, sense of place, perception of authenticity and behavioral intentions at World Heritage Historic Centers. From a theoretical perspective, few studies focus on the impact of cultural values, image and sense of place on tourists’ authenticity perceptions and behavioral intentions. The intention of this study is to help close this gap. A survey was applied to collect data from tourists visiting two World Heritage Historic Centers, Guimarães in Portugal and Cordoba in Spain. Data were analyzed to establish a structural equation model (SEM). Discussion centers on the implications of the model for theory and for the managerial development of tourism strategies. Recommendations for destination managers and promoters and for tourism organization administrators are addressed.

Keywords: authenticity perception, behavior intentions, cultural tourism, cultural values, world heritage historic centers

Procedia PDF Downloads 294
15243 Hydrological Analysis for Urban Water Management

Authors: Ranjit Kumar Sahu, Ramakar Jha

Abstract:

Urban water management is the practice of managing freshwater, wastewater, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha, connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of urban sprawl has been carried out for 1980-2014, specifically after 2000. This work presents the following: (i) time series analysis of hydrological data (groundwater and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of groundwater levels in the area, which is discussed briefly. Related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed. Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and wastewater treatment, (iii) increase the economic efficiency of services to sustain operations and investments in water, wastewater, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
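A declining rainfall trend of the kind mentioned above is typically quantified as an ordinary least-squares slope over the annual series; a minimal sketch on made-up numbers (the study's data are not reproduced here):

```python
# Ordinary least-squares trend of an annual series, from first principles.

def ols_slope(years, values):
    """Least-squares slope of values against years (units per year)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Illustrative annual rainfall totals (mm) with a downward drift.
years = list(range(2000, 2010))
rainfall_mm = [1450, 1430, 1460, 1400, 1390, 1410, 1370, 1350, 1360, 1330]
trend = ols_slope(years, rainfall_mm)  # negative slope => declining rainfall
```

A negative slope (here roughly -13 mm per year on the toy series) is the quantitative counterpart of the "declining trend" language in the abstract.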

Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change

Procedia PDF Downloads 413
15242 Bi-Liquid Free Surface Flow Simulation of Liquid Atomization for Bi-Propellant Thrusters

Authors: Junya Kouwa, Shinsuke Matsuno, Chihiro Inoue, Takehiro Himeno, Toshinori Watanabe

Abstract:

Bi-propellant thrusters use impinging jet atomization to atomize liquid fuel and oxidizer. The atomized propellants mix and combust due to auto-ignition. Therefore, simulating the primary atomization phenomenon is important for predicting thruster performance; in particular, the local mixture ratio can be used as an indicator of thrust performance, so it is useful to evaluate it from numerical simulations. In this research, we propose a numerical method that accounts for two liquids and their mixing, and implement it in CIP-LSM, a two-phase flow simulation solver that uses the level-set and MARS methods for interface tracking and can predict the local mixture ratio distribution downstream of the impingement point. A new parameter, beta, defined as the volume fraction of one liquid in the mixed liquid within a cell, is introduced; the solver calculates the advection of beta and its inflow and outflow fluxes for each cell. To validate the solver, we conducted a simple experiment and ran the corresponding simulation. The solver correctly predicts the penetration length of a liquid jet, and it is confirmed that it can simulate the mixing of liquids. We then applied the solver to the numerical simulation of impinging jet atomization. The inclination angle of the fan after impingement in the bi-liquid condition agrees reasonably with the theoretical value, and the mixing of the liquids is captured in this result as well. Furthermore, the simulation results clarify that the injection conditions drastically affect the atomization process and the local mixture ratio distribution downstream.
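The per-cell inflow/outflow bookkeeping for a volume fraction like beta can be illustrated with a first-order upwind scheme in one dimension. This is a generic sketch for intuition only, not the actual CIP-LSM implementation:

```python
# One explicit upwind step for d(beta)/dt + u * d(beta)/dx = 0, with u > 0.
# beta is the volume fraction of one liquid in each cell (0..1).

def advect_upwind(beta, u, dt, dx):
    c = u * dt / dx                     # CFL number; must be <= 1 for stability
    assert 0.0 < c <= 1.0
    new = beta[:]                       # cell 0 acts as a fixed inflow boundary
    for i in range(1, len(beta)):
        # Outflow from cell i is balanced by inflow from the upwind cell i-1.
        new[i] = beta[i] - c * (beta[i] - beta[i - 1])
    return new

# One liquid fills the left half (beta = 1), the other the right half (beta = 0);
# the mixture interface is carried to the right by the flow.
beta = [1.0] * 5 + [0.0] * 5
for _ in range(4):
    beta = advect_upwind(beta, u=1.0, dt=0.5, dx=1.0)
```

The upwind scheme keeps beta bounded in [0, 1] (a physical requirement for a volume fraction), at the cost of smearing the interface, which is precisely why sharp-interface machinery such as level-set/MARS is used in the real solver.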

Keywords: bi-propellant thrusters, CIP-LSM, free-surface flow simulation, impinging jet atomization

Procedia PDF Downloads 269
15241 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity and, consequently, memory space used. The standard Hough transform has successfully been applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is; increasing the accuracy consequently consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified algorithm resolves the conflict between memory space, running time and accuracy. Our algorithm starts with angle estimation accurate to zero decimal places (whole degrees) using the standard Hough transform, achieving minimal running time and space but limited accuracy.
To increase accuracy, suppose the angle estimated by this coarse pass is x degrees; we then rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction algorithm for text images is implemented in MATLAB. Memory space and processing time are tabulated under the assumption that the skew angle lies between 0° and 45°. The simulation results in MATLAB show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
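The coarse-to-fine iteration described above can be sketched as follows. Here `hough_score` is a synthetic stand-in for locating the Hough accumulator peak (real image voting is out of scope), and the default range and target precision are illustrative assumptions:

```python
# Coarse-to-fine angle search: estimate to whole degrees first, then
# re-search only a narrow window around the estimate at each finer step.

def hough_score(angle, true_skew=12.34):
    """Synthetic accumulator peak: highest when angle matches the skew."""
    return -abs(angle - true_skew)

def coarse_to_fine_skew(score, lo=0.0, hi=45.0, decimals=2):
    best = lo
    for d in range(decimals + 1):          # d = 0: whole degrees, then finer
        step = 10.0 ** (-d)
        n = int(round((hi - lo) / step))
        angles = [lo + i * step for i in range(n + 1)]
        best = max(angles, key=score)      # stand-in for the accumulator peak
        lo, hi = best - step, best + step  # narrow window around the estimate
    return round(best, decimals)

estimate = coarse_to_fine_skew(hough_score)
```

The coarse pass evaluates 46 whole-degree candidates and each refinement pass only about 21 more, instead of the thousands of candidates a single fine-stepped sweep of the full 0° to 45° range would require; this is the space/time saving the modified algorithm exploits.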

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 141
15240 Designing Sustainable Building Based on Iranian's Windmills

Authors: Negar Sartipzadeh

Abstract:

Energy-conscious design, which coordinates with the Earth’s ecological systems over its life cycle, has the least negative impact on the environment and wastes the fewest resources. Due to the increase in world population and the consumption of fossil fuels, which causes greenhouse gas production and environmental pollution, mankind is looking for renewable and sustainable energies. Iranian vernacular construction is clear evidence of energy-aware design: our predecessors were forced to rely on natural resources and sustainable energies, and to attend to environmental issues that the wider world has only recently come to consider. One of these endless energies is wind energy. The foundations of traditional Iranian architecture offer an appropriate model for addressing the contemporary environmental and energy crises. This paper is an effort to recognize and introduce the unique characteristics of Iranian architecture in its application of aerodynamic and hydraulic energies derived from the wind, the most common and major use of sustainable energy in the traditional architecture of Iran. By reviewing windmills, as a model of Iranian vernacular construction, and how they exploit sustainable energy sources, this research therefore attempts to suggest a hybrid system for the design of new buildings in a region such as Nashtifan, which has strong wind potential.

Keywords: renewable energy, sustainable building, windmill, Iranian architecture

Procedia PDF Downloads 406
15239 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
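The detection rule implied above, i.e. flag an input when the autoencoder's reconstruction MSE exceeds a threshold calibrated on benign data, can be sketched minimally. The "autoencoder" below is a stand-in function, and the threshold and vectors are illustrative assumptions, not the paper's trained model:

```python
# Reconstruction-error thresholding: inputs the autoencoder cannot
# reconstruct well are flagged as potentially adversarial.

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def is_adversarial(x, autoencoder, threshold):
    """Flag inputs whose reconstruction MSE exceeds the benign threshold."""
    return mse(x, autoencoder(x)) > threshold

# Stand-in autoencoder: imagine it learned to reproduce benign inputs
# (values near 0.5) but fails on heavily perturbed ones.
def toy_autoencoder(x):
    return [0.5 for _ in x]

benign = [0.52, 0.48, 0.50, 0.51]
perturbed = [0.9, 0.1, 0.95, 0.05]

flag_benign = is_adversarial(benign, toy_autoencoder, threshold=0.01)
flag_adv = is_adversarial(perturbed, toy_autoencoder, threshold=0.01)
```

In practice the threshold would be set from the distribution of reconstruction errors on held-out benign images, trading off false alarms against missed attacks.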

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 93
15238 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers’ River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications but has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of combined 1D/2D unsteady flow hydraulic models in HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave equations (DW), the Shallow Water equations with the Eulerian-Lagrangian method (SWE-ELM), and the Shallow Water equations with the Eulerian method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave results were comparable to those of the two Shallow Water equation sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system. The models did not, however, accurately capture tidal effects within the 1D flow area.

Keywords: Delaware bay, estuarine hydraulics, Galveston bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 187