Search results for: cohesion metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 897

267 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

In a logistics network, opened facilities are expected to work continuously over a long time horizon without any failure; in real-world problems, however, facilities may face disruptions. This paper studies a reliable joint inventory-location problem to optimize the cost of facility locations, customer assignment, and inventory management decisions when facilities face failure risks and may become inoperative. In our model, we assume that when a facility fails, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated on the basis of the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) to indicate that a customer is not assigned to any facility. The problem is formulated as a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second minimizes the maximum expected customer costs under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms are applied to find the Pareto-archive solutions. Response Surface Methodology (RSM) is also applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms on three metrics, and the results show that NSGA-II is more suitable for our model.
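
Since both NSGA-II and MOSS maintain a Pareto archive, the core operation behind the comparison is the non-dominance check between candidate solutions on the two objectives. A minimal sketch of that filter, with illustrative objective values (not the authors' implementation):

    import numpy as np

    def pareto_archive(objectives):
        """Return indices of non-dominated points for a minimization problem.

        objectives: (n, 2) array; column 0 = facility + inventory cost,
        column 1 = maximum expected customer cost.
        """
        n = objectives.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                # j dominates i if it is no worse on both objectives
                # and strictly better on at least one
                if i != j and np.all(objectives[j] <= objectives[i]) \
                        and np.any(objectives[j] < objectives[i]):
                    keep[i] = False
                    break
        return np.where(keep)[0]

    # Illustrative candidate solutions (cost objective 1, cost objective 2)
    candidates = np.array([[10.0, 8.0], [9.0, 9.5], [11.0, 7.0], [12.0, 9.0]])
    print(pareto_archive(candidates))  # the last point is dominated by [10.0, 8.0]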

Keywords: joint inventory-location problem, facility location, NSGA-II, MOSS

Procedia PDF Downloads 525
266 Measuring the Influence of Functional Proximity on Environmental Urban Performance via IMM: Four Study Cases in Milan

Authors: Massimo Tadi, M. Hadi Mohammad Zadeh, Ozge Ogut

Abstract:

Although the way cities' forms are structured has been studied, more effort is needed toward a systemic comprehension and evaluation of urban morphology through quantitative metrics able to describe the performance of a city in relation to its formal properties. More research is required in this direction in order to better describe urban form characteristics and their impact on the environmental performance of cities, and to increase their sustainability stewardship. With the aim of developing a better understanding of the built environment's systemic structure, this paper presents a holistic methodology for studying the behavior of the built environment and investigates methods for measuring the effect of urban structure on environmental performance. This goal is pursued through an inquiry into the morphological components of urban systems and the complex relationships between them. In particular, this paper focuses on proximity, here the proximity of different land uses, a concept with which the Integrated Modification Methodology (IMM) explains how land-use allocation might affect the choice of mobility in neighborhoods and, especially, encourage or discourage non-motorized mobility. This paper uses proximity to demonstrate that structural attributes relate quantifiably to performance behavior in the city. The target is to devise a mathematical pattern from the structural elements and correlate it directly with urban performance indicators concerned with environmental sustainability. The paper presents some results of this rigorous investigation of urban proximity and its correlation with performance indicators in four different areas in the city of Milan, each characterized by different morphological features.

Keywords: built environment, ecology, sustainable indicators, sustainability, urban morphology

Procedia PDF Downloads 168
265 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common kinds of health issue faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes time and can yield inaccurate results. Detection at an early stage is challenging, yet skin cancer easily spreads through the whole body and increases the mortality rate. Skin cancer is curable when it is detected at an early stage. The critical task for correct and accurate classification is skin cancer identification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer image datasets. Diagnostic accuracy is therefore improved by an automated skin cancer detection and classification framework, which also mitigates the scarcity of human experts. Recently, deep learning techniques such as the Convolutional neural network (CNN), Deep belief neural network (DBN), Artificial neural network (ANN), Recurrent neural network (RNN), and Long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases along with the mitigation of computational complexity and time consumption.
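
All of the survey's evaluation metrics derive from the confusion matrix of a classifier. A minimal sketch for a binary lesion classifier, with illustrative counts that do not come from any reviewed study:

    # Binary confusion-matrix counts (illustrative values)
    tp, fp, tn, fn = 80, 10, 95, 15

    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure   = 2 * precision * recall / (precision + recall)

    print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
          f"recall={recall:.3f} specificity={specificity:.3f} F1={f_measure:.3f}")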

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 128
264 ALEF: An Enhanced Approach to Arabic-English Bilingual Translation

Authors: Abdul Muqsit Abbasi, Ibrahim Chhipa, Asad Anwer, Saad Farooq, Hassan Berry, Sonu Kumar, Sundar Ali, Muhammad Owais Mahmood, Areeb Ur Rehman, Bahram Baloch

Abstract:

Accurate translation between structurally diverse languages, such as Arabic and English, presents a critical challenge in natural language processing due to significant linguistic and cultural differences. This paper investigates the effectiveness of Facebook’s mBART model, fine-tuned specifically for sequence-to-sequence (seq2seq) translation tasks between Arabic and English, and enhanced through advanced refinement techniques. Our approach leverages the Alef Dataset, a meticulously curated parallel corpus spanning various domains to capture the linguistic richness, nuances, and contextual accuracy essential for high-quality translation. We further refine the model’s output using advanced language models such as GPT-3.5 and GPT-4, which improve fluency and coherence and correct grammatical errors in translated texts. The fine-tuned model demonstrates substantial improvements, achieving a BLEU score of 38.97, a METEOR score of 58.11, and a TER score of 56.33, surpassing widely used systems such as Google Translate. These results underscore the potential of mBART, combined with refinement strategies, to bridge the translation gap between Arabic and English, providing a reliable, context-aware machine translation solution that is robust across diverse linguistic contexts.
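
For reference, BLEU and TER scores of the kind reported above can be reproduced with the sacrebleu package, assuming it is available; the sentences below are illustrative, not from the Alef Dataset:

    # pip install sacrebleu  (assumed available; scores below are illustrative)
    import sacrebleu

    hypotheses = ["the cat sat on the mat"]           # system output (English side)
    references = [["the cat is sitting on the mat"]]  # one reference stream

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    ter = sacrebleu.corpus_ter(hypotheses, references)
    print(f"BLEU={bleu.score:.2f}  TER={ter.score:.2f}")  # higher BLEU, lower TER is better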

Keywords: natural language processing, machine translation, fine-tuning, Arabic-English translation, transformer models, seq2seq translation, translation evaluation metrics, cross-linguistic communication

Procedia PDF Downloads 7
263 Influence of Travel Time Reliability on Elderly Drivers' Crash Severity

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

Although older drivers (defined as those aged 65 and above) are less involved in speeding, alcohol use, and night driving, they are more vulnerable to severe crashes. The major contributing factors for severe crashes include frailty and medical complications. Several studies have evaluated the factors contributing to crash severity. However, few studies have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults, who face several challenges in driving including hearing difficulties, declining processing skills, and cognitive problems, is not well established. Therefore, this study focuses on determining the possible impacts of TTR on traffic safety, with a focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics, that is, the planning time index, the buffer index, the standard deviation of travel time, and the probability of congestion. Four years of information on crashes occurring on these freeway links was acquired. A binary logit model estimated using the Markov Chain Monte Carlo (MCMC) sampling technique was used to evaluate variables that could be influencing elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver. The results suggest that a one-unit increase in the probability of congestion reduces the likelihood of a severe elderly-driver crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on elderly crash severity.
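
The TTR metrics listed here have standard definitions in terms of the travel time distribution. A sketch of their computation from a series of link travel times, where the free-flow time and congestion threshold are assumed values:

    import numpy as np

    travel_times = np.array([12.1, 11.8, 13.0, 19.5, 12.4, 22.3, 12.0, 14.7])  # minutes (illustrative)
    free_flow = 11.5             # free-flow travel time, minutes (assumed)
    congested = 1.5 * free_flow  # threshold defining "congested" (assumed)

    mean_tt = travel_times.mean()
    p95 = np.percentile(travel_times, 95)

    planning_time_index = p95 / free_flow      # worst-case time relative to free flow
    buffer_index = (p95 - mean_tt) / mean_tt   # extra time travellers must budget
    tt_std = travel_times.std(ddof=1)          # variability of travel time
    prob_congestion = np.mean(travel_times > congested)

    print(planning_time_index, buffer_index, tt_std, prob_congestion)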

Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling

Procedia PDF Downloads 493
262 Tractography Analysis of the Evolutionary Origin of Schizophrenia

Authors: Asmaa Tahiri, Mouktafi Amine

Abstract:

A substantial body of traditional medical research has been directed at managing and treating mental disorders. At the present time, to our best knowledge, it is believed that a fundamental understanding of the underlying causes of the majority of psychological disorders needs to be explored further to inform early diagnosis, symptom management, and treatment. The emerging field of evolutionary psychology is a promising prospect for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, as a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, represented in the brain connectivity and asymmetry directly linked to humans' higher cognition, in contrast to other primates, which are our closest living representation of the structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This research paper presents the results of tractography analysis using multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, schizophrenia-affected subjects, and primates to illustrate the relevance of the aforementioned brain regions' connectivity and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, overlaid to compute distances and highlight disconnectivity patterns, in conjunction with other fiber tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD), and Radial Diffusivity (RD).
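
The fiber-tracking metrics named above are scalar functions of the diffusion tensor's eigenvalues. A sketch of their textbook definitions, with illustrative eigenvalues rather than values from the DWI datasets used:

    import numpy as np

    # Diffusion-tensor eigenvalues, sorted l1 >= l2 >= l3 (illustrative, mm^2/s)
    l1, l2, l3 = 1.7e-3, 0.4e-3, 0.3e-3

    md = (l1 + l2 + l3) / 3                      # Mean Diffusivity
    rd = (l2 + l3) / 2                           # Radial Diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))      # Fractional Anisotropy, in [0, 1]

    print(f"FA={fa:.3f}  MD={md:.2e}  RD={rd:.2e}")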

Keywords: tractography, evolutionary psychology, schizophrenia, brain connectivity

Procedia PDF Downloads 71
261 Ultrasound-Assisted Extraction of Bioactive Compounds from Cocoa Shell and Their Encapsulation in Gum Arabic and Maltodextrin: A Technology to Produce Functional Food Ingredients

Authors: Saeid Jafari, Khursheed Ahmad Sheikh, Randy W. Worobo, Kitipong Assatarakul

Abstract:

In this study, the extraction of cocoa shell powder (CSP) was optimized, and the optimized extracts were spray-dried for encapsulation purposes. Temperature (45–65 °C), extraction time (30–60 min), and ethanol concentration (60–100%) were the extraction parameters. The response surface methodology analysis revealed that the model was significant (p ≤ 0.05) in interactions between all variables (total phenolic compound, total flavonoid content, and antioxidant activity as measured by 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays), with the lack-of-fit test for the model being insignificant (p > 0.05). Temperature (55 °C), time (45 min), and ethanol concentration (60%) were found to be the optimal extraction conditions. For spray-drying encapsulation, differences in some quality metrics (e.g., water solubility, water activity) were insignificant (p > 0.05). The microcapsules were found to be spherical in shape using a scanning electron microscope. Thermogravimetric and differential thermogravimetric measurements of the microcapsules revealed nearly identical results. The gum arabic + maltodextrin microcapsule (GMM) showed potential antibacterial (zone of inhibition: 11.50 mm; lower minimum inhibitory concentration: 1.50 mg/mL) and antioxidant (DPPH: 1063 mM trolox/100 g dry wt.) activities (p ≤ 0.05). In conclusion, the microcapsules in this study, particularly GMM, are promising antioxidant and antibacterial agents to be fortified as functional food ingredients for the production of nutraceutical foods with health-promoting properties.

Keywords: functional foods, cocoa shell powder, antioxidant activity, encapsulation, extraction

Procedia PDF Downloads 57
260 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation

Authors: Alae El Fahsi

Abstract:

This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.

Keywords: smart cities, digital governance, urban planning, strategic design

Procedia PDF Downloads 58
259 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi-Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a Token Bucket algorithm; with the buffer allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider in its decision process due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and gives the algorithm the high-dimensional ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop across the whole network, specifically in the source routers, can be seen. These methods are implemented in our previously proposed intelligent simulation environment so that the performance metrics can be better compared. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of the bandwidth and buffer capacities preallocated to each port.
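
The Token Bucket shaper that the agents tune can be sketched as follows; in the paper's framework the RL agent would adjust the token generation rate at run time, and the parameter values below are illustrative:

    import time

    class TokenBucket:
        """Token bucket traffic shaper; an RL agent could adjust `rate` at run time."""

        def __init__(self, rate, capacity):
            self.rate = rate          # tokens generated per second (agent-tunable)
            self.capacity = capacity  # bucket depth, i.e., maximum burst size
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, packet_size):
            """Forward a packet of `packet_size` tokens if enough tokens are available."""
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_size:
                self.tokens -= packet_size
                return True   # packet conforms, forward it
            return False      # packet exceeds the shaped rate, queue or drop it

    bucket = TokenBucket(rate=1000, capacity=1500)  # 1000 tokens/s, 1500-token bursts
    print(bucket.allow(1200), bucket.allow(1200))   # second back-to-back packet is shaped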

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 518
258 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks

Authors: Gunasekaran Raja, Ramkumar Jayaraman

Abstract:

In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology based on the cross-layer (both centralized and distributed) environment to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed in designing the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]). Node communities are organized based on networking terminology. Based on the CCL-N topology, various simulation analyses for both transparent and non-transparent relays are tabulated and throughput efficiency is calculated. The weighted load balancing problem plays a challenging role in IEEE 802.16 networks. The CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three aspects: a transmission mechanism based on identical communities, on different communities, and on identical node communities. The CoTS scheme helps in identifying the weighted load balancing problem. Based on the analytical results, the modularity value is inversely proportional to the error value. The modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for identical node communities has no impact, since the modularity value is the same for all the network groups. This paper discusses these three community aspects based on the modularity value, which help in solving the weighted load balancing and CoTS problems.
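
The modularity value referred to above is, by standard definition, the Newman modularity of a community partition. A sketch using networkx on a toy topology that stands in for MBS/RS node groups (an assumption, not the paper's network):

    import networkx as nx
    from networkx.algorithms.community import modularity

    # Toy topology: two node communities bridged by one link (stand-in for MBS/RS groups)
    G = nx.Graph([(0, 1), (1, 2), (0, 2),      # community A
                  (3, 4), (4, 5), (3, 5),      # community B
                  (2, 3)])                     # bridge link

    communities = [{0, 1, 2}, {3, 4, 5}]
    q = modularity(G, communities)
    print(f"modularity = {q:.3f}")  # closer to 1 means stronger community structure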

Keywords: cross layer network topology, concurrent scheduling, modularity value, network communities, weighted load balancing

Procedia PDF Downloads 265
257 The Sociocultural, Economic, and Environmental Contestations of Agbogbloshie: A Critical Review

Authors: Khiddir Iddris, Martin Oteng-Ababio, Andreas Bürkert, Christoph Scherrer, Katharina Hemmler

Abstract:

Agbogbloshie, as an informal settlement and economy where the e-waste sector thrives, has become a global hub of complex urban contestations involving sociocultural, economic, and environmental dimensions, owing to the implications that e-waste and informal economic patterns have for livelihoods, urbanisation, development, and sustainability. Multi-author collaborations have produced an ever-growing body of literature on Agbogbloshie and the informal e-waste economy. There is, however, a dearth of assessments of Agbogbloshie as an urban informal settlement's intricate nexus of socioecological contestations. We address this gap by systematising the context knowledge from the literature, navigating the complex terrain of Agbogbloshie's challenges, and employing a multidimensional lens to unravel the sociocultural intricacies, economic dynamics, and environmental complexities shaping its identity. A systematic critical review approach was adopted, with a pragmatic consolidation of content analysis and controversy mapping grounded in the concept of ‘sustainable rurbanism,’ which highlighted core themes and identified contrasting viewpoints. An analytical framework is presented. Five categories (geohistorical, sociocultural, economic, environmental, and future trends) are proposed as an approach to systematising the literature. The review finds that the sociocultural dimension unveils a mosaic of cultural amalgamation, communal identity, and tensions impacting community cohesion. The analysis of economic intricacies reveals the prevalence of informal economies sustaining livelihoods yet entrenching economic disparities and marginalisation. Environmental scrutiny exposes the grim realities of e-waste disposal, pollution, and land use conflicts. The findings suggest a high resilience within the community and the potential for sustainable trajectories. Theoretical and conceptual synergy is limited. This review provides a comprehensive exploration, offering insights and directions for future research, policy formulation, and community-driven interventions aimed at fostering sustainable transformations in Agbogbloshie and analogous urban contexts.

Keywords: Agbogbloshie, economic complexities, environmental challenges, resilience, sociocultural dynamics, sustainability, urban informal settlement

Procedia PDF Downloads 71
256 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
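
A sketch of the feature-extraction and training pipeline described above, assuming the rdkit and scikit-learn packages; the SMILES strings and solubility class labels are illustrative stand-ins for AqSolDB entries:

    # pip install rdkit scikit-learn  (assumed available)
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import MACCSkeys, Descriptors
    from sklearn.ensemble import RandomForestClassifier

    def featurize(smiles):
        """MACCS bits plus a few RDKit descriptors, echoing the paper's feature set."""
        mol = Chem.MolFromSmiles(smiles)
        maccs = list(MACCSkeys.GenMACCSKeys(mol))   # 167 bits (bit 0 unused by convention)
        descs = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]
        return np.array(maccs + descs)

    # Illustrative stand-ins for AqSolDB molecules and binned solubility classes
    smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCCCCCCC"]
    labels = [1, 0, 1, 0]   # 1 = soluble, 0 = insoluble (assumed binning)

    X = np.vstack([featurize(s) for s in smiles])
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    print(model.predict([featurize("CCN")]))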

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 40
255 Assessment of Mountain Hydrological Processes in the Gumera Catchment, Ethiopia

Authors: Tewele Gebretsadkan Haile

Abstract:

Mountain terrains are essential to regional water resources, regulating the hydrological processes that feed downstream water supplies. Nevertheless, the limited observed ground data in complex topography poses challenges for water resource management, which is why satellite products are used in this study. This study evaluates the hydrological processes of the mountain catchment of Gumera, Ethiopia, using the HBV-light model with the CHIRPS satellite precipitation product for the period 1996 to 2010 and an area coverage of 1289 km². The catchment is dominated by cultivation, and elevation ranges from 1788 to 3606 m above sea level. Three meteorological stations were used for downscaling the satellite data and one streamflow gauge for calibration and validation. The total annual water balance shows 1410 mm of precipitation and 828 mm of simulated surface runoff, compared to 1042 mm of observed streamflow, with an actual evapotranspiration estimate of 586 mm against a potential evapotranspiration of 1495 mm. Temperatures range from 9°C in winter to 21°C. The catchment contributes 74% of the total runoff as quick runoff and 26% as lower groundwater storage, which sustains streamflow during low-flow periods. Model uncertainty was measured using several metrics: the coefficient of determination (0.76), model efficiency (0.74), efficiency for log(Q) (0.66), and flow-weighted efficiency (0.70). The results highlight that the HBV model captures the mountain hydrology; they attribute the quick runoff to the traditional agricultural system and the slope factor of the topography, and adaptation measures for water resource management are recommended.
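
The quoted model efficiency and efficiency for log(Q) are typically Nash-Sutcliffe efficiencies computed on flow and log-flow; a sketch under that assumption, with illustrative discharge series rather than the Gumera record:

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Illustrative daily discharge series (m^3/s), not the Gumera record
    q_obs = [5.2, 7.8, 15.3, 30.1, 22.4, 12.0, 8.5]
    q_sim = [4.9, 8.4, 14.0, 27.5, 24.0, 11.2, 7.9]

    print(f"NSE     = {nse(q_obs, q_sim):.2f}")
    print(f"log-NSE = {nse(np.log(q_obs), np.log(q_sim)):.2f}")  # emphasizes low flows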

Keywords: mountain hydrology, CHIRPS, Gumera, HBV model

Procedia PDF Downloads 11
254 Quantification of the Gumera Catchment's Mountain Hydrological Processes in Ethiopia

Authors: Tewele Gebretsadkan Haile

Abstract:

Mountain terrains are essential to regional water resources, regulating the hydrological processes that feed downstream water supplies. Nevertheless, the limited observed ground data in complex topography poses challenges for water resource management, which is why satellite products are used in this study. This study evaluates the hydrological processes of the mountain catchment of Gumera, Ethiopia, using the HBV-light model with the CHIRPS satellite precipitation product for the period 1996 to 2010 and an area coverage of 1289 km². The catchment is dominated by cultivation, and elevation ranges from 1788 to 3606 m above sea level. Three meteorological stations were used for downscaling the satellite data and one streamflow gauge for calibration and validation. The total annual water balance shows 1410 mm of precipitation and 828 mm of simulated surface runoff, compared to 1042 mm of observed streamflow, with an actual evapotranspiration estimate of 586 mm against a potential evapotranspiration of 1495 mm. Temperatures range from 9°C in winter to 21°C. The catchment contributes 74% of the total runoff as quick runoff and 26% as lower groundwater storage, which sustains streamflow during low-flow periods. Model uncertainty was measured using several metrics: the coefficient of determination (0.76), model efficiency (0.74), efficiency for log(Q) (0.66), and flow-weighted efficiency (0.70). The results highlight that the HBV model captures the mountain hydrology; they attribute the quick runoff to the traditional agricultural system and the slope factor of the topography, and adaptation measures for water resource management are recommended.

Keywords: mountain hydrology, CHIRPS, HBV model, Gumera

Procedia PDF Downloads 10
253 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings

Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi

Abstract:

Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer and is classified by the IARC as a Group 1 carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden, such that 21st Century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy efficiency measures within Canadian and Swedish building codes coincided with opposing radon level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.

Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden

Procedia PDF Downloads 85
252 Predicting Emerging Agricultural Investment Opportunities: The Potential of Structural Evolution Index

Authors: Kwaku Damoah

Abstract:

The agricultural sector is characterized by continuous transformation, driven by factors such as demographic shifts, evolving consumer preferences, climate change, and migration trends. This dynamic environment presents complex challenges for key stakeholders, including farmers, governments, and investors, who must navigate these changes to achieve optimal investment returns. To effectively predict market trends and uncover promising investment opportunities, a systematic, data-driven approach is essential. This paper introduces the Structural Evolution Index (SEI), a machine learning-based methodology. SEI is specifically designed to analyse long-term trends and forecast the potential of emerging agricultural products for investment. Versatile in application, it evaluates various agricultural metrics such as production, yield, trade, land use, and consumption, providing a comprehensive view of the evolution within agricultural markets. By harnessing data from the statistical database of the UN Food and Agriculture Organization (FAOSTAT), this study demonstrates the SEI's capabilities through comparative exploratory analysis and an evaluation of international trade in agricultural products, focusing on Malaysia and Singapore. The SEI methodology reveals intricate patterns and transitions within the agricultural sector, enabling stakeholders to strategically identify and capitalize on emerging markets. This predictive framework is a powerful tool for decision-makers, offering crucial insights that help anticipate market shifts and align investments with anticipated returns.

Keywords: agricultural investment, algorithm, comparative exploratory analytics, machine learning, market trends, predictive analytics, structural evolution index

Procedia PDF Downloads 63
251 Shear Strength Characteristics of Sand Mixed with Particulate Rubber

Authors: Firas Daghistani, Hossam Abuel Naga

Abstract:

Waste tyres are a global problem with a negative effect on the environment: approximately one billion waste tyres are discarded worldwide yearly. Waste tyres are discarded in stockpiles, where they harm the environment in many ways. Finding applications for these materials can help in reducing this global problem. One such application is recycling the waste material for use in geotechnical engineering. Recycled waste tyre particulates can be mixed with sand to form a lightweight material with varying shear strength characteristics. Contradictory results are found in the literature on the inclusion of particulate rubber in sand: some experiments found that the inclusion of particulate rubber can increase the shear strength of the mixture, while other experiments stated that it decreases the shear strength. This research further investigates the inclusion of particulate rubber in sand and whether it increases or decreases the shear strength characteristics of the mixture. For the experiment, a series of direct shear tests were performed on a poorly graded sand with a mean particle size of 0.32 mm mixed with recycled poorly graded particulate rubber with a mean particle size of 0.51 mm. The shear tests were performed at four normal stresses (30, 55, 105, and 200 kPa) at a shear rate of 1 mm/minute. Different percentages of particulate rubber content were used in the mixture, i.e., 10%, 20%, 30%, and 50% of the sand's dry weight, at three density states, namely loose, slightly dense, and dense. The size ratio of the mixture, which is the mean particle size of the particulate rubber divided by the mean particle size of the sand, was 1.59. The results identified multiple parameters that can influence the shear strength of the mixture: normal stress, particulate rubber content, mixture gradation, mixture size ratio, and the mixture's density. The inclusion of particulate rubber in sand decreased the internal friction angle and increased the apparent cohesion. Overall, the inclusion of particulate rubber did not have a significant influence on the shear strength of the mixture. For all the dense states at the low normal stresses of 30 and 55 kPa, the inclusion of particulate rubber showed a slight increase in the shear strength, peaking at 20% rubber content of the sand's dry weight. On the other hand, at the high normal stresses of 105 and 200 kPa, there was a slight decrease in the shear strength.
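
The friction angle and apparent cohesion reported from a direct shear series follow from fitting the Mohr-Coulomb envelope tau = c + sigma*tan(phi) to the peak shear stresses. A sketch with illustrative peak stresses, not the measured data:

    import numpy as np

    # Normal stresses applied in the tests (kPa) and illustrative peak shear stresses (kPa)
    sigma_n = np.array([30.0, 55.0, 105.0, 200.0])
    tau_peak = np.array([28.0, 44.0, 75.0, 135.0])

    # Linear fit tau = c + sigma_n * tan(phi)  (Mohr-Coulomb failure envelope)
    slope, cohesion = np.polyfit(sigma_n, tau_peak, 1)
    phi = np.degrees(np.arctan(slope))

    print(f"apparent cohesion c = {cohesion:.1f} kPa, friction angle phi = {phi:.1f} deg")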

Keywords: shear strength, direct shear, sand-rubber mixture, waste material, granular material

Procedia PDF Downloads 132
250 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the server performance, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency, and throughput, in five test scenarios, for a total of 26 experiments during the loading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, search times improve substantially, computational resources are optimized, and request management of the telemedicine image service is enhanced. Based on the experiments carried out, search efficiency increased by 45% in relation to sequential search, since false positives are avoided in the management and acquisition of the information when downloading a diagnostic image. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
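
A sketch of the underlying idea, training a decision tree on image metadata so candidate images can be filtered before retrieval; the field names, values, and labels are hypothetical, not the study's schema:

    # pip install scikit-learn  (assumed available)
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical DICOM metadata records and whether each matched a past query
    records = [
        {"Modality": "CT", "BodyPart": "CHEST", "SliceThickness": 1.0},
        {"Modality": "MR", "BodyPart": "HEAD",  "SliceThickness": 3.0},
        {"Modality": "CT", "BodyPart": "HEAD",  "SliceThickness": 1.0},
        {"Modality": "MR", "BodyPart": "CHEST", "SliceThickness": 5.0},
    ]
    relevant = [1, 0, 1, 0]  # labels from previous search outcomes (illustrative)

    vec = DictVectorizer(sparse=False)
    X = vec.fit_transform(records)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, relevant)

    # Route a new query through the tree instead of scanning images sequentially
    query = {"Modality": "CT", "BodyPart": "CHEST", "SliceThickness": 1.0}
    print(tree.predict(vec.transform([query])))  # 1 -> fetch this image first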

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 204
249 Attention Deficit Hyperactivity Disorder and Criminality: A Psychological Profile of Convicts Serving Prison Sentences

Authors: Agnieszka Nowogrodzka

Abstract:

Objectives: ADHD is a neurodevelopmental disorder whose symptoms are most prominent throughout childhood. In the longer term, these symptoms, as well as the behaviour of the child, the experiences arising from the response of the community to the child's symptoms, and the functioning of the community itself, all contribute to the onset of secondary symptoms and subsequent outcomes of the disorder, such as crime or mental disorders. The purpose of this study is to estimate the prevalence of ADHD among Polish convicts serving a prison sentence. To that end, the study focuses on the relationship between the severity of ADHD and early childhood trauma, family relations, maladaptive cognitive schemas, and mental disorders. It is an attempt to assess the interdependence between ADHD, childhood experiences, and secondary outcomes. Methods: The study enrolled two groups of first-time convicts and repeat offenders aged between 21 and 65; each of the study groups comprised 120 participants, so 240 participants in total took part in the study. Participants were recruited in semi-open penal institutions in Poland (Poznań Custody Suite, Wronki Penal Institution, Iława Penal Institution). The control group comprised 110 men without criminal records aged 21 to 65. The DIVA 5.0 questionnaire was employed to identify the severity of ADHD symptoms. Other questionnaires employed in the course of the study included the Childhood Trauma Questionnaire (CTQ), the Family Adaptability and Cohesion Scale IV (FACES-IV), the Young Schema Questionnaire (YSQ), and the General Health Questionnaire (GHQ-30). Results: The findings of the study in question are currently still being compiled and will be shared during the conference. The findings of a pilot study involving two cohorts of convicts (each numbering 20 men) and a control group (20 men with no criminal records) indicate a significant correlation between ADHD and the experience of early childhood trauma. The severity of ADHD also shows a correlation with the assessment of the functioning of the family, with the subjects assessing the relationships in their families more negatively than the control group. Furthermore, the severity of ADHD is correlated with the maladaptive emotional schemas manifesting in the participants. The findings also show a correlation between selected dimensions and the severity of offenses.

Keywords: ADHD, social impairments, mental disorders, early childhood traumas, criminality

Procedia PDF Downloads 92
248 The Third Level Digital Divide: Millennials and Post-Millennials Online Activities in South Africa

Authors: Ayanda Magida, Brian Armstrong

Abstract:

The study aimed to assess the third level of the digital divide among millennials and post-millennials in South Africa. Millennials are people born from 1981 to 1996, that is, people between the ages of 25 and 40, and post-millennials are people born from 1997 to date. For the study, only post-millennials born between 1997 and 2003 were included, as they were old enough to consent to participation in the study. Data was collected as part of a Ph.D. project that focuses on the relationship between income inequality, the digital divide, and social cohesion in South Africa. The digital divide has three main levels, namely the first, second, and third. The first and second focus on access and usage, respectively. The third-level digital divide can be defined as the differences in the benefits associated with being online. The current paper focuses on the third level: the benefits derived from being online, measured across four domains: economic, educational, social, and personal. The economic benefits include income, employment, and finance-related activities; the social benefits include socializing, belonging, identity, and informal networks; the personal benefits include personal wellbeing and self-actualization. A total of 763 participants completed the survey; 61.3% were post-millennials between the ages of 18 and 24 and 38.6% were millennials between 25 and 40. The majority of the respondents were female (62%), followed by male (34%) and nonbinary (1%) respondents. Most of the respondents were black, followed by white, Indian, and coloured respondents, thus representing the demographics of the country. Most of the respondents had access to the internet and a smartphone. Most indicated that they use laptops (68%) or mobile phones (71%) to access the internet, and 54% access the internet using wireless/Wi-Fi. There were no differences between the millennial and post-millennial economic and educational benefits of being online. However, the post-millennials were more inclined to use the internet for social and personal benefits than the millennials. This could be attributed to many factors, such as age: the post-millennials are still discovering themselves and therefore derive the social and personal benefits associated with being online. The findings confirm studies arguing that younger generations derive more benefits from being online than older generations. Based on the findings, it is evident that the post-millennials do not use the internet solely for social networks and socializing but can also derive economic benefits, such as job seeking, and educational benefits from being online. It can be inferred that there are no significant differences between the two groups, and the third-level digital divide does not seem evident between them, as both have been able to derive meaningful benefits from being online. Further studies should focus on the third-level divide between baby boomers and Generation X.

Keywords: third-level digital divide, millennials, post-millennials, online activities

Procedia PDF Downloads 103
247 Virtual Team Management in Companies and Organizations

Authors: Asghar Zamani, Mostafa Falahmorad

Abstract:

Virtualization is established to combine and use the unique capabilities of employees to increase productivity and agility in providing services regardless of location. Adapting to fast and continuous change and gaining maximum access to human resources are reasons why virtualization is happening. The distance problem is solved by information. Flexibility is the most important feature of virtualization, and information will be the main focus of virtualized companies. In this research, we used the window of opportunity presented by Covid-19 to assess the productivity of companies that had been practicing more virtualized management before Covid-19, in comparison with those that only started planning and developing infrastructure for virtual management after the pandemic crisis occurred. The research process includes the assessment of financial (profitability and customer satisfaction) and behavioral (organizational culture and reluctance to change) metrics. In addition to financial and CRM KPIs, a questionnaire was devised to assess how managers' and employees' attitudes have been changing towards the migration to virtualization. The sample companies and questions were selected by consulting experts in Iran's IT industry. In this article, the conclusion is that companies open to virtualization based on accurate strategic planning, or willing to pay to train their employees for virtualization before the pandemic, are more agile in adapting to change and moving forward in a recession. The forward-looking companies in this research could not only compensate for the short-term loss from the first shock of Covid-19, but could also foresee the new needs of their customers sooner than their competitors, resulting in the need to employ new staff to execute the emerging demands. The findings were aligned with the literature review. The results can be a wake-up call for business owners, especially in developing countries, to be more resilient toward modern management styles instead of continuing with traditional ones.

Keywords: virtual management, virtual organization, competitive advantage, KPI, profit

Procedia PDF Downloads 83
246 Tractography Analysis and the Evolutionary Origin of Schizophrenia

Authors: Mouktafi Amine, Tahiri Asmaa

Abstract:

A substantial body of traditional medical research has been directed at managing and treating mental disorders. At the present time, to our best knowledge, it is believed that a fundamental understanding of the underlying causes of the majority of psychological disorders needs to be explored further to inform early diagnosis, symptom management, and treatment. The emerging field of evolutionary psychology is a promising prospect for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, as a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, represented in the brain connectivity and asymmetry directly linked to humans' higher cognition, in contrast to other primates, which are our closest living representation of the structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This research paper presents the results of tractography analysis using multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, schizophrenia-affected subjects, and primates to illustrate the relevance of the aforementioned brain regions' connectivity and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, overlaid to compute distances and highlight disconnectivity patterns, in conjunction with other fiber tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD), and Radial Diffusivity (RD).

Keywords: tractography, diffusion weighted imaging, schizophrenia, evolutionary psychology

Procedia PDF Downloads 49
245 Analysis of the Variation in Earth Pressure by Addition of Construction Demolition Waste (C&D Waste) in Black Cotton Soil

Authors: Nirav Jadav, M. G. Vanza

Abstract:

Black cotton soils mainly exhibit swelling/shrinkage when they react to moisture variations. This property causes the development of cracks in structures resting on these soils, which makes the structures unstable. Soil stabilization is a technique to enhance the geotechnical characteristics of black cotton soils by changing their properties. Due to rapid growth in the construction industry, a lot of waste material is being generated every day, which poses the problem of its disposal. If the waste material can be utilized for soil stabilization, the problems of its disposal are mitigated. The test results show that the strength of black cotton soil increases with the use of C&D waste material. This study determines various index and engineering properties of the soil and compares them for different proportions of soil and C&D waste. To find the properties of the soil and the C&D waste, various tests are carried out: sieve analysis, hydrometer test, specific gravity test, Atterberg limits tests, standard Proctor test, and triaxial unconsolidated-undrained (UU) test. The study also takes into account the alteration of active and passive earth pressure due to the addition of C&D waste, and presents the efficacy of C&D waste as a stabilizing material to be mixed with backfill soil in retaining walls. The standard Proctor test was conducted at proportions S1W0 (soil = 100%, waste = 0%), S7W1 (soil = 87.5%, waste = 12.5%), S3W1, S5W3, and S1W1. Of these, S5W3 showed optimum results, so this proportion, along with S1W0, was considered for the triaxial UU test. When 37.5% of the soil is replaced by C&D waste, the optimum moisture content (OMC) decreases by 11.48% and remains constant on further increasing the C&D waste in the soil, while the maximum dry density (MDD) increases by 9.27% and reduces on further increasing the C&D waste. The strength tests show that cohesion decreased by 162% and the internal friction angle increased by 49.4% compared to the virgin soil. The active earth pressure decreases, and the passive earth pressure increases, in the S5W3 mixture compared to the S1W0 mixture at the same depth.
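
The reported shift (active pressure down, passive pressure up) follows from Rankine theory once the friction angle increases. A sketch of the coefficient calculation; the friction angles are illustrative and serve only to show the direction of the effect:

    import numpy as np

    def rankine(phi_deg):
        """Rankine active/passive earth pressure coefficients for a level backfill."""
        phi = np.radians(phi_deg)
        ka = np.tan(np.pi / 4 - phi / 2) ** 2   # active coefficient
        kp = np.tan(np.pi / 4 + phi / 2) ** 2   # passive coefficient
        return ka, kp

    for label, phi in [("virgin soil (S1W0)", 20.0), ("soil + C&D waste (S5W3)", 29.9)]:
        ka, kp = rankine(phi)
        print(f"{label}: phi={phi} deg -> Ka={ka:.2f}, Kp={kp:.2f}")
    # A higher friction angle lowers Ka (less active pressure) and raises Kp (more passive resistance)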

Keywords: black cotton soil, construction demolition waste, compaction test, strength test

Procedia PDF Downloads 82
244 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh

Authors: A. A. Sadia, A. Emdad, E. Hossain

Abstract:

The increasing importance of mushrooms as a source of nutrition, health benefits, and even potential cancer treatment has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation, presents an approach to address these challenges, and provides tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, along with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application is developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
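
A sketch of the three-algorithm comparison using scikit-learn, with randomly generated stand-ins for the climate features; the silhouette score is one of several quality metrics such a comparison could use:

    # pip install scikit-learn  (assumed available)
    import numpy as np
    from sklearn.cluster import KMeans, OPTICS, Birch
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Stand-in climate records: columns = temperature (C), relative humidity (%)
    X = np.vstack([rng.normal([17, 80], [2, 3], (50, 2)),    # favourable-season records
                   rng.normal([29, 65], [2, 4], (50, 2))])   # unfavourable-season records

    models = {
        "KMeans": KMeans(n_clusters=2, n_init=10, random_state=0),
        "OPTICS": OPTICS(min_samples=10),
        "BIRCH":  Birch(n_clusters=2),
    }
    for name, model in models.items():
        labels = model.fit_predict(X)
        mask = labels != -1                      # OPTICS marks noise points as -1
        score = silhouette_score(X[mask], labels[mask])
        print(f"{name}: silhouette = {score:.2f}")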

Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web-application

Procedia PDF Downloads 68
243 Diabetes Mellitus and Blood Glucose Variability Increase the 30-Day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for the period from September 2015 to December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3–79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and the recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing in-hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
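
A sketch of the modelling step, assuming the xgboost and scikit-learn packages; the synthetic features only mimic the named predictors (length of stay, glycemia flags, BMI) and are not the patient data:

    # pip install xgboost scikit-learn  (assumed available)
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n = 1036
    # Synthetic stand-ins: length of stay, hyper-/hypoglycemia flags, recipient/donor BMI
    X = np.column_stack([rng.poisson(6, n), rng.integers(0, 2, (n, 2)),
                         rng.normal(28, 5, (n, 2))])
    y = (rng.random(n) < 0.15 + 0.1 * X[:, 1]).astype(int)   # readmission, flag-dependent

    aucs = []
    for seed in range(5):                                    # five bootstrapped partitions
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
        model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
        model.fit(Xtr, ytr)
        aucs.append(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))

    print(f"AUC = {np.mean(aucs):.3f} +/- {1.96 * np.std(aucs):.3f}")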

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 90
242 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity and time of the work. Data/images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous, expanding rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm among those available that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. Since the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information on the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project of classifying satellite images, the algorithms ANN and CNN are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
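
A sketch of a small CNN of the kind compared here, assuming TensorFlow/Keras; SAT-4 patches are 28x28 pixels with 4 spectral bands (R, G, B, NIR) and 4 classes, but the training data below are random placeholders:

    # pip install tensorflow  (assumed available)
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # SAT-4 patches are 28x28 with 4 spectral bands and 4 land-cover classes
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 4)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(4, activation="softmax"),   # one output unit per class
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Random placeholder data standing in for SAT-4 training patches
    x = np.random.rand(64, 28, 28, 4).astype("float32")
    y = np.random.randint(0, 4, 64)
    model.fit(x, y, epochs=1, batch_size=16, verbose=0)
    print(model.evaluate(x, y, verbose=0))       # [loss, accuracy]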

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 159
241 Evaluation of the Ability of COVID-19 Infected Sera to Induce NETosis Using an Ex-Vivo NETosis Monitoring Tool

Authors: Constant Gillot, Pauline Michaux, Julien Favresse, Jean-Michel Dogné, Jonathan Douxfils

Abstract:

Introduction: NETosis has emerged as a crucial yet paradoxical factor in severe COVID-19 cases. While neutrophil extracellular traps (NETs) help contain and eliminate viral particles, excessive NET formation can lead to hyperinflammation, exacerbating tissue damage and acute respiratory distress syndrome (ARDS). Aims: This study evaluates the relationship between COVID-19-infected sera and NETosis using an ex-vivo model. Methods: Sera from 8 post-admission COVID-19 patients receiving corticoid therapy were used to induce NETosis in neutrophils from a healthy donor. NET formation was tracked using fluorescent markers for DNA and neutrophil elastase (NE) every 2 minutes for 8 hours. The results were expressed as a percentage of DNA/NE released over time. Key metrics, including T50 (time to 50% release) and AUC (area under the curve, representing total NETosis potential), were calculated. A 27-cytokine screening kit was used to assess the cytokine composition of the sera. Results: COVID-19 sera induced NETosis in line with their cytokine profile. The AUC of NE and DNA release decreased with time following corticoid therapy, showing a significant reduction in 6 of the 8 patients (p<0.05). T50 also decreased in parallel with the AUC for both markers. Cytokine concentrations decreased with time after therapy administration, and the concentrations of 14 cytokines correlated with NE release. Conclusion: This ex-vivo model successfully demonstrated the induction of NETosis by COVID-19 sera using two markers. A clear decrease in NETosis potential was observed over time with glucocorticoid therapy. This model can be a valuable tool for monitoring NETosis and investigating potential NETosis inducers and inhibitors.
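
A sketch of how the two key metrics can be derived from the 8-hour fluorescence time course; the release curve below is synthetic, not patient data:

    import numpy as np

    # Time points every 2 minutes over 8 hours, release expressed as % of maximum
    t = np.arange(0, 481, 2)                         # minutes
    release = 100 / (1 + np.exp(-(t - 180) / 40))    # synthetic sigmoidal NE/DNA release

    auc = np.trapz(release, t)                       # total NETosis potential
    t50 = np.interp(50.0, release, t)                # time to 50% release

    print(f"AUC = {auc:.0f} percent-minutes, T50 = {t50:.0f} min")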

Keywords: NETosis, COVID-19, cytokine storm, biomarkers

Procedia PDF Downloads 19
240 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory

Authors: Roy. H. A. Lindelauf

Abstract:

Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdictions and defenses against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either a) describes the CKC with respect to one or more specific cyberattacks or b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate their resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied abundantly in the operations research and game theory domains; think of resource-limited interdiction actions that maximally delay the completion time of a weapons project, for instance. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and there are players, each of whom is capable of executing a subset of the tasks. Additionally, task interdependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where, and how to monitor) against an attacker scheduling a CKC. This study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
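
For a small covert-project game, the Shapley value, one of the cooperative solution concepts that can be compared, is computable by enumerating player orderings. The characteristic function below is a toy multi-glove-style example, not the presentation's model:

    from itertools import permutations

    players = ("A", "B", "C")   # toy agents in the attack project

    def v(coalition):
        """Toy characteristic function: one unit of value per matched 'glove' pair,
        where A holds left gloves and B, C each hold a right glove."""
        left = 1 if "A" in coalition else 0
        right = sum(p in coalition for p in ("B", "C"))
        return min(left, right)

    shapley = dict.fromkeys(players, 0.0)
    perms = list(permutations(players))
    for order in perms:
        seen = set()
        for p in order:
            shapley[p] += (v(seen | {p}) - v(seen)) / len(perms)  # marginal contribution
            seen.add(p)

    print(shapley)  # A gets the largest share: its gloves are the scarce resource
    # A defender would direct the most monitoring at the agent with the highest value.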

Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques

Procedia PDF Downloads 140
239 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions considerably drier than the usual water content or availability. Drought is often assessed using indices associated with meteorological, agricultural, and hydrological phenomena. In order to handle drought disasters effectively, it is essential to accurately determine the kind, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been created in the literature to evaluate drought disasters, encompassing a range of factors and variables. Some utilise solely hydrometeorological drivers, while others employ remote sensing technology, and some incorporate a combination of both. Comprehending the entire notion of drought and taking into account drought indices along with their calculation processes is crucial for researchers in this discipline. Examining several drought metrics across different studies requires additional time and concentration; hence, it is crucial to conduct a thorough examination of the approaches used in drought indices in order to identify the most straightforward approach and avoid discrepancies across scientific studies. For practical application in the real world, categorizing indices by their usage in meteorological, agricultural, and hydrological phenomena can help researchers maximize their efficiency: users can explore different indices at the same time, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.
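
As one concrete example of a meteorological index covered by such reviews, the Standardized Precipitation Index (SPI) fits a gamma distribution to a precipitation series and maps the cumulative probabilities onto a standard normal scale. A sketch assuming scipy, with an illustrative rainfall series (a full SPI implementation would also handle zero-precipitation months):

    # pip install scipy  (assumed available)
    import numpy as np
    from scipy import stats

    # Illustrative monthly precipitation totals (mm) for one calendar month across years
    precip = np.array([62.0, 45.0, 88.0, 30.0, 71.0, 55.0, 19.0, 95.0, 40.0, 66.0])

    # Fit a gamma distribution (location fixed at 0, as is conventional for SPI)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)

    spi = stats.norm.ppf(cdf)     # standard-normal quantile of each year's rainfall
    print(np.round(spi, 2))       # values below about -1 indicate drought conditions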

Keywords: drought classification, drought severity, drought indices, agriculture, hydrological

Procedia PDF Downloads 41
238 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
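
The 1-median step can be stated compactly: choose the candidate node that minimizes the total demand-weighted Euclidean distance to all clients. A brute-force sketch with made-up coordinates and demands, not the Ontario data:

    import numpy as np

    # Made-up demand nodes: (x, y) coordinates in km and weekly platelet demand
    coords = np.array([[0, 0], [40, 10], [15, 35], [60, 50], [25, 5]], dtype=float)
    demand = np.array([120, 80, 60, 150, 90], dtype=float)

    def weighted_cost(facility_idx):
        """Total demand-weighted Euclidean distance if the bank opens at one node."""
        d = np.linalg.norm(coords - coords[facility_idx], axis=1)
        return float(demand @ d)

    costs = [weighted_cost(i) for i in range(len(coords))]
    best = int(np.argmin(costs))
    print(f"1-median at node {best}, total weighted distance = {costs[best]:.0f}")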

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 506