Search results for: particle union optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8128

568 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study in China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges towards suitable human settlements and provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considered the coupling effect among indicators was used to determine the weights of indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) Human settlements suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem in China exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016. For subsystems, it decreased for the residence system, whereas it increased for the economy system. (4) The suitability of the natural subsystem has become a limiting factor for the improvement of human settlements suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can be helpful to support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society and residence context.
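The weighting step described above can be illustrated with a short sketch. The example below uses the SALib library's extended FAST (eFAST) implementation as a stand-in for the authors' algorithm; the indicator names, the toy model, and the choice of normalizing total-order indices into weights are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: deriving indicator weights from eFAST total-order
# sensitivity indices, using SALib as a stand-in for the authors' algorithm.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Three illustrative indicators (names are placeholders, not from the study).
problem = {
    "num_vars": 3,
    "names": ["nature", "economy", "residence"],
    "bounds": [[0.0, 1.0]] * 3,
}

def hssi_model(x):
    # Placeholder evaluation function with a simple coupling term.
    return 0.4 * x[0] + 0.3 * x[1] + 0.3 * x[2] + 0.2 * x[0] * x[2]

X = fast_sampler.sample(problem, 1000)           # eFAST sample design
Y = np.apply_along_axis(hssi_model, 1, X)        # model responses
Si = fast.analyze(problem, Y)                    # first- and total-order indices

weights = np.array(Si["ST"]) / np.sum(Si["ST"])  # normalize total effects to weights
print(dict(zip(problem["names"], np.round(weights, 3))))
```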

Keywords: human settlements, suitability evaluation, extended fourier amplitude, human settlement suitability

Procedia PDF Downloads 79
567 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

This paper focuses on the application of real-time semantic segmentation technology in complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to more efficiently extract features within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
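As a rough illustration of the resampling idea behind the Guided Image Reconstruction Module, the sketch below splits a high-resolution image into a set of low-resolution images with a pixel-unshuffle operation in PyTorch; this is one plausible reading of the module, not the authors' implementation, and all sizes are placeholders.

```python
# Minimal sketch of resampling a high-resolution image into a set of
# low-resolution images (one plausible reading of the module described above).
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 512, 1024)            # a high-resolution road image (N, C, H, W)
low_res_set = F.pixel_unshuffle(x, 2)       # -> (1, 12, 256, 512): 4 low-res copies per channel
print(low_res_set.shape)

# A lightweight branch can then extract features from the low-resolution set
# at a quarter of the spatial cost of the original image.
conv = torch.nn.Conv2d(12, 64, kernel_size=3, padding=1)
features = conv(low_res_set)
print(features.shape)                       # (1, 64, 256, 512)
```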

Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition

Procedia PDF Downloads 20
566 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information's dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to uncover global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
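A minimal sketch of the label-propagation initialization is given below: observed infection labels are spread over a toy graph to produce a per-node infected-state density that could serve as the initial state vector. The adjacency matrix, mixing coefficient, and iteration count are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch: spreading observed infection labels over a graph to
# estimate an infected-state density per node (a simplified stand-in for the
# ODESI initialization, not the authors' code).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # toy adjacency matrix
labels = np.array([1.0, 0.0, 1.0, 0.0])        # observed infected states

D_inv = np.diag(1.0 / A.sum(axis=1))
P = D_inv @ A                                  # row-normalized propagation matrix

density = labels.copy()
for _ in range(10):                            # iterate propagation, keep observed labels anchored
    density = 0.5 * labels + 0.5 * P @ density
print(np.round(density, 3))                    # per-node state vector used as initial condition
```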

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 18
565 Femicide: The Political and Social Blind Spot in the Legal and Welfare State of Germany

Authors: Kristina F. Wolff

Abstract:

Background: In the Federal Republic of Germany, violence against women is deeply embedded in society. Germany is, as of March 2020, the most populous member state of the European Union with 83.2 million inhabitants and, although more than half of its inhabitants are women, gender equality was not enshrined in the Basic Law until 1957. Women have only been allowed to enter paid employment without their husband's consent since 1977, and marital rape has only been prosecutable since 1997. While the lack of equality between men and women is named in the preamble of the Istanbul Convention as the cause of gender-specific, structural, traditional violence against women, Germany continues to sink on the latest Gender Equality Index. According to Police Crime Statistics (PCS), women are significantly more often victims of lethal violence emanating from men than vice versa. The PCS, which since 2015 also collect gender-specific data on violent crimes, are kept by the Federal Criminal Police Office, but without taking into account the relevant criteria for targeted prevention, such as the history of violence of the perpetrator/killer, weapon, motivation, etc. Institutions such as EIGE or the World Health Organization have been asking Germany for years, in vain, for comparable data on violence against women in order to gain an overview or to develop cross-border synergies. The PCS are the only official data collection on violence against women. All players involved depend on this data set, which is published only in November of the following year and is thus already completely outdated at the time of publication. In order to combat German femicides causally, purposefully and efficiently, evidence-based data was urgently needed. Methodology: Beginning in January 2019, a database was set up that now tracks more than 600 German femicides, broken down by more than 100 crime-related individual criteria, which in turn go far beyond the official PCS. These data are evaluated on the one hand by daily media research, and on the other hand by case-specific inquiries at the respective public prosecutor's offices and courts nationwide. This quantitative long-term study covers domestic violence as well as a variety of different types of gender-specific, lethal violence, including, for example, femicides committed by German citizens abroad. Additionally, alcohol/narcotic and/or drug abuse, infanticides, and the gender aspect in the judiciary are also considered. Results: Since November 2020, evidence-based data from a scientific survey have been available for the first time in Germany, supplementing the rudimentary picture of reality provided by the PCS with a number of relevant parameters. The most important goal of the study is to identify "red flags" that enable general preventive awareness, that serve increasingly precise hazard assessment in acute hazard situations, and from which concrete instructions for action can be identified. Already at a very early stage of the study, it could be proven that in more than half of all femicides with a sexual perpetrator/victim constellation there was an age difference of five years or more. Summary: Without reliable data and an understanding of the nature and extent, cause and effect, it is impossible to sustainably curb violence against girls and women, which increasingly often culminates in femicide.
In Germany, valid data from a scientific survey has been available for the first time since November 2020, supplementing the rudimentary reality picture of the official and, to date, sole crime statistics with several relevant parameters. The basic research provides insights into geo-concentration, monthly peaks, and the modus operandi of male violent excesses. A significant increase in child homicides in the course of femicides, and/or child homicides as an instrument of violence against the mother, could be proven, as well as an elevated danger for affected persons where the age difference is five years or more. In view of the steadily increasing wave of violence against women, these study results are an eminent contribution to the preventive containment of German femicides.

Keywords: femicide, violence against women, gender specific data, rule of law, Istanbul Convention, gender equality, gender based violence

Procedia PDF Downloads 89
564 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges

Authors: Mohamad Mahdi Namdari

Abstract:

In today's rapidly changing and evolving business competition, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, has provided extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still distant from effectively leveraging this technology, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that the general public's and even the managerial and academic communities' lack of knowledge of this technology has caused the managerial structure to lag behind the progress and development of artificial intelligence. Additionally, high costs, fear of change and employee resistance, lack of quality data production processes, the need for updating structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles. This research aims to leverage AI capabilities to provide a framework for enhancing market and sales network performance for managers. The results of this research can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.

Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing

Procedia PDF Downloads 41
563 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework is a combination of mean-field space homogenization, based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and solved once for each macro time step, whereas the latter problem is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the micro- and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic corrector. The proposal is implemented for an extended Mori-Tanaka scheme and verified against finite element simulations of representative volume elements for a number of polymer composite materials subjected to large numbers of cycles.
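For reference, a generic statement of the two-chronological-scale setup assumed in such time homogenization approaches is given below; the notation is standard and not taken verbatim from the paper.

```latex
% Macro-chronological time t and micro-chronological time \tau = t/\xi, with
% \xi \ll 1 the ratio of the fast loading period to the slow characteristic time.
\tau = \frac{t}{\xi}, \qquad \xi \ll 1
% Asymptotic expansion of any field (e.g. displacement u) in powers of \xi:
u^{\xi}(x,t) = u_0(x,t,\tau) + \xi\, u_1(x,t,\tau) + \xi^2 u_2(x,t,\tau) + \dots
% The total time derivative then splits into slow and fast contributions:
\dot{u}^{\xi} = \partial_t u + \frac{1}{\xi}\,\partial_{\tau} u
```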

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 368
562 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
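A minimal sketch of this pipeline, using standard OpenCV building blocks (CLAHE, a GMM-based background subtractor, and morphological erosion/dilation) as stand-ins for the authors' implementation, is given below; the file name and parameter values are placeholders.

```python
# Minimal sketch of the described pipeline with OpenCV building blocks.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("surveillance.avi")       # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Mitigate illumination changes channel by channel with CLAHE.
    channels = [clahe.apply(c) for c in cv2.split(frame)]
    enhanced = cv2.merge(channels)
    # GMM background/foreground modeling (the paper uses K=5 Gaussians per channel).
    mask = bg_model.apply(enhanced)
    # Morphological erosion then dilation to remove noise from the binary mask.
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)
cap.release()
```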

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 255
561 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’ that was brought about by the new wave of reforms, namely ‘LPG’ in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation is leading governments to qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications/technologies have enormous potential for impacting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es (effective, efficient, easy, empower, and equity) of e-governance and the six Rs (reduce, reuse, recycle, recover, redesign and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has brought a new set of challenges, like the digital divide, e-illiteracy, and the technological divide, and confronts governments with problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it would be essential to bring in a rightful mixture of technological and humanistic interventions for addressing the above issues. This is on account of the reason that technology lacks an emotional quotient, and the administration does not work like technology. Both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration on the basis of the synthesis of two case studies undertaken from two diverse fields of administration and present a future framework of the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 98
560 Optimizing Electric Vehicle Charging Networks with Dynamic Pricing and Demand Elasticity

Authors: Chiao-Yi Chen, Dung-Ying Lin

Abstract:

With the growing awareness of environmental protection and the implementation of government carbon reduction policies, the number of electric vehicles (EVs) has rapidly increased, leading to a surge in charging demand and imposing significant challenges on the existing power grid’s capacity. Traditional urban power grid planning has not adequately accounted for the additional load generated by EV charging, which often strains the infrastructure. This study aims to optimize grid operation and load management by dynamically adjusting EV charging prices based on real-time electricity supply and demand, leveraging consumer demand elasticity to enhance system efficiency. This study uniquely addresses the intricate interplay between urban traffic patterns and power grid dynamics in the context of electric vehicle (EV) adoption. By integrating Hsinchu City's road network with the IEEE 33-bus system, the research creates a comprehensive model that captures both the spatial and temporal aspects of EV charging demand. This approach allows for a nuanced analysis of how traffic flow directly influences the load distribution across the power grid. The strategic placement of charging stations at key nodes within the IEEE 33-bus system, informed by actual road traffic data, enables a realistic simulation of the dynamic relationship between vehicle movement and energy consumption. This integration of transportation and energy systems provides a holistic view of the challenges and opportunities in urban EV infrastructure planning, highlighting the critical need for solutions that can adapt to the ever-changing interplay between traffic patterns and grid capacity. The proposed dynamic pricing strategy effectively reduces peak charging loads, enhances the operational efficiency of charging stations, and maximizes operator profits, all while ensuring grid stability. These findings provide practical insights and a valuable framework for optimizing EV charging infrastructure and policies in future smart cities, contributing to more resilient and sustainable urban energy systems.
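The demand-elasticity mechanism that the pricing strategy relies on can be illustrated with a toy constant-elasticity model, as sketched below; the load profile, tariff, and elasticity value are illustrative assumptions, not figures from the study.

```python
# Toy illustration of how a constant-elasticity demand response shifts charging
# load away from peak hours when prices rise (values are illustrative only).
import numpy as np

base_demand = np.array([40, 80, 120, 150, 90, 50], dtype=float)  # kWh per hour at base price
base_price = 0.15                                                 # $/kWh reference tariff
elasticity = -0.4                                                 # price elasticity of charging demand

# Raise prices in proportion to how loaded each hour is relative to the average.
prices = base_price * (base_demand / base_demand.mean()) ** 0.5
demand = base_demand * (prices / base_price) ** elasticity

print(np.round(prices, 3))   # higher tariffs at peak hours
print(np.round(demand, 1))   # peak load reduced, off-peak load increased
```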

Keywords: dynamic pricing, demand elasticity, EV charging, grid load balancing, optimization

Procedia PDF Downloads 17
559 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics

Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman

Abstract:

Urinary tract infections (UTIs) are considered to be the most common bacterial infections worldwide, caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, unfortunately, most bacteria have already developed resistance to the majority of commonly available antibiotics. Therefore, it is crucial to identify the infecting bacteria and to determine their susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time consuming, requiring ~48 hours for determining bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required for determining both the infecting bacterium at the species level and its susceptibility to antibiotics. Fourier-Transform Infrared (FTIR) spectroscopy is well known as a sensitive and rapid method, which can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. For this goal, 1600 different E. coli isolates were isolated from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
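A hedged sketch of the classification stage is shown below: a Random Forest trained on spectra, with synthetic arrays standing in for the 1600-isolate FTIR dataset; feature dimensions and labels are placeholders.

```python
# Sketch: classifying bacterial FTIR spectra with a Random Forest,
# mirroring the kind of model the authors report (data are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 900))           # 600 spectra x 900 wavenumber points (synthetic)
y = rng.integers(0, 2, size=600)          # 0 = susceptible, 1 = resistant (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```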

Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning

Procedia PDF Downloads 169
558 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction

Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand

Abstract:

Food is the source of energy and growth through the breakdown of its vital components and plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and the origin of many such compounds is, directly or indirectly, algae. Algae are an enormous source of polysaccharides and have gained much interest for human well-being. In this study, algae biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids, antioxidant activity, and polyphenolic contents. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics, and on the extracts' antioxidant activity. Here we employ the Response Surface Methodology for process optimization. The influence of the independent parameters on each dependent variable is determined through Analysis of Variance. Our results show that ultrasound-assisted extraction (UAE) for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae. This study is among the first attempts to optimize the effects of ultrasonic-assisted extraction, ultrasonic devices, and solvents with a deep natural eutectic point and investigate their application in bioactive compound extraction from algae. We also study the future perspective of ultrasound technology, which helps to understand the complex mechanism of ultrasonic-assisted extraction and further guide its application to algae.
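The response-surface step can be sketched as a quadratic fit of one response against the three extraction parameters, as below; the design points and response function are synthetic placeholders, not the study's measurements.

```python
# Illustrative response-surface fit: a quadratic model of total phenolic content
# as a function of biomass-to-solvent ratio, temperature, and time (synthetic data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Factors: biomass-to-solvent ratio (g/mL), temperature (deg C), time (min).
X = rng.uniform([0.02, 30.0, 10.0], [0.10, 60.0, 60.0], size=(30, 3))
# Synthetic total phenolic response with an interior optimum plus noise.
y = 50 - 2000 * (X[:, 0] - 0.06) ** 2 - 0.01 * (X[:, 1] - 45) ** 2 \
    + 0.1 * X[:, 2] + rng.normal(0, 1.0, 30)

poly = PolynomialFeatures(degree=2)          # full quadratic response surface
Xq = poly.fit_transform(X)
model = LinearRegression().fit(Xq, y)
print("R^2 of the fitted surface:", round(model.score(Xq, y), 3))
```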

Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids

Procedia PDF Downloads 177
557 Circadian Rhythmic Expression of Choroid Plexus Membrane Transport Proteins

Authors: Rafael Mineiro, André Furtado, Isabel Gonçalves, Cecília Santos, Telma Quintela

Abstract:

The choroid plexus (CP) epithelial cells form the blood-cerebrospinal fluid barrier. This barrier is highly important for brain protection by physically separating the blood from the cerebrospinal fluid, controlling the trafficking of molecules, including therapeutic drugs, from blood to the brain. This control is achieved by tight junctions between epithelial cells, membrane receptors, and transport proteins from the solute carrier and ATP-binding cassette superfamilies on the choroid plexus epithelial cell membrane. Previous research by our group showed a functional molecular clock in the CP. The key findings included a rhythmic expression of Bmal1, Per2, and Cry2 in female rat CP, and a rhythmic expression of Cry2 and Per2 in male rat CP. Furthermore, in cultured rat CP epithelial cells we have already shown that 17β-estradiol upregulates the expression of Bmal1 and Per1, where the Per1 and Per2 upregulation was abrogated in the presence of the estrogen receptor antagonist ICI. These findings, together with the fact that the CP produces robust rhythms, prompted us to investigate the impact of sex hormones and circadian rhythms on CP drug transporter expression, which is a step towards the development and optimization of therapeutic strategies for efficiently delivering drugs to the brain. For that, we analyzed the circadian rhythmicity of the Abcb1, Abcc2, Abcc4, Abcg2, and Oat3 drug transporters in the CP of male and female rats. This analysis was performed by assessing the gene expression of the mentioned transporters at 4 time points by RT-qPCR, and the presence of rhythms was evaluated with the CircWave software. Our findings showed a rhythmic expression of Abcc1 in the CP of male rats, of Abcg2 in female rats, and of Abcc4 and Oat3 in both male and female rats, with an almost antiphasic pattern between male and female rats for Abcc4. In conclusion, these findings, translated to a functional point of view, may account for daily variations in brain permeability for several therapeutic drugs, making our findings important data for the future establishment and development of therapeutic strategies according to time of day.
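As an illustration of the rhythmicity test, the sketch below fits a 24 h cosinor model to four-time-point expression data by least squares; this is a simplified stand-in for the CircWave analysis, and the expression values are hypothetical.

```python
# Simple cosinor-style check for 24 h rhythmicity in 4-time-point qPCR data
# (an illustrative stand-in for the CircWave analysis used in the study).
import numpy as np

times = np.array([0.0, 6.0, 12.0, 18.0])          # zeitgeber time (h) of the four samplings
expr = np.array([1.0, 1.8, 1.1, 0.4])             # hypothetical relative expression values

w = 2 * np.pi / 24.0
design = np.column_stack([np.ones_like(times), np.cos(w * times), np.sin(w * times)])
mesor, a, b = np.linalg.lstsq(design, expr, rcond=None)[0]   # least-squares cosinor fit

amplitude = np.hypot(a, b)
acrophase_h = (np.arctan2(b, a) % (2 * np.pi)) / w           # estimated time of peak expression
print(round(mesor, 2), round(amplitude, 2), round(acrophase_h, 1))
```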

Keywords: choroid plexus, circadian rhythm, membrane transporters, sex hormones

Procedia PDF Downloads 11
556 Design of the Ice Rink of the Future

Authors: Carine Muster, Prina Howald Erika

Abstract:

Today's ice rinks are important energy consumers for the production and maintenance of ice. At the same time, users demand that the other rooms be tempered or heated. The building complex must therefore provide both cooled and heated zones, which makes carbon-zero ice rinks difficult to achieve. The study provides an analysis of how the civil engineering sector can significantly impact minimizing greenhouse gas emissions and optimizing synergies across an entire ice rink complex. The analysis focused on three distinct aspects: the layout, including the volumetric layout of the premises present in an ice rink; the materials chosen, which can potentially use the most ecological structural approach; and the construction methods, based on innovative solutions to reduce the carbon footprint. The first aspect shows that the organization of the interior volumes and defining the shape of the rink play a significant role. Its layout makes the use and operation of the premises as efficient as possible, thanks to the differentiation between heated and cooled volumes, while minimising heat loss between the different rooms. The sprayed concrete method, which is still little known, proves that it is possible to achieve the strength of traditional concrete for the structural aspect of the load-bearing and non-load-bearing walls of the ice rink by using materials excavated from the construction site, providing a more ecological and sustainable solution. The installation of an empty sanitary space underneath the ice floor, making it independent of the rest of the structure, provides a natural insulating layer, preventing the transfer of cold to the rest of the structure and reducing energy losses. The addition of active pipes as part of the foundation of the ice floor, coupled with a suitable system, gives warmth in the winter and storage in the summer; this is all possible thanks to the natural heat in the ground. In conclusion, this study provides construction recommendations for future ice rinks with a significantly reduced energy demand, using some simple preliminary design concepts. By optimizing the layout, materials, and construction methods of ice rinks, the civil engineering sector can play a key role in reducing greenhouse gas emissions and promoting sustainability.

Keywords: climate change, energy optimization, green building, sustainability

Procedia PDF Downloads 65
555 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g., to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e., they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e., the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that are above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e., it favours precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g., in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
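A minimal sketch of the scoring-and-clustering idea follows: candidate record pairs are scored with simple metadata rules plus a string-similarity measure, and pairs above a threshold are merged into connected components with union-find. The rules, weights, and threshold are illustrative, not the ones tuned in the study.

```python
# Sketch: rule-based pair scoring followed by single-linkage clustering via union-find.
from difflib import SequenceMatcher

refs = [
    {"id": 0, "year": "2004", "journal": "Nature", "title": "gene expression atlas of the brain"},
    {"id": 1, "year": "2004", "journal": "Nature", "title": "a gene expression atlas of the brain"},
    {"id": 2, "year": "1998", "journal": "Science", "title": "quantum dots in photonics"},
]

def pair_score(a, b):
    score = 0.0
    score += 2 if a["year"] == b["year"] else 0          # rule: same publication year
    score += 2 if a["journal"] == b["journal"] else 0    # rule: same journal string
    score += 4 * SequenceMatcher(None, a["title"], b["title"]).ratio()
    return score

parent = list(range(len(refs)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

THRESHOLD = 7.0
for i in range(len(refs)):
    for j in range(i + 1, len(refs)):
        if pair_score(refs[i], refs[j]) >= THRESHOLD:    # single linkage: one strong pair joins clusters
            parent[find(i)] = find(j)

clusters = {}
for r in refs:
    clusters.setdefault(find(r["id"]), []).append(r["id"])
print(list(clusters.values()))                           # e.g. [[0, 1], [2]]
```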

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 193
554 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper

Authors: Nicolas Bae, Theodore L. Karavasilis

Abstract:

Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes to minimize direct as well as indirect damage to society. The essential functions housed in tall building structures in metropolitan regions make their failure severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, the seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, which is one type of hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loads; and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey high steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without a damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be affected by wind load in their normal state and also by earthquake load after yielding of the steel plates. The modeling of the prototype tall building will be conducted using the OpenSees software. Three types of models were used to verify the performance of the damper (MRF, MRF with visco-elastic damper, MRF with visco-plastic damper). A set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous or visco-elastic dampers is substantially more effective in reducing inelastic deformation, such as roof displacement, maximum story drift, and roof velocity, compared to the MRF alone.
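As a back-of-the-envelope illustration of why supplemental viscous damping reduces drift (not a substitute for the OpenSees NLRHA used in the study), the sketch below integrates a single-degree-of-freedom frame under a resonant base excitation at two damping ratios; all parameter values are assumed for illustration.

```python
# Toy SDOF response to base excitation with and without supplemental viscous
# damping, using a simple explicit time-stepping scheme (illustrative only).
import numpy as np

m, k = 2.0e5, 8.0e6                      # kg, N/m (illustrative storey properties)
wn = np.sqrt(k / m)
dt = 0.002
t = np.arange(0.0, 20.0, dt)
ag = 0.3 * 9.81 * np.sin(wn * t) * np.exp(-0.1 * t)   # resonant, decaying base acceleration

def peak_drift(zeta):
    c = 2.0 * zeta * np.sqrt(k * m)      # viscous damping coefficient for damping ratio zeta
    u = np.zeros_like(t)
    v = 0.0
    for i in range(1, len(t)):
        a = (-m * ag[i] - c * v - k * u[i - 1]) / m
        v += a * dt
        u[i] = u[i - 1] + v * dt
    return float(np.max(np.abs(u)))

print("peak drift, 2% damping :", round(peak_drift(0.02), 4), "m")
print("peak drift, 20% damping:", round(peak_drift(0.20), 4), "m")   # with added viscous damper
```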

Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design

Procedia PDF Downloads 189
553 Entry, Descent and Landing System Design and Analysis of a Small Platform in Mars Environment

Authors: Daniele Calvi, Loris Franchi, Sabrina Corpino

Abstract:

Thanks to the latest Mars missions, planetary exploration has made enormous strides over the past ten years, increasing the interest of the scientific community and beyond. These missions aim to fulfill many complex operations which are of paramount importance to mission success. Among these, a special mention goes to the Entry, Descent and Landing (EDL) functions, which require a dedicated system to overcome all the obstacles of these critical phases. The general objective of the system is to safely bring the spacecraft from orbital conditions to rest on the planet surface, following the designed mission profile. For this reason, this work aims to develop a simulation tool integrating the re-entry trajectory algorithm in order to support the EDL design during the preliminary phase of the mission. This tool was used on a reference unmanned mission, whose objective is finding bio-evidence and bio-hazards on the Martian (sub)surface in order to support future manned missions. Regarding the concept of operations (CONOPS) of the mission, it concerns the use of Space Penetrator Systems (SPS) that will descend to the Mars surface following a ballistic fall and will penetrate the ground after the impact with the surface (to a depth of around 50 to 300 cm). Each SPS shall contain all the instrumentation required to sample and make the required analyses. Respecting the low-cost and low-mass requirements, as a result of the tool, an Entry, Descent and Impact (EDI) system based on an inflatable structure has been designed. Hence, a solution could be the one chosen by the Finnish Meteorological Institute in the Mars Met-Net mission, using an inflatable Thermal Protection System (TPS) called the Inflatable Braking Unit (IBU) and an additional inflatable decelerator. Consequently, there are three configurations during the EDI: at an altitude of 125 km the IBU is inflated at a speed of 5.5 km/s; at an altitude of 16 km the IBU is jettisoned and an Additional Inflatable Braking Unit (AIBU) is inflated; lastly, at about 13 km, the SPS is ejected from the AIBU and it impacts the Martian surface. Since all parameters are evaluated, it is possible to confirm that the chosen EDI system and strategy verify the requirements of the mission.
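A simplified ballistic-entry sketch in the spirit of the described simulation tool is shown below, using an exponential Martian atmosphere and planar gravity-turn equations; the ballistic coefficient, atmosphere constants, and entry state are illustrative assumptions.

```python
# Simplified planar ballistic entry at Mars: exponential atmosphere, constant
# ballistic coefficient, no lift (parameter values are illustrative only).
import numpy as np

g, rho0, H = 3.71, 0.020, 11100.0        # gravity (m/s^2), surface density (kg/m^3), scale height (m)
beta = 60.0                              # ballistic coefficient m/(Cd*A) in kg/m^2

h, v, gamma = 125e3, 5500.0, np.radians(-12.0)   # altitude, speed, flight-path angle at entry
dt = 0.1
while h > 0.0:
    rho = rho0 * np.exp(-h / H)
    drag = 0.5 * rho * v**2 / beta                # deceleration from aerodynamic drag
    v += (-drag - g * np.sin(gamma)) * dt
    gamma += (-g * np.cos(gamma) / v) * dt        # gravity turn steepens the trajectory
    h += v * np.sin(gamma) * dt

print("ballistic impact velocity ~", round(v, 1), "m/s")
```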

Keywords: EDL, Mars, mission, SPS, TPS

Procedia PDF Downloads 166
552 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, their high oxygen content makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active/selective processes for biomass transformations to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked a lot of interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which was then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHx fragments (x = 1, 2) in the TPD. The assignments for the experimental IR peaks were made using visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted in the case of Ni(111), whereas a significant difference was found for acetate adsorption.
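For reference, the adsorption energies underlying such a DFT comparison are commonly defined as below; the sign convention (more negative meaning stronger binding) is assumed here and may differ from the authors' convention.

```latex
% Standard definition of the adsorption energy used to compare acetic acid
% binding on Pt(111) and Ni(111); more negative values indicate stronger binding.
E_{\mathrm{ads}} = E_{\mathrm{slab+CH_3COOH}} - E_{\mathrm{slab}} - E_{\mathrm{CH_3COOH(gas)}}
```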

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature programmed desorption, density functional theory

Procedia PDF Downloads 105
551 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in our body since it is responsible for the majority of actions such as vision, memory, etc. However, different diseases such as Alzheimer's and tumors can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and therefore prescribe the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and most widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and decide on the appropriate and needed treatment. Diverse image processing methods are also proposed to help doctors in identifying and analyzing the tumor. In fact, a large number of Computer Aided Diagnostic (CAD) tools including developed image processing algorithms are proposed and exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction is proposed. This technique is based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property; that is why it was applied on MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it necessitates huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified into either a benign or malignant tumor using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB guide user interface.
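A compact sketch of the DWT-PCA-SVM pipeline, written in Python with synthetic images standing in for the MRI data, is shown below; the decomposition level, wavelet, and component counts are illustrative choices, not those of the developed MATLAB tool.

```python
# Illustrative DWT features -> PCA reduction -> SVM classification pipeline
# (synthetic images stand in for the MRI dataset).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64, 64))          # synthetic "MRI" slices
labels = rng.integers(0, 2, size=100)            # 0 = benign, 1 = malignant (synthetic)

def dwt_features(img, level=2):
    coeffs = pywt.wavedec2(img, "db4", level=level)
    # Use the level-2 approximation sub-band as a compact feature vector.
    return coeffs[0].ravel()

X = np.array([dwt_features(img) for img in images])
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```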

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 246
550 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied for modelling dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
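The thinning step can be illustrated with morphological skeletonization of a water mask, as in the sketch below; scikit-image is used as an illustrative stand-in for the tools applied in the study, and the mask is a toy example.

```python
# Sketch of the thinning step: reduce a persistent water-body mask to river
# centre lines with morphological skeletonization.
import numpy as np
from skimage.morphology import skeletonize

water_mask = np.zeros((60, 60), dtype=bool)
water_mask[20:25, :] = True                    # a toy 5-pixel-wide "river"
water_mask[:, 30:34] = True                    # a joining tributary

centreline = skeletonize(water_mask)           # 1-pixel-wide network
print(int(centreline.sum()), "centre-line pixels from", int(water_mask.sum()), "water pixels")
# The centre-line raster would next be vectorized and built into a geometric network.
```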

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 137
549 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
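A minimal sketch of the model-comparison setup is given below: several regressors are fitted on match features and scored by the fraction of predictions within 10 runs; the features and scores are synthetic placeholders, not the historical IPL data.

```python
# Minimal sketch of comparing regressors under a +/-10-run tolerance
# (synthetic features stand in for the historical IPL match data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                          # e.g. wickets, overs, run rate, venue codes
y = 160 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 12, 500)   # synthetic final scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=42),
    "knn": KNeighborsRegressor(n_neighbors=7),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    within_10 = np.mean(np.abs(pred - y_te) <= 10)     # fraction of predictions within 10 runs
    print(f"{name}: {within_10:.2%} within +/-10 runs")
```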

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 50
548 Spatial Suitability Assessment of Onshore Wind Systems Using the Analytic Hierarchy Process

Authors: Ayat-Allah Bouramdane

Abstract:

Since 2010, there have been sustained decreases in the unit costs of onshore wind energy and large increases in its deployment, varying widely across regions. In fact, onshore wind production is affected by air density (cold air is denser and therefore more effective at producing wind power) and by wind speed (wind turbines cannot operate in very low or extreme stormy winds). Wind speed is essentially affected by surface friction or roughness and other topographic features of the land, which slow down winds significantly over the continent. Hence, the identification of the most appropriate locations for onshore wind systems is crucial to maximize their energy output and therefore minimize their Levelized Cost of Electricity (LCOE). This study focuses on the preliminary assessment of onshore wind energy potential in several areas of Morocco, with a particular focus on the city of Dakhla, by analyzing the diurnal and seasonal variability of wind speed for different hub heights, the frequency distribution of wind speed, the wind rose, and wind performance indicators such as wind power density, capacity factor, and LCOE. In addition to the climate criterion, other criteria (i.e., topography, location, environment) were selected from Geographic Referenced Information (GRI), reflecting different considerations. The impact of each criterion on the suitability map of onshore wind farms was identified using the Analytic Hierarchy Process (AHP). We find that the majority of suitable zones are located along the Atlantic Ocean and the Mediterranean Sea. We discuss the sensitivity of the onshore wind site suitability to different aspects, such as the methodology (by comparing the Multi-Criteria Decision-Making (MCDM) AHP results to the Mean-Variance Portfolio optimization framework) and the potential impact of climate change on this suitability map, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual onshore wind installations are located within areas deemed suitable. This analysis may serve as a decision-making framework for cost-effective investment in onshore wind power in Morocco and to shape the future sustainable development of the city of Dakhla.
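The AHP weighting step can be sketched as deriving criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check, as below; the criteria and comparison values are illustrative, not those elicited in the study.

```python
# Sketch of the AHP weighting step: principal-eigenvector weights plus a
# consistency ratio check (comparison values are illustrative only).
import numpy as np

# Criteria: wind resource, slope, distance to grid, environmental constraint.
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
CR = CI / 0.90                                # random index RI = 0.90 for n = 4
print(np.round(weights, 3), "CR =", round(CR, 3))
```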

Keywords: analytic hierarchy process (ahp), dakhla, geographic referenced information, morocco, multi-criteria decision-making, onshore wind, site suitability

Procedia PDF Downloads 167
547 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitation of personal computers. Presently, high-performance or integrating software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering related efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 155
546 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis

Authors: Srinaath Anbu Durai, Wang Zhaoxia

Abstract:

Twitter sentiment has been used as a predictor of price values or trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon works in behavioural economics to show that sentiment or emotions impact economic decisions. The latest works in this stream focus on the algorithm used as opposed to the data used. A literature review of works in this stream through the lens of data used shows that there is a paucity of work that considers the impact of sentiments caused by an external factor on either the stock or the housing market. This is despite an abundance of works in behavioural economics that show that sentiment or emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19; the lexicon-based tools VADER and TextBlob are used for sentiment analysis, Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score, and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., the structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and it performs better as a predictor compared to two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
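Two of the stages described above can be sketched as follows: VADER compound scoring of tweets and a Granger-causality check between daily cases and daily sentiment; the tweets and series below are toy placeholders, not the SNSCRAPE-collected data.

```python
# Sketch: VADER compound sentiment scoring and a Granger-causality check
# between daily Covid-19 cases and daily sentiment (toy data only).
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from statsmodels.tsa.stattools import grangercausalitytests

analyzer = SentimentIntensityAnalyzer()
tweets = ["Cases rising again, worried about lockdowns", "Vaccination drive going well!"]
scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]
print(scores)                                   # compound scores in [-1, 1]

rng = np.random.default_rng(0)
daily_cases = rng.poisson(50, 120).astype(float)
daily_sentiment = -0.002 * daily_cases + rng.normal(0, 0.05, 120)
# Does the case count Granger-cause sentiment? (column order: effect, cause)
results = grangercausalitytests(np.column_stack([daily_sentiment, daily_cases]), maxlag=3)
```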

Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks

Procedia PDF Downloads 113
545 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated by the deinking process and constitutes a major fraction of the residues produced by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source owing to the low calorific value of DS. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of DS streams with different dewatering degrees and the influence of the high calcium carbonate content of DS on its biogas potential. Dewatered DS (solid fraction) from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid and liquid fractions were mixed in proportion to obtain DS with different water contents (55–91% fresh mass). Spiked DS samples using deionized water, cellulose, and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0–40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied in a 1-liter batch test system under mesophilic conditions and run for 21 days. Specific biogas potentials in the range of 133–230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL biogas per kg fresh mass). By comparing the absolute and specific biogas potential curves, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified. This degree of dewatering is a compromise when factors such as biogas yield, reactor size, energy required for dewatering, and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reported high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to compensate for the high C/N ratio of DS, can increase the biogas yield.
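The trade-off between specific and absolute biogas potential can be made concrete with a small worked example. The Python sketch below assumes, purely for illustration, that the specific potential varies linearly with water content between the reported extremes and that organic matter makes up 50% of the dry matter; neither assumption comes from the study.

```python
# Hedged illustration of the specific vs. absolute biogas-potential trade-off.
# Assumptions (not from the study): specific potential varies roughly linearly with
# water content between the reported extremes, and organic matter makes up an
# assumed 50% of the dry matter (the remainder being ash such as calcium carbonate).
import numpy as np

water_content = np.linspace(0.55, 0.91, 7)                          # fraction of fresh mass
specific = np.interp(water_content, [0.55, 0.91], [133.0, 230.0])   # NL/kg oDM
odm_fraction_of_dm = 0.50                                           # assumed organic share of DM

dm_per_fm = 1.0 - water_content                                     # kg DM per kg fresh mass
absolute = specific * dm_per_fm * odm_fraction_of_dm                # NL biogas per kg fresh mass

for w, s, a in zip(water_content, specific, absolute):
    print(f"water {w:.0%}: specific {s:6.1f} NL/kg oDM, absolute {a:5.1f} NL/kg FM")
# Specific potential rises with water content while absolute potential falls; the study
# identifies about 70% water content as a compromise between these opposing trends and
# other cost factors such as dewatering energy and reactor size.
```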

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 179
544 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing specific goals such as reducing weight, increasing autonomy, or reducing CO2 emissions. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and that must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time, and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a clear picture can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyze the specific situation for given developments, derive critical paths for the development, identify possible conflicts between the norms, standards, and regulatory expectations, or identify areas where not enough information is available. Once critical paths are known, optimization approaches and decision support techniques can be applied so as to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 160
543 Microalgae Hydrothermal Liquefaction Process Optimization and Comprehension to Produce High Quality Biofuel

Authors: Lucie Matricon, Anne Roubaud, Geert Haarlemmer, Christophe Geantet

Abstract:

Introduction: This case discusses the management of two floor of mouth (FOM) Squamous Cell Carcinomas (SCC) not identified upon initial biopsy. Case Report: A 51-year-old male presented with right FOM erythroleukoplakia. Relevant medical history included alcohol dependence syndrome and alcoholic liver disease. Relevant drug therapy encompassed acamprosate, folic acid, hydroxocobalamin and thiamine. The patient had a 55.5 pack-year smoking history and alcohol dependence from age 14, drinking 16 units/day. FOM incisional biopsy and histopathological analysis diagnosed carcinoma in situ. Treatment involved wide local excision. Specimen analysis revealed two separate foci of pT1 moderately differentiated SCCs. Carcinoma staging scans revealed no pathological lymphadenopathy, local invasion or metastasis. The SCCs had been excised completely but with narrow margins. MDT discussion concluded that, in view of the field changes, it would be difficult to identify specific areas needing further excision, although techniques such as Lugol's iodine were considered. Further surgical resection, surgical neck management and sentinel lymph node biopsy were offered. The patient declined intervention; primary management involved close monitoring alongside referral for alcohol and smoking cessation. Discussion: Narrow excisional margins can increase the risk of carcinoma recurrence. Biopsy failed to identify the SCCs despite sampling an area of clinical concern. For gross field change, multiple incisional biopsies should be considered to increase the chance of accurate diagnosis and appropriate treatment. The combination of tobacco and alcohol has a synergistic effect, exponentially increasing the relative risk of oral carcinoma development. Tobacco and alcohol control is fundamental in reducing treatment-related side effects, recurrence risk, and second primary cancer development.

Keywords: microalgae, biofuels, hydrothermal liquefaction, biomass

Procedia PDF Downloads 131
542 Combined Resection of Talocalcaneal Tarsal Coalition and Calcaneal Lengthening Osteotomy: Short-to-Intermediate Term Results

Authors: Naum Simanovsky, Vladimir Goldman, Michael Zaidman

Abstract:

Background: The optimal algorithm for the management of symptomatic tarsal coalition is still under discussion in the pediatric literature, and it is debatable which surgical steps are essential to achieve the best outcome. Method: The investigators retrospectively reviewed the records of twelve patients with symptomatic tarsal coalition who were treated operatively between 2017 and 2019. Only painful flat feet were operated on. Two patients were excluded from the study due to lack of sufficient follow-up. Ten of eleven feet were treated with a combination of calcaneal lengthening osteotomy (CLO) and resection of the coalition (RC); only one foot was operated on with CLO alone. In half of the patients, Achilles lengthening was performed, and for two children, medial plication was added. A short leg cast was applied to all children for 6–8 weeks, and soft shoe insoles for medial arch support were prescribed afterwards. Demographic, clinical, and radiographic records were reviewed. The outcome was evaluated using the American Orthopedic Foot and Ankle Society (AOFAS) Ankle-Hindfoot Score. Results: There were seven boys and three girls. The mean age at the time of surgery was 13.9 (range 12 to 17) years, and the mean follow-up was 18 (range 8 to 34) months. Early complications included one superficial wound infection and wound dehiscence; late complications included residual forefoot supination in two children. None of the patients required additional operations during the follow-up period. All feet achieved complete deformity correction or dramatic improvement. At the last follow-up, seven feet were painless, and four children had some mild pain after intensive activities. All feet achieved excellent or good AOFAS scores. Conclusions: Many patients with talocalcaneal coalition also have rigid or stiff, painful flat feet. For these patients, resection of the coalition with concomitant CLO can be safely recommended.

Keywords: tarsal coalition, calcaneal lengthening osteotomy, flat foot, coalition resection

Procedia PDF Downloads 63
541 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents

Authors: Amesh P, Suneesh A S, Venkatesan K A

Abstract:

Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions such as mine water and groundwater, contaminated wastes, etc. However, bare magnetic particles (Fe3O4) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering the metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands onto the magnetic surfaces. The magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. As a result, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which again offers advantages such as enhanced extraction kinetics and higher extraction capacity. Conventionally, the magnetite (Fe3O4) particles were prepared by the hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since covalent linking of task-specific functionalities to Fe3O4 is difficult, and Fe3O4 is also susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify the surface of Fe3O4 by silica coating. This silica coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of magnetite to yield a thin layer of silica-coated magnetite particles. Since the silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area exhibited by such magnetic particles usually falls in the range of 50 to 150 m2.g-1, which offers advantages such as quick phase separation compared to other solid-phase extraction systems. In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) bearing task-specific ligands offer further advantages in terms of extraction kinetics, high stability, longer reusability, and metal extraction capacity, owing to the large surface area, ample porosity, and greater number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solutions spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phases was studied under several conditions, including the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO32-, Na+, Fe2+, Ni2+, and Cr3+. A maximum extraction capacity of 233 mg.g-1 was obtained for Fe-DETA, and a very high capacity of 1047 mg.g-1 was obtained for MCM-Fe-DETA. The extraction mechanism, uranium speciation, extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
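Maximum capacities such as the values quoted above are commonly estimated by fitting an adsorption isotherm to batch equilibrium data. The abstract does not state which isotherm model or data were used, so the Python sketch below only illustrates the generic procedure with a Langmuir fit on made-up data.

```python
# Hedged sketch: estimating a maximum adsorption capacity q_max by fitting the
# Langmuir isotherm q = q_max * K * C / (1 + K * C) to batch equilibrium data.
# The data points below are illustrative only; the abstract does not report the
# isotherm model or raw data behind the 233 and 1047 mg/g values.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k):
    """Langmuir isotherm: uptake q (mg/g) vs. equilibrium concentration c_eq (mg/L)."""
    return q_max * k * c_eq / (1.0 + k * c_eq)

c_eq = np.array([5.0, 20.0, 50.0, 100.0, 200.0, 400.0])     # mg/L, illustrative
q_obs = np.array([60.0, 140.0, 190.0, 215.0, 228.0, 232.0])  # mg/g, illustrative

(q_max_fit, k_fit), _ = curve_fit(langmuir, c_eq, q_obs, p0=[250.0, 0.05])
print(f"fitted q_max = {q_max_fit:.0f} mg/g, K_L = {k_fit:.3f} L/mg")
```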

Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment

Procedia PDF Downloads 167
540 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces

Authors: Matthias Steffan, Franz Haas

Abstract:

Grinding is one of the last steps in a value-added manufacturing chain. Within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable costs and energy have already been spent on components. According to the current state of the art, large safety reserves are therefore calculated in order to guarantee process capability. Especially for non-circular grinding, this leads to considerable losses of process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground sequentially in an oscillating process in which the X- and Q-axes of the machine are coupled. With the approach of RPM-Synchronous Non-Circular Grinding, such workpieces can be machined in an ordinary plunge grinding process, in which the rotational speeds of the workpiece and the grinding wheel are held in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a worldwide unique machine tool that was especially designed for this technology. Very high rotational speeds of the workpiece spindle (up to 4500 rpm) are mandatory for the success of this grinding process. The grinding approach is performed in a two-step process. For roughing, a highly porous, vitrified-bonded grinding wheel with a medium grain size is used. It ensures high specific material removal rates for efficiently producing the non-circular geometry on the workpiece. This process step is regulated by a force control algorithm, which uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively within the same clamping of the workpiece using two locally separated grinding spindles. The approach of RPM-Synchronous Non-Circular Grinding shows great efficiency enhancement in non-circular grinding. For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging trend in finishing machining.
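The abstract mentions a force control algorithm for the roughing step without giving details. As a hedged illustration of the idea, the Python sketch below implements a minimal proportional-integral controller that adjusts the infeed rate to hold a target normal force; all gains, limits, and the toy plant model are assumptions, not the authors' algorithm.

```python
# Hedged sketch: a minimal PI force controller for the roughing step, holding a
# target normal grinding force by adjusting the infeed rate. Gains, limits, and the
# plant model are assumed for illustration; the paper does not disclose its algorithm.
def pi_force_controller(force_target, force_measured, integrator, kp=0.002, ki=0.005,
                        dt=0.001, feed_min=0.0, feed_max=0.05):
    """Return (infeed in mm/s, updated integrator state) for one control cycle."""
    error = force_target - force_measured            # N
    integrator = integrator + error * dt              # integral of the force error
    feed = kp * error + ki * integrator                # mm/s
    feed = min(max(feed, feed_min), feed_max)          # clamp to machine limits
    return feed, integrator

# Toy closed loop: the steady-state force is assumed proportional to the infeed rate.
force_target, force, integ, feed = 80.0, 0.0, 0.0, 0.0
for _ in range(2000):                                  # 2 s at a 1 kHz control rate
    feed, integ = pi_force_controller(force_target, force, integ)
    force += 0.05 * (2000.0 * feed - force)            # crude first-order plant response
print(f"force after 2 s ≈ {force:.1f} N at infeed {feed * 1000:.1f} µm/s")
```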

Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding

Procedia PDF Downloads 282
539 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has practical applications in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. A student might be under-prepared in an entire course or only in part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time, and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, it can be solved with a general-purpose solver only for small instances, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we use a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases improves on the objective function value obtained by the MIP model, with improvements ranging between 18% and 66%.
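The authors' MIP model is not reproduced in the abstract, so the following Python sketch (using PuLP) is only a miniature of the selection side of the problem: it activates teaching units to maximise covered student demand under an assumed budget and prerequisite constraints, with made-up data, and omits the timetabling part entirely.

```python
# Hedged miniature of a training-offer selection model, not the authors' MIP:
# choose which teaching units to activate so as to maximise covered student demand,
# subject to an assumed budget limit and prerequisite constraints. Data are made up.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

units = ["algebra_1", "algebra_2", "grammar_1", "grammar_2"]
demand = {"algebra_1": 30, "algebra_2": 18, "grammar_1": 25, "grammar_2": 12}  # students
cost = {"algebra_1": 4, "algebra_2": 4, "grammar_1": 3, "grammar_2": 3}        # teacher-hours
budget = 10                                                                     # teacher-hours
prereq = [("algebra_2", "algebra_1"), ("grammar_2", "grammar_1")]               # (unit, required)

model = LpProblem("training_offer_selection", LpMaximize)
activate = LpVariable.dicts("activate", units, cat=LpBinary)

model += lpSum(demand[u] * activate[u] for u in units)           # maximise covered demand
model += lpSum(cost[u] * activate[u] for u in units) <= budget   # budget constraint
for unit, required in prereq:                                     # prerequisite constraints
    model += activate[unit] <= activate[required]

model.solve()
print("activated units:", [u for u in units if activate[u].value() == 1])
```

In the full problem described in the abstract, such a selection model would be coupled with timetabling variables and constraints, which is what makes general-purpose solving hard and motivates the heuristic.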

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 604