Search results for: inverse problem in tomography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7949

6389 The Development Practice and System Construction of Low-Carbon City in China

Authors: Xu Xiao, Xu Lei

Abstract:

Since the 1990s, the concept of urban sustainable development has received increasing attention in urban planning and urban design. The high-carbon city is not a sustainable model of urban construction and has become an important obstacle to the sustainable development of cities. Low-carbon city construction is therefore an urgent need, and China is one of the core areas of low-carbon city construction in the world. The Chinese government and academic institutions have carried out research on the theory and practice of low-carbon cities since 2007, and the work has now reached a practice stage, with six low-carbon pilot provinces and 36 low-carbon pilot cities identified. To achieve the low-carbon target, the government recommends developing low-carbon energy, adopting non-polluting techniques, constructing green buildings, and adopting an eco-friendly lifestyle. Meanwhile, besides a new standard system and a new method for evaluating eco-environmental status, the government also established the Chinese Urban Development Institute, which includes a Low-Carbon City Group. The ultimate aim is to transform modern industrial civilization into an ecological civilization and realize sustainable urban development.

Keywords: low-carbon city, China, development practice, system construction, urban sustainability

Procedia PDF Downloads 531
6388 Multi-Modality Imaging of Aggressive Hoof Wall Neoplasia in Two Horses

Authors: Hannah Nagel, Hayley Lang, Albert Sole Guitart, Natasha Lean, Rachel Allavena, Cleide Sprohnie-Barrera, Alex Young

Abstract:

Aggressive neoplasia of the hoof is rare in horses and has been only sporadically described in the literature. In the few reported cases, aggressive intra-hoof wall neoplasia has been documented radiographically with variable imaging characteristics. These include a well-defined osteolytic area, a smoothly outlined semi-circular defect, an extensive draining tract beneath the hoof wall, as well as an additional large area of osteolysis or an extensive central lytic region. A 20-year-old Quarter Horse gelding and a 10-year-old Thoroughbred gelding were presented for chronic recurring lameness in the left forelimb and left hindlimb, respectively. Both cases displayed radiographic lesions that had been previously described, but also showed osteoproliferative, expansile regions of additional bone formation. Changes associated with hoof neoplasia are often non-specific because of the nature and capacity of bone to react to pathological insult, which is either to proliferate or to be resorbed. Both cases depict and describe imaging findings on radiography, contrast radiography, computed tomography, and magnetic resonance imaging before a histological diagnosis of malignant melanoma and squamous cell carcinoma was reached. Although aggressive hoof wall neoplasia is rare, there are some imaging features that may raise the index of suspicion for an aggressive hoof wall lesion. This case report documents two horses with similar imaging findings that underwent multiple assessments, surgical interventions, and imaging modalities, with a final diagnosis of malignant neoplasia.

Keywords: horse, hoof, imaging, radiography, neoplasia

Procedia PDF Downloads 135
6387 On Boundary Value Problems of Fractional Differential Equations Involving Stieltjes Derivatives

Authors: Baghdad Said

Abstract:

Differential equations of fractional order have proved to be important tools for describing many physical phenomena and have been used in diverse fields such as engineering and mathematics, as well as other applied sciences. On the other hand, differential equations involving the Stieltjes derivative (SD) with respect to a non-decreasing function form a new class of differential equations with many applications, providing a unified framework for dynamic equations on time scales and for differential equations with impulses at fixed times. The aim of this paper is to investigate the existence, uniqueness, and generalized Ulam-Hyers-Rassias stability (UHRS) of solutions for a boundary value problem of sequential fractional differential equations (SFDE) involving the SD. The study is based on the technique of measures of noncompactness (MNCs) combined with Mönch-Krasnoselskii fixed point theorems (FPT), and the results are proven in an appropriate Banach space under sufficient hypotheses. We also give an illustrative example. In this work, we introduce a class of SFDE and obtain the results under few hypotheses. Future directions connected to this work could address problems with different types of fractional integrals and derivatives, with the SD assumed under more general hypotheses in more general functional spaces.

Keywords: SFDE, SD, UHRS, MNCs, FPT

Procedia PDF Downloads 46
6386 Multiple Version of Roman Domination in Graphs

Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera

Abstract:

The concept of Roman domination in graphs was introduced in 2004, initially inspired by the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, subject to the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, with minimum weight (the sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we start the study of a generalization of the RDF in which any undefended place should be defensible against a sudden attack by at least k legions, which may be deployed in the city itself or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) denotes the set of active neighbours (i.e., those with a positive label) of u, is called a [k]-multiple Roman dominating function, denoted [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a question that had been resolved for other variants and that we wished to generalize. Given the difficulty of computing the exact value of the [k]-MRD number, even for particular families of graphs, we present several upper and lower bounds that allow it to be estimated with as much precision as possible. Finally, some graphs for which the exact value of this parameter is attained are characterized.
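
As a concrete illustration of the definitions above, the following Python sketch checks the RDF condition and computes γ_R(G) by exhaustive search. It is a brute-force aid for tiny graphs only, not the authors' method; the adjacency-list encoding and function names are our own.

```python
from itertools import product

def is_rdf(adj, labels):
    """Roman domination condition: every 0-vertex must have
    at least one neighbour labelled 2."""
    return all(
        any(labels[v] == 2 for v in adj[u])
        for u in adj if labels[u] == 0
    )

def roman_domination_number(adj):
    """Brute-force gamma_R(G); exponential in |V|, tiny graphs only."""
    vertices = list(adj)
    best = 2 * len(vertices)  # labelling every vertex 2 is always an RDF
    for assignment in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, assignment))
        if is_rdf(adj, f):
            best = min(best, sum(assignment))
    return best

# Example: the 5-cycle C5, for which gamma_R(C5) = 4
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(roman_domination_number(c5))  # 4
```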

Keywords: multiple Roman domination function, NP-complete decision problem, bounds, exact values

Procedia PDF Downloads 111
6385 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network

Authors: Parisa Mansour

Abstract:

Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing the infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time is critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients, and artificial intelligence has recently been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels, each followed by max pooling, and then two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces a predicted probability distribution over the classes; it has three outputs corresponding to the categories normal (healthy), bronchitis, and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers: classification accuracy, average sensitivity, and specificity of 93.64%, 93.47%, and 96.61% were achieved, indicating the potential of CNNs for analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by the proposed method can be useful for automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
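
The following PyTorch sketch shows one plausible reading of the described architecture. It is not the authors' code: the patch size, channel counts, and the reading of the pooling step are our assumptions, as is the final map from the 10-unit dense layer to the three-way softmax.

```python
import torch
import torch.nn as nn

class CFPatchCNN(nn.Module):
    """Patch classifier as described: two 3x3 conv layers (each followed
    by max pooling here), dense layers of 1024 and 10 units, and a 3-way
    softmax over normal / bronchitis / inflammation."""
    def __init__(self, patch_size=32):               # patch size assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # channels assumed
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 64 * (patch_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 1024),
            nn.ReLU(),
            nn.Linear(1024, 10),
            nn.ReLU(),
            nn.Linear(10, 3),     # final 3-class map is an assumption
        )

    def forward(self, x):
        # For training with nn.CrossEntropyLoss, return the raw logits
        # instead of applying softmax here.
        return torch.softmax(self.classifier(self.features(x)), dim=1)

model = CFPatchCNN()
probs = model(torch.randn(8, 1, 32, 32))  # batch of 8 grayscale patches
print(probs.shape)                        # torch.Size([8, 3])
```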

Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence

Procedia PDF Downloads 69
6384 Multi-Objective Production Planning Problem: A Case Study of Certain and Uncertain Environment

Authors: Ahteshamul Haq, Srikant Gupta, Murshid Kamal, Irfan Ali

Abstract:

This case study designs and builds a multi-objective production planning model for a hardware firm with both certain and uncertain data. During interactions with the firm's manager, it emerged that some of the parameters may be vague. This vagueness in the formulated model is handled with fuzzy set theory: triangular and trapezoidal fuzzy numbers are used to represent the uncertainty in the collected data. The fuzzy quantities are defuzzified into crisp form using the well-known graded mean integration representation method. The proposed model attempts to maximize the firm's production and the profit from the manufactured items, and to minimize inventory carrying costs, in both certain and uncertain environments. The recommended optimal plan is determined via a fuzzy programming approach, and the formulated models are solved with the optimization software LINGO 16.0 to obtain the optimal production plan. The proposed model yields an efficient compromise solution with the overall satisfaction of the decision maker.
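
For reference, the graded mean integration representation mentioned above reduces to simple closed forms for triangular and trapezoidal fuzzy numbers. The sketch below implements those standard formulas; the example figures are hypothetical, not data from the case study.

```python
def gmir_triangular(a, b, c):
    """Graded mean integration representation of a triangular fuzzy
    number (a, b, c): crisp value = (a + 4b + c) / 6."""
    return (a + 4 * b + c) / 6

def gmir_trapezoidal(a, b, c, d):
    """Graded mean integration representation of a trapezoidal fuzzy
    number (a, b, c, d): crisp value = (a + 2b + 2c + d) / 6."""
    return (a + 2 * b + 2 * c + d) / 6

# e.g. a vague unit profit judged "about 10, surely between 8 and 13"
print(gmir_triangular(8, 10, 13))       # 10.166...
print(gmir_trapezoidal(8, 9, 12, 13))   # 10.5
```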

Keywords: production planning problem, multi-objective optimization, fuzzy programming, fuzzy sets

Procedia PDF Downloads 216
6383 The Types of Annuities with Flexible Premium

Authors: Deniz Ünal Özpalamutcu, Burcu Altman

Abstract:

Actuarial science uses mathematics, statistics, and financial information when analyzing the financial impacts of uncertainty, risk, insurance, and pension-related issues. In other words, it deals with the likelihood of potential risks, their financial impacts, and especially the financial measures to address them. Handling these measures requires long-term payments and investments, so it is inevitable to plan periodic payments at equal time intervals while also considering the changing value of money over time. A series of payments made at specific intervals of time is called an annuity (or rant). In the literature, annuities are classified by start and end dates, start times, payment times, and payment amount or frequency; classification by payment amount distinguishes constant, decreasing, and increasing payment methods. The literature on handling such annuities is very limited. Yet in daily life, especially in today's world where economic issues have gained prominence, it is crucial to use variable annuity methods in line with the demands of customers. In this study, the types of annuities with flexible payments are discussed. Specifically, we focus on calculating the payment amount of a period by adding a certain percentage of the previous period's payment. While studying this problem, formulas were derived for both the cash (present) value and the accumulated value, considering payments at the start and at the end of each period. The case in which each period's payment increases by an interest rate r applied to the previous periods' increases has also been analyzed; the cash value and accumulated value for this problem were studied separately for period-start and period-end payments, and their relations were expressed by formulas.
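
A minimal sketch of the kind of formula the paper studies: the present and accumulated value of an annuity whose payments grow geometrically at rate r per period under interest rate i. It assumes the standard closed form for a geometrically increasing annuity; the function names and example figures are ours, not the paper's.

```python
def pv_geometric_annuity(payment, i, r, n, due=False):
    """Present (cash) value of an n-period annuity whose payments grow
    by rate r each period: payment, payment*(1+r), ..., at interest i.
    End-of-period payments by default; due=True shifts them to the
    start of each period."""
    if abs(i - r) < 1e-12:                 # limiting case i == r
        pv = n * payment / (1 + i)
    else:
        q = (1 + r) / (1 + i)
        pv = payment * (1 - q ** n) / (i - r)
    return pv * (1 + i) if due else pv

def av_geometric_annuity(payment, i, r, n, due=False):
    """Accumulated value at the end of period n."""
    return pv_geometric_annuity(payment, i, r, n, due) * (1 + i) ** n

# 10 annual payments starting at 1000, growing 3% per year, at 5% interest
print(round(pv_geometric_annuity(1000, 0.05, 0.03, 10), 2))
print(round(av_geometric_annuity(1000, 0.05, 0.03, 10), 2))
```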

Keywords: actuaria, annuity, flexible payment, rant

Procedia PDF Downloads 223
6382 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment

Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu

Abstract:

Poor power quality in electrical power systems can cause medical equipment at healthcare centres to malfunction and produce wrong medical diagnoses. Equipment such as X-ray and computerized axial tomography machines can pollute the system due to their high level of harmonics production, which may cause a number of undesirable effects such as heating, equipment damage, and electromagnetic interference. The conventional approach to mitigation uses passive inductor/capacitor (LC) filters, which have drawbacks such as large size, resonance problems, and fixed compensation behaviour. Current solutions generally employ active power filters with suitable control algorithms. This work focuses on assessing the level of Total Harmonic Distortion (THD) in medical facilities and on various means of mitigation, using the radiology unit of an existing hospital as a case study. The harmonics are measured with a power quality analyzer at the point of common coupling (PCC). The measured THD levels are found to be higher than the IEEE 519-1992 standard limits. The system is then modelled as a harmonic current source using MATLAB/SIMULINK. To mitigate the unwanted harmonic currents, a shunt active filter is developed using a synchronous detection algorithm to extract the fundamental component of the source currents, and a fuzzy logic controller is developed to control the filter. The THD values without the active power filter are validated against the measured values; with the developed filter, the harmonics fall within the recommended limits.
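
THD itself has a simple definition (the RMS of the harmonic components divided by the fundamental magnitude), which the Python sketch below computes from a sampled waveform via the FFT. It illustrates the quantity measured at the PCC; it is not the paper's MATLAB/SIMULINK model, and the test waveform is synthetic.

```python
import numpy as np

def thd(signal, fs, f1):
    """Total harmonic distortion: RMS of the harmonic components
    divided by the fundamental magnitude."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    def mag_at(f):  # magnitude of the bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f1)
    harmonics = [mag_at(h * f1) for h in range(2, int(fs / 2 / f1) + 1)]
    return np.sqrt(sum(m ** 2 for m in harmonics)) / fundamental

# Synthetic 50 Hz current with 20% 3rd and 10% 5th harmonic -> THD ~ 22.4%
fs = 10_000
t = np.arange(0, 1, 1 / fs)
i_t = (np.sin(2 * np.pi * 50 * t)
       + 0.2 * np.sin(2 * np.pi * 150 * t)
       + 0.1 * np.sin(2 * np.pi * 250 * t))
print(thd(i_t, fs, 50))  # ~0.2236
```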

Keywords: power quality, total harmonics distortion, shunt active filters, fuzzy logic

Procedia PDF Downloads 480
6381 Product Line Design with Customization in the Presence of Demand Uncertainty

Authors: Parisa Bagheri Tookanlou

Abstract:

In this paper, we analyze a product line design problem faced by a manufacturing firm whose product line consists of a customized product in addition to a standard product, offered in a market in which customers are heterogeneous in the aesthetic attributes of the product. The customization level of a product is defined as the fraction of aesthetic attributes of the product that the manufacturer chooses to customize. In contrast to the existing literature on product line design, which predominantly assumes deterministic demand, we consider the presence of demand uncertainty and frame the product line design problem in a single-period (newsvendor) setting. We examine the effect of demand uncertainty on product line decisions. Furthermore, we also examine how product line decisions are influenced by channel structure. While we use the centralized channel as a benchmark, we consider a decentralized dual channel in which the customized product is sold through an online channel owned by the manufacturer and the standard product is sold through a retailer. We introduce a supply contract between the manufacturer and the retailer to improve channel efficiency and coordinate the distribution channel.
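
For readers unfamiliar with the newsvendor setting, the sketch below shows the classical single-period stocking rule (critical fractile) under normally distributed demand. It illustrates the framing only, not the authors' product-line model; all figures are hypothetical.

```python
from scipy.stats import norm

def newsvendor_quantity(mu, sigma, unit_cost, price, salvage=0.0):
    """Single-period (newsvendor) stocking quantity under normally
    distributed demand: order up to the critical fractile cu/(cu+co)."""
    cu = price - unit_cost       # underage cost: margin lost per unit short
    co = unit_cost - salvage     # overage cost: loss per unsold unit
    return norm.ppf(cu / (cu + co), loc=mu, scale=sigma)

# Hypothetical customized product: demand ~ N(500, 100^2),
# unit cost 60, price 120, salvage 30 -> critical fractile 2/3
print(round(newsvendor_quantity(500, 100, 60, 120, 30), 1))  # ~543.1
```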

Keywords: product line design, demand uncertainty, customization level, distribution channel

Procedia PDF Downloads 189
6380 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential to building smarter cities, and machine-learning-based transportation prediction is a highly promising approach because it makes invisible aspects visible. In this context, this research aims to build a prototype model that predicts transportation network behaviour by using big data and machine learning technology; among urban transportation systems, it focuses on the bus system. The research problem is that the existing headway model cannot respond to dynamic traffic conditions, so bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations as traffic environment factors (road speeds, station conditions, weather, and real-time operating information for buses). The prototype model is designed with a machine learning tool (RapidMiner Studio), and tests were conducted for bus delay prediction. This research presents experiments to increase the prediction accuracy for bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
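
As a rough illustration of the modeling step, the sketch below trains a regression model on the kinds of features named in the abstract to predict bus delay. The paper used RapidMiner Studio; this scikit-learn version, together with the file and column names, is our own assumption.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical feature set mirroring the factors named in the abstract:
# road speed, station condition, weather, and real-time bus status.
df = pd.read_csv("bus_records.csv")      # file and column names assumed
X = df[["road_speed_kmh", "station_load", "rain_mm", "temp_c", "headway_s"]]
y = df["delay_s"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("MAE (s):", mean_absolute_error(y_te, model.predict(X_te)))
```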

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 263
6379 Solving Weighted Number of Operation Plus Processing Time Due-Date Assignment, Weighted Scheduling and Process Planning Integration Problem Using Genetic and Simulated Annealing Search Methods

Authors: Halil Ibrahim Demir, Caner Erden, Mumtaz Ipek, Ozer Uygun

Abstract:

Traditionally, the three important manufacturing functions of process planning, scheduling, and due-date assignment are performed separately and sequentially. Over the past couple of decades, hundreds of studies have been done on integrated process planning and scheduling problems, and numerous studies have been performed on scheduling with due-date assignment, but unfortunately the integration of these three important functions has not been adequately addressed. Here, the integration of these three functions is studied using genetic, random-genetic hybrid, simulated annealing, random-simulated annealing hybrid, and random search techniques. The importance of integrating these three functions and the power of meta-heuristics and of hybrid heuristics are studied as well.
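
As an illustration of one of the search techniques named above, the following is a generic simulated-annealing skeleton applied to a toy sequencing objective (total weighted tardiness). It is a sketch of the metaheuristic pattern, not the authors' integrated encoding, and the job data are hypothetical.

```python
import math
import random

def simulated_annealing(init, neighbour, cost,
                        t0=1000.0, alpha=0.95, per_temp=50, t_min=1e-3):
    """Generic simulated-annealing skeleton: accept worse candidates
    with probability exp(-delta/T), cooling T geometrically."""
    current = best = init
    t = t0
    while t > t_min:
        for _ in range(per_temp):
            cand = neighbour(current)
            delta = cost(cand) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
                if cost(current) < cost(best):
                    best = current
        t *= alpha
    return best

# Toy objective: sequence 8 jobs to minimise total weighted tardiness.
jobs = [(3, 2, 9), (5, 1, 12), (2, 3, 6), (7, 2, 20),
        (4, 1, 10), (6, 3, 25), (1, 2, 3), (8, 1, 30)]  # (proc, weight, due)

def twt(seq):
    t = total = 0
    for j in seq:
        p, w, d = jobs[j]
        t += p
        total += w * max(0, t - d)
    return total

def swap(seq):
    a, b = random.sample(range(len(seq)), 2)
    s = list(seq)
    s[a], s[b] = s[b], s[a]
    return tuple(s)

print(twt(simulated_annealing(tuple(range(8)), swap, twt)))
```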

Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic search, simulated annealing, hybrid meta-heuristics

Procedia PDF Downloads 472
6378 Performance of Non-Deterministic Structural Optimization Algorithms Applied to a Steel Truss Structure

Authors: Ersilio Tushaj

Abstract:

Finding an efficient solution that satisfies the optimality conditions is an important issue in structural engineering design. The new structural design codes embody a design methodology that seeks to exploit the full capacity of the construction material. In recent years, a number of non-deterministic, or meta-heuristic, structural optimization algorithms have been developed in the research community. These methods search for the optimum starting from the simulation of a natural phenomenon, such as survival of the fittest, the immune system, swarm intelligence, or the cooling of molten metal through annealing. Among these techniques, the best known are genetic algorithms, simulated annealing, evolution strategies, particle swarm optimization, tabu search, ant colony optimization, harmony search, and big bang-big crunch optimization. In this study, five of these algorithms are applied to the minimum-weight design of a steel truss structure with variable geometry but fixed topology. The design process selects optimum distances and section sizes from a set of commercial steel profiles, and the formulation of the design problem considers deflection limits, buckling, and allowable stress constraints. The approach is repeated starting from different initial populations. The design problem topology is taken from an existing steel structure. The optimization process helps the engineer achieve good final solutions while avoiding the repetitive, time-consuming evaluation of alternative designs. The algorithms used in the application, the optimal solutions obtained, the number of iterations, and the minimal weight designs are reported in the paper. Based on these results, the amount of steel that could be saved by combining structural analysis with non-deterministic optimization methods is estimated.

Keywords: structural optimization, non-deterministic methods, truss structures, steel truss

Procedia PDF Downloads 232
6377 Endoscopic Treatment of Esophageal Injuries Using Vacuum Therapy

Authors: Murad Gasanov, Shagen Danielyan, Ali Gasanov, Yuri Teterin, Peter Yartsev

Abstract:

Background: Despite the advances of modern surgery, the treatment of patients with esophageal injuries remains one of the most topical and complex issues. In recent years, high-technology minimally invasive methods, such as endoscopic vacuum therapy (EVT), have been used in the treatment of esophageal injuries. The effectiveness of EVT has been sufficiently studied for failure of esophageal anastomoses; however, its application to mechanical esophageal injuries is limited to a small series of observations, indicating the need for additional study. Aim: The aim was to analyze our own experience with endoscopic vacuum therapy (EVT) in the complex treatment of patients with esophageal injuries. Methods: We analyzed the results of treatment of 24 patients with mechanical injuries of the esophagus for the period 2019-2021. Complex treatment included minimally invasive technologies, namely percutaneous endoscopic gastrostomy (PEG), EVT, and video-assisted thoracoscopic debridement. The effectiveness of treatment was evaluated using multislice computed tomography (MSCT), endoscopy, and laboratory tests. The duration of inpatient treatment, the duration of EVT, the number of system replacements, complications, and mortality were recorded. Results: EVT in patients with mechanical injuries of the esophagus achieved epithelialization of the esophageal defect in 21 patients (87.5%), in the form of a linear scar at the site of perforation or a pseudodiverticulum. Complications were noted in 4 patients (16.6%), including bleeding (2) and esophageal stenosis in the perforation area (2). There was one lethal outcome (4.2%). Conclusion: EVT may be the method of choice in the complex treatment of patients with esophageal lesions.

Keywords: esophagus injuries, damage to the esophagus, perforation of the esophagus, spontaneous perforation of the esophagus, mediastinitis, endoscopic vacuum therapy

Procedia PDF Downloads 110
6376 The Achievements and Challenges of Physics Teachers When Implementing Problem-Based Learning: An Exploratory Study Applied to Rural High Schools

Authors: Osman Ali, Jeanne Kriek

Abstract:

Introduction: The current instructional approach, entrenched in memorization, does not assist conceptual understanding in science. Instructional approaches that encourage research, investigation, and experimentation, which depict how scientists work, should be encouraged. One such teaching strategy is problem-based learning (PBL). PBL has many advantages, including enhanced self-directed learning and improved problem-solving and critical thinking skills. However, despite many advantages, PBL has challenges: research confirms that it is time-consuming and that ill-structured questions are difficult to formulate. Professional development interventions are needed for in-service educators to adopt the PBL strategy. The purposively selected educators had to implement PBL in their classrooms after the intervention to develop their practice and then reflect on the implementation, indicating their achievements and challenges. This study differs from previous studies in that rural educators implemented PBL in their classrooms and reflected on their experiences, beliefs, and attitudes regarding PBL. Theoretical Framework: The study was grounded in Vygotskian sociocultural theory. According to Vygotsky, a child's cognitive development is sustained by interaction between the child and more able peers in the immediate environment. The theory suggests that social interaction in small groups creates an opportunity for learners to form concepts and skills on their own better than working individually, and PBL emphasizes learning in small groups. Research Methodology: An exploratory case study was employed, because the study did not seek conclusive evidence. Non-probability purposive sampling was adopted to choose eight schools from 89 rural public schools. In each school, two educators teaching physical sciences in grades 10 and 11 were approached (N = 16). The research instruments were questionnaires, interviews, and a lesson observation protocol. Two open-ended questionnaires, administered before and after the intervention, were analyzed thematically, and three themes were identified. The semi-structured interview responses were transcribed and coded into three themes. Subsequently, the Reformed Teaching Observation Protocol (RTOP) was adopted for lesson observation and analyzed using five constructs. Results: Evidence from the questionnaires before and after the intervention shows that participants knew better what was required to develop an ill-structured problem during the implementation. Furthermore, the interviews indicate that participants held positive views about the PBL strategy: they stated that they act only as facilitators and that learners' problem-solving and critical thinking skills are enhanced, and they suggested a change in curriculum to adopt the PBL strategy. However, most participants may not continue to apply the PBL strategy, stating that it is time-consuming and makes it difficult to complete the Annual Teaching Plan (ATP). They also complained about materials and equipment and about learners' readiness to work. Evidence from the RTOP shows that after the intervention, participants learned to encourage exploration and to use learners' questions and comments to determine the direction and focus of classroom discussions.

Keywords: problem-solving, self-directed, critical thinking, intervention

Procedia PDF Downloads 122
6375 Modeling in the Middle School: Eighth-Grade Students’ Construction of the Summer Job Problem

Authors: Neslihan Sahin Celik, Ali Eraslan

Abstract:

Mathematical models and modeling are among the topics that have been intensively discussed in recent years. In line with the results of the PISA studies, researchers in many countries have begun to question how well the school education system prepares students to solve the real-world problems they will encounter in their future professional lives. As a result, many mathematics educators have begun to emphasize the importance of new skills and understandings, such as constructing, hypothesizing, describing, manipulating, predicting, and working together on complex and multifaceted problems, for success beyond school. As students increasingly face such situations in daily life, it is important to ensure that they have enough experience working together and interpreting mathematical situations that enable them to think in different ways and share their ideas with their peers. Thus, model-eliciting activities are one of the main tools that help students gain the experience and new skills required. This research study was carried out in the town center of a big city located in the Black Sea region of Turkey. The participants were eighth-grade students in a middle school. After a six-week preliminary study, three students in an eighth-grade classroom were selected using a criterion sampling technique and placed in a focus group. The focus group was videotaped as they worked on a model-eliciting activity, the Summer Job Problem. The conversation of the group was transcribed, examined together with the students' written work, and then qualitatively analyzed through the lens of Blum's (1996) modeling processing cycle. The results showed that eighth-grade students can successfully work with model eliciting, develop a model based on two parameters, and review the whole process. On the other hand, they had difficulties relating the parameters to each other and taking all parameters into account when establishing the model.

Keywords: middle school, modeling, mathematical modeling, summer job problem

Procedia PDF Downloads 341
6374 A First Step towards Automatic Evolutionary Programming for Gas Lift Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry, and optimizing the total amount of oil produced as a function of the amount of gas injected is a key question in this domain. Different methods have been tested in pursuit of a general methodology: many apply well-known numerical methods, and some have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and obtain solutions comprehensible in the same terms. Our previous proposals introduced into the genetic engine highly expressive formal models to represent solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, owing to the computational resources the formal models involve. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) for which genetic approaches seem promising. After analyzing the state of the art, we chose a previous work from the literature that faces the problem by means of numerical methods; that contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We analyzed the 100 well curves used in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by the authors. We identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models that fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of the more sophisticated approaches previously proposed by our research group.
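
To make the allocation problem concrete: given a concave oil-rate curve per well and a fixed total gas supply, lift gas should go where its marginal gain is highest. The sketch below implements that greedy marginal allocation on hypothetical quadratic well curves; it illustrates the optimization target, not the authors' genetic engine.

```python
import numpy as np

def allocate_gas(curves, total_gas, step=0.1):
    """Greedy marginal allocation: repeatedly give the next increment of
    lift gas to the well with the highest marginal oil gain. Valid when
    the well performance curves are concave."""
    alloc = np.zeros(len(curves))
    for _ in range(int(total_gas / step)):
        gains = [c(g + step) - c(g) for c, g in zip(curves, alloc)]
        i = int(np.argmax(gains))
        if gains[i] <= 0:          # no well benefits from more gas
            break
        alloc[i] += step
    return alloc, sum(c(g) for c, g in zip(curves, alloc))

# Hypothetical concave well performance curves q(g) = a*g - b*g^2
wells = [lambda g, a=a, b=b: a * g - b * g * g
         for a, b in [(250, 30), (180, 15), (300, 45)]]
alloc, oil = allocate_gas(wells, total_gas=8.0)
print(np.round(alloc, 1), round(oil, 1))
```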

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 163
6373 Problem-Based Learning for Hospitality Students: The Case of Madrid Luxury Hotels and the Recovery after the Covid Pandemic

Authors: Caridad Maylin-Aguilar, Beatriz Duarte-Monedero

Abstract:

Problem-based learning (PBL) is a useful tool for adult and practice-oriented audiences, such as university students. As a consequence of the huge disruption caused by the COVID pandemic in the hospitality industry, hotels of all categories in Spain closed down from March 2020. Until that moment, the luxury segment had been blooming, with optimistic prospects for new openings, so hospitality students were expecting a positive situation in terms of employment and career development. By the beginning of the 2020-21 academic year, these expectations were seriously harmed: by October 2020, only 9 of the 32 hotels in the luxury segment were open, with an occupation rate of 9%. Shortly after, the evidence of a second wave, affecting especially Spain and the home countries of incoming visitors, bitterly smashed all forecasts. In response, a team of four professors and practitioners from four different subject areas developed a real case, inspired by one of these hotels, the 5-star Emperatriz by Barceló. Students in their 2nd course were provided with real information such as marketing plans, profit and loss and operational accounts, employee profiles, and employment costs. The challenge for them was to act as consultants, identifying potential courses of action related to best, base, and worst cases. To do so, they were organized in teams supported by 4th-course students. Each professor deployed the problem in their subject; thus, research on customers' behavior and feelings was necessary to review, as part of the marketing plan, whether the hotel's current offering was clear enough to guarantee and communicate a safe environment, as well as the ranking of other basic, supporting, and facilitating services. Continuous monitoring of competitors' activity was also necessary to understand the behavior of the open outlets. The actions designed after the diagnosis were ranked according to their impact and feasibility in terms of time and resources; they also had to be actionable by the current staff of the hotel and their managers, and a vision of internal marketing was appreciated. After a process of refinement, seven teams presented their conclusions to Emperatriz's general manager and the rest of the professors. Four main ideas were chosen, and all the teams, irrespective of authorship, were asked to develop them to the state of a minimum viable product, with estimates of impacts and costs. As the process continues, students are now accompanying the hotel and its staff in the prudent reopening of facilities, almost one year after the closure. From a professor's point of view, the key learnings were: 1) when facing a real problem, a holistic view is needed, so the vision of subjects as silos collapses; 2) when educating new professionals, providing them with the resilience and resistance necessary to deal with a problem is always mandatory, but now seems more relevant than ever; and 3) collaborative work and contact with real practitioners in such an uncertain and changing environment is a challenge, but it is worthwhile considering the learning result and its potential.

Keywords: problem-based learning, hospitality recovery, collaborative learning, resilience

Procedia PDF Downloads 187
6372 Development of GIS-Based Geotechnical Guidance Maps for Prediction of Soil Bearing Capacity

Authors: Q. Toufeeq, R. Kauser, U. R. Jamil, N. Sohaib

Abstract:

The foundation design of a structure requires soil investigation to avoid failures due to settlement, but such investigation is expensive and time-consuming. The development of new residential societies involves extensive leveling of large sites, accompanied by heavy land filling. Poor land-fill practices at depth cause differential settlement and consolidation of the underlying soil, which sometimes result in the collapse of structures. The extent of filling remains unknown to the individual developer unless a soil investigation is carried out, yet soil investigation cannot be performed on every available site because of the costs involved. However, a fair estimate of bearing capacity can be made if such tests have already been done in the surrounding areas, and geotechnical guidance maps can provide a fair assessment of soil properties. Previously, GIS-based approaches using extrapolation and interpolation techniques have been used to develop maps for bearing capacity, underground recharge, soil classification, geological hazards, landslide hazards, socio-economic factors, and soil liquefaction. Standard penetration test (SPT) data for the surrounding sites were already available, and Google Earth was used for digitization of the collected data. A few points were reserved for data calibration and validation. The resulting geographic information system (GIS)-based guidance maps are helpful for anticipating bearing capacity in the real estate industry.
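
The keywords name inverse distance weighting (IDW) as an interpolation technique; the sketch below shows a minimal IDW estimator of bearing capacity at an untested location from nearby SPT-derived values. The borehole coordinates and capacities are hypothetical, not the study's data.

```python
import numpy as np

def idw(known_xy, known_val, query_xy, power=2.0):
    """Inverse distance weighted estimate at query points from
    surrounding measured values (e.g. SPT-derived bearing capacity)."""
    known_xy = np.asarray(known_xy, float)
    known_val = np.asarray(known_val, float)
    out = []
    for q in np.atleast_2d(np.asarray(query_xy, float)):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d < 1e-9):           # query coincides with a borehole
            out.append(known_val[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * known_val) / np.sum(w))
    return np.array(out)

# Hypothetical boreholes (x, y in m) and allowable bearing capacity (kPa)
pts = [(0, 0), (120, 40), (60, 150), (200, 90)]
qa = [150, 210, 180, 240]
print(idw(pts, qa, [(100, 80)]))   # estimate at an untested site
```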

Keywords: bearing capacity, soil classification, geographical information system, inverse distance weighted, radial basis function

Procedia PDF Downloads 139
6371 Prenatal Lead Exposure and Postpartum Depression: An Exploratory Study of Women in Mexico

Authors: Nia McRae, Robert Wright, Ghalib Bello

Abstract:

Introduction: Postpartum depression is a prevalent mood disorder that is detrimental to the mental and physical health of mothers and their newborns. Lead (Pb) is a toxic metal associated with hormonal imbalance and mental impairments. The hormone changes that accompany pregnancy and childbirth may be exacerbated by Pb, increasing new mothers' susceptibility to postpartum depression. To the best of the authors' knowledge, this is the only study that investigates the association between prenatal Pb exposure and postpartum depression. Identifying risk factors can contribute to improved prevention and treatment strategies for postpartum depression. Methods: Data were derived from the Programming Research in Obesity, Growth, Environment and Social Stress (PROGRESS) study, an ongoing longitudinal birth cohort. Postpartum depression was identified by a score of 13 or above on the 10-item Edinburgh Postnatal Depression Scale (EPDS) at 6 and 12 months postpartum. Pb was measured in blood (BPb) in the second and third trimesters and in the tibia and patella 1 month postpartum. Quantile regression models were used to assess the relationship between BPb and postpartum depression. Results: BPb in the second trimester was negatively associated with the 80th percentile of depression 6 months postpartum (β: -0.26; 95% CI: -0.51, -0.01). No significant association was found between BPb in the third trimester and depression 6 months postpartum. BPb in the third trimester exhibited an inverse relationship with the 60th percentile (β: -0.23; 95% CI: -0.41, -0.06), 70th percentile (β: -0.31; 95% CI: -0.52, -0.10), and 90th percentile of depression 12 months postpartum (β: -0.36; 95% CI: -0.69, -0.03). There was no significant association between BPb in the second trimester and depression 12 months postpartum. Bone Pb concentrations were not significantly associated with postpartum depression. Conclusion: The negative association between BPb and postpartum depression may support research demonstrating that lead is a nontherapeutic stimulant. Further research is needed to verify these results and identify effect modifiers.
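
The quantile regression models referenced above can be reproduced in outline with statsmodels; the sketch below fits the 80th-percentile model of 6-month EPDS score on second-trimester blood lead. The file and column names are our assumptions, not the PROGRESS data dictionary.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: 6-month EPDS score and second-trimester blood lead
df = pd.read_csv("progress_cohort.csv")   # file and column names assumed

# Blood-lead coefficient at the 80th percentile of the EPDS distribution
model = smf.quantreg("epds_6m ~ bpb_tri2", df).fit(q=0.80)
print(model.params["bpb_tri2"])
print(model.conf_int().loc["bpb_tri2"].tolist())  # 95% CI
```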

Keywords: depression, lead, postpartum, prenatal

Procedia PDF Downloads 228
6370 Norms and Laws: Fate of Community Forestry in Jharkhand

Authors: Pawas Suren

Abstract:

The conflict between livelihood and forest protection has been a perpetual phenomenon in India. In the era of climate change, the problem is expected to aggravate the declining trend of dense forest in the country, creating impediments to climate change adaptation by forest-dependent communities. To assess the complexity of the problem, the Hazaribagh and Chatra districts of Jharkhand were selected as a case study. To identify the norms practiced by the communities to manage community forestry, an ethnographic study was designed to understand the values, traditions, and cultures of forest-dependent communities, most of whom are tribal. It was observed that the internalization of efficient forest norms is reflected in the pride and honor attached to such behavior, while violators are sanctioned through guilt and shame. The study analyzes the effect of the norms being practiced on the management and ecology of community forestry as a common property resource. The findings point towards gaps in the prevalent forest laws in addressing the efficient allocation of property rights. The conclusion embarks on reconsidering the accepted factors of forest degradation in India.

Keywords: climate change, common property resource, community forestry, norms

Procedia PDF Downloads 346
6369 A Linear Programming Approach to Assist Roster Construction Under a Salary Cap

Authors: Alex Contarino

Abstract:

Professional sports leagues often have a "free agency" period, during which teams may sign players with expiring contracts. To promote parity, many leagues operate under a salary cap that limits the amount teams can spend on players' salaries in a given year. Similarly, in fantasy sports leagues, salary cap drafts are a popular method for selecting players. In order to sign a free agent in either setting, teams must bid against one another for the player's services while ensuring that the sum of their players' salaries stays below the salary cap. This paper models the bidding process for a free agent as a constrained optimization problem that can be solved using linear programming. The objective is to determine the largest bid a team should offer, subject to the constraint that the value of signing the player must exceed the value of using the salary cap room elsewhere. Iteratively solving this optimization problem for each available free agent provides teams with an effective framework for maximizing the talent on their rosters. The utility of the approach is demonstrated for team sport roster construction and fantasy sport drafts, using recent data sets from both settings.
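
A minimal sketch of the underlying linear program: choose which free agents to "sign" so that total projected value is maximized under the cap-room constraint. This is the LP relaxation (fractional signings allowed) solved with scipy, with hypothetical player data; the paper's exact formulation of the maximum bid may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical free agents: projected value (wins) and asking salary ($M)
value = np.array([6.1, 4.8, 3.5, 2.9, 2.2, 1.4])
salary = np.array([28.0, 19.0, 14.0, 11.0, 8.0, 5.0])
cap_room = 45.0

# LP relaxation of roster selection: maximise value s.t. salary <= cap room
res = linprog(
    c=-value,                        # linprog minimises, so negate value
    A_ub=salary[None, :],
    b_ub=[cap_room],
    bounds=[(0, 1)] * len(value),    # x_i = fraction of player i "signed"
)
print(np.round(res.x, 2), -res.fun)
```

Forcing each x_i to be 0 or 1 turns this into an integer program; the shadow price of the cap constraint then indicates how much roster value a marginal dollar of cap room buys, which is the quantity any rational maximum bid must beat.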

Keywords: linear programming, optimization, roster management, salary cap

Procedia PDF Downloads 114
6368 Biophysical Features of Glioma-Derived Extracellular Vesicles as Potential Diagnostic Markers

Authors: Abhimanyu Thakur, Youngjin Lee

Abstract:

Glioma is a lethal brain cancer whose early diagnosis and prognosis are limited by the dearth of suitable techniques for its early detection. Current approaches to the diagnosis of this lethal disease, including magnetic resonance imaging (MRI), computed tomography (CT), and invasive biopsy, hold several limitations, demanding an alternative method. Recently, extracellular vesicles (EVs), majorly exosomes and microvesicles (MVs), have been used in numerous biomarker studies; they are found in most cells and biofluids, including blood, cerebrospinal fluid (CSF), and urine. Remarkably, glioma cells (GMs) release a high number of EVs, which have been found to cross the blood-brain barrier (BBB) and to carry the constituents of the parent GMs, including proteins and lncRNA; however, the biophysical properties of EVs have not yet been explored as a biomarker for glioma. We isolated EVs from the cell culture conditioned medium of GMs and of regular primary cultures, and from the blood and urine of wild-type (WT) and glioma mouse models, and characterized them by nanoparticle tracking analysis, transmission electron microscopy, immunogold EM, and dynamic light scattering. Next, we measured the biophysical parameters of GM-derived EVs using atomic force microscopy, and examined the functional constituents of the EVs by FTIR and Raman spectroscopy. Exosomes and MVs derived from GMs, blood, and urine showed distinct biophysical parameters (roughness, adhesion force, and stiffness), different from those of regular primary glial cells, WT blood, and WT urine, which can be attributed to their characteristic functional constituents. Therefore, biophysical features can be potential diagnostic biomarkers for glioma.

Keywords: glioma, extracellular vesicles, exosomes, microvesicles, biophysical properties

Procedia PDF Downloads 145
6367 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments

Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing

Abstract:

Simulation modeling can be used to solve real-world problems and provides an understanding of complex systems. To develop a simplified model of a process simulation, a suitable experimental design is required to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin hypercube sampling (LHS) was proposed in combination with existing central composite design (CCD) samples to improve the performance of the CCD in generating the second-order model of the system; the second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples can help capture surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
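
Generating the supplemental LHS points described above is straightforward with scipy's quasi-Monte Carlo module; the sketch below draws ten Latin hypercube samples over a two-factor design space. The factor bounds are placeholders, not the liquefaction flowsheet's actual ranges.

```python
from scipy.stats import qmc

# Augment an existing CCD with Latin hypercube points over the same
# two-factor design space (bounds are placeholders).
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=10)                 # 10 points in [0, 1]^2
points = qmc.scale(unit, l_bounds=[10.0, -30.0], u_bounds=[60.0, 5.0])
print(points)
```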

Keywords: central composite design, CO2 liquefaction, latin hypercube sampling, simulation-based optimization

Procedia PDF Downloads 170
6366 Using the Simple Fixed Rate Approach to Solve Economic Lot Scheduling Problem under the Basic Period Approach

Authors: Yu-Jen Chang, Yun Chen, Hei-Lam Wong

Abstract:

The Economic Lot Scheduling Problem (ELSP) is a valuable mathematical model that can support decision-makers in making scheduling decisions, and the basic period approach is effective for solving it. The assumption underlying the basic period approach is that a product must be produced at its maximum production rate. However, a product can lower its production rate to reduce the average total cost when the facility has extra idle time. Past research has discussed how a product adjusts its production rate under the common cycle approach; to the best of our knowledge, no studies have addressed how a product lowers its production rate under the basic period approach, and this is the first paper to discuss the topic. The research develops a simple fixed rate approach that adjusts the production rate of a product under the basic period approach to solve the ELSP. Our numerical example shows that this approach can find a better solution than the traditional basic period approach, and our mathematical model applying the fixed rate approach under the basic period approach can serve as a reference for related research.

Keywords: economic lot, basic period, genetic algorithm, fixed rate

Procedia PDF Downloads 566
6365 Reduction of Multiple User Interference for Optical CDMA Systems Using Successive Interference Cancellation Scheme

Authors: Tawfig Eltaif, Hesham A. Bakarman, N. Alsowaidi, M. R. Mokhtar, Malek Harbawi

Abstract:

Commonly, a primary problem in optical code-division multiple access (OCDMA) systems is the multiple user interference (MUI) noise resulting from the overlap among users. In this article, we aim to mitigate this problem by studying an interference cancellation scheme called successive interference cancellation (SIC). The scheme is tested on two different detection schemes, spectral amplitude coding (SAC) and direct detection (DS), using partial modified prime (PMP) codes as the signature codes. It was found that the SIC scheme, based on both the SAC and DS methods, has the potential to suppress intensity noise, that is to say, to mitigate MUI noise. Furthermore, the SIC/DS scheme showed a much lower bit error rate (BER) than the SIC/SAC scheme across different magnitudes of effective power; hence, many more users can be supported by the SIC/DS receiver system.

Keywords: optical code-division multiple access (OCDMA), successive interference cancellation (SIC), multiple user interference (MUI), spectral amplitude coding (SAC), partial modified prime code (PMP)

Procedia PDF Downloads 522
6364 Development of Electronic Waste Management Framework at the College of Engineering, Design, Art and Technology

Authors: Wafula Simon Peter, Kimuli Nabayego Ibtihal, Nabaggala Kimuli Nashua

Abstract:

The worldwide use of information and communications technology (ICT) equipment and other electronic equipment is growing, and consequently there is a growing amount of equipment that becomes waste after its time in use. This growth is expected to accelerate, since equipment lifetimes decrease over time while consumption grows; as a result, e-waste is one of the fastest-growing waste streams globally. The United Nations University (UNU) calculates in its second Global E-waste Monitor that 44.7 million metric tonnes (Mt) of e-waste were generated globally in 2016. This research was carried out to investigate the problem of e-waste and to develop a framework to improve e-waste management at the College of Engineering, Design, Art and Technology (CEDAT). The study population was 80 respondents, from which a sample of 69 was selected using simple and purposive sampling techniques. The overall objective was broken down into specific objectives: establishing the policy and other regulatory frameworks used in e-waste management at CEDAT, determining the effectiveness of e-waste management practices at the college, establishing the critical challenges constraining e-waste management there, and developing a framework for e-waste management. The study reviewed the e-waste regulatory framework used at the college and then collected data, which was used to devise the framework. It established that a weak policy and regulatory framework, a lack of proper infrastructure, improper disposal of e-waste, and a general lack of awareness of e-waste and of the magnitude of the problem are the critical challenges of e-waste management. In conclusion, the policy and regulatory framework should be revised, localized, and strengthened to address the problem in context. Awareness campaigns, the development of proper infrastructure, and extensive research to establish the volumes and magnitude of the problem will come in handy. The study recommends a framework for the improvement of e-waste management.

Keywords: e-waste, treatment, disposal, computers, model, management policy and guidelines

Procedia PDF Downloads 81
6363 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked to a host of neurological disorders such as Alzheimer's, Parkinson's, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging magnetic resonance imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this quantitative susceptibility mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem using regularization, i.e., the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Because of these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640 × 640 × 640 voxels at 0.4 mm), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration, with values randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. The procedure is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations; collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net architecture was used to train data-driven deep learning models to solve for iron concentration from raw MRI measurements. Performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly, and more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 140
6362 Modern Trends in Foreign Direct Investments in Georgia

Authors: Rusudan Kinkladze, Guguli Kurashvili, Ketevan Chitaladze

Abstract:

Foreign direct investment is a driving force in the development of interdependent national economies, and the study and analysis of investments is an urgent problem, particularly for transitional economies such as Georgia. Consequently, the goal of the research is the study and analysis of foreign direct investments in Georgia and the identification and forecasting of modern trends; it covers the period 2006-2015. The study uses the methods of statistical observation, grouping, and analysis, as well as the methods of analytical indicators of time series; trends are identified and predicted values are calculated, drawing on various literary and Internet sources relevant to the research. The findings showed that modern investment policy in Georgia is favorable for domestic as well as foreign investors, although Georgia is still a net importer of investments. In 2015, the top 10 investing countries were led by Azerbaijan, the United Kingdom, and the Netherlands, and the largest share of FDI was allocated to the transport and communication sector; the financial sector was second, followed by the health and social work sector. The same trend is expected to continue in the future.

Keywords: foreign direct investments, methods, statistics, analysis

Procedia PDF Downloads 334
6361 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the probability of rate outage for each user below a given threshold level; such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to handle the transmit optimization problem under imperfect CSI. Here, two convex restriction methods, a decomposition-based large deviation inequality and a Bernstein-type inequality, are used to solve the optimization problem under imperfect CSI. These methods achieve improved output quality with lower complexity and provide a safe, tractable approximation of the original rate outage constraints. Based on implementations of these methods, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: imperfect channel state information, outage probability, multiuser- multi input single output, channel state information

Procedia PDF Downloads 817
6360 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 204