Search results for: geometric inverse source problem
11272 Modelling Vehicle Fuel Consumption Utilising Artificial Neural Networks
Authors: Aydin Azizi, Aburrahman Tanira
Abstract:
The main source of energy used in this modern age is fossil fuels. There is a myriad of problems that come with the use of fossil fuels, of which the issues with the greatest impact are their scarcity and the cost they impose on the planet. Fossil fuels are the only plausible option for many vital functions and processes, the most important of which is transportation. Thus, this source of energy must be used wisely and as efficiently as possible. The aim of this work was to explore the use of mathematical modelling and artificial intelligence techniques to reduce fuel consumption in passenger cars by focusing on the speed at which cars are driven. An artificial neural network with an error of less than 0.05 was developed for practical application in predicting the rate of fuel consumption in vehicles.
Keywords: mathematical modeling, neural networks, fuel consumption, fossil fuel
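The abstract does not publish the network architecture or the driving data, so the following is only a minimal sketch of the idea: a one-hidden-layer network trained by gradient descent on an assumed synthetic speed-vs-fuel-rate curve (U-shaped: consumption rises at very low and very high speeds). All data, sizes and hyperparameters here are illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical data: fuel rate (L/100 km) as a function of speed (km/h).
rng = np.random.default_rng(0)
speed = rng.uniform(10, 130, size=(200, 1))
fuel = 5.0 + 0.002 * (speed - 70.0) ** 2        # assumed U-shaped curve
x = (speed - 70.0) / 60.0                       # normalised input
y = fuel

# Tiny one-hidden-layer tanh network.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
mse0 = float(np.mean((pred0 - y) ** 2))         # error before training

for _ in range(2000):                           # plain gradient descent
    h, pred = forward(x)
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
mse = float(np.mean((pred - y) ** 2))
print(mse0, mse)                                # training error drops sharply
```

A production model would of course use more input features (load, gradient, traffic) and a validated dataset; the sketch only shows the regression setup the abstract describes.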
Procedia PDF Downloads 405
11271 Mathematical Programming Models for Portfolio Optimization Problem: A Review
Authors: Mazura Mokhtar, Adibah Shuib, Daud Mohamad
Abstract:
The portfolio optimization problem has received a lot of attention from both researchers and practitioners over the last six decades. This paper provides an overview of the current state of research in portfolio optimization supported by mathematical programming techniques. It also surveys the solution algorithms for solving portfolio optimization models, classifying them by nature into heuristic and exact methods. To serve these purposes, 40 related articles appearing in international journals from 2003 to 2013 were gathered and analyzed. Based on the literature review, it has been observed that stochastic programming and goal programming constitute the largest share of the mathematical programming techniques employed to tackle the portfolio optimization problem. It is hoped that the paper can serve researchers and practitioners as an easy reference on portfolio optimization.
Keywords: portfolio optimization, mathematical programming, multi-objective programming, solution approaches
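As a concrete anchor for the models the review surveys, here is the classical minimum-variance portfolio that most mathematical programming formulations extend: minimize w'Σw subject to the budget constraint Σw = 1, which has the closed form w* = Σ⁻¹1 / (1'Σ⁻¹1). The three-asset covariance matrix below is an assumed toy example, not data from any surveyed paper.

```python
import numpy as np

# Assumed covariance matrix for three assets (illustrative only).
sigma = np.array([[0.10, 0.02, 0.00],
                  [0.02, 0.08, 0.01],
                  [0.00, 0.01, 0.12]])
ones = np.ones(3)

# Closed-form minimum-variance weights: w* = inv(Sigma) 1 / (1' inv(Sigma) 1)
inv = np.linalg.inv(sigma)
w = inv @ ones / (ones @ inv @ ones)
risk = float(w @ sigma @ w)             # portfolio variance at the optimum
print(np.round(w, 3), round(risk, 4))
```

Goal programming and stochastic programming variants, as the review notes, replace the single variance objective with multiple goals (target return, risk, cardinality) or with scenario-based uncertainty.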
Procedia PDF Downloads 349
11270 Rewriting, Reframing, and Restructuring the Story: A Narrative and Solution Focused Therapy Approach to Family Therapy
Authors: Eman Tadros
Abstract:
Solution Focused Therapy sheds a positive light on a client’s problem(s) by instilling hope, focusing on the connection with the client, and describing the problem in a way that shows change is possible. Solution focused therapists highlight clients’ positive strengths and reframe what clients say, do, or believe into a positive statement, action, or belief. Narrative Therapy focuses on the stories individuals tell about their past, which shape their current and future lives. Changing the language used aids clients in reevaluating their values and views of themselves; this in turn constructs a more positive way of thinking about their story. Both therapies are based on treating each client as an individual with a problem, rather than as a problem in themselves, and on giving power back to the client. The purpose of these ideologies is to open a client to alternative understandings. This paper displays how clinicians can empower their clients and identify their positive strengths and resiliency factors. Narrative and Solution-Focused techniques will be integrated to instill positivity and empowerment in clients. Techniques such as deconstruction, collaboration, complimenting, and miracle/exception/scaling questioning will be analyzed and modeled. Furthermore, bridging Solution Focused Therapy and Narrative Therapy gives a voice to unheard clients.
Keywords: solution focused therapy, narrative therapy, empowerment, resilience
Procedia PDF Downloads 239
11269 Assessment of Vehicular Accidents and Possible Mitigation Measures: A Case of Ahmedabad, Gujarat, India
Abstract:
Rapid urbanization is one consequence of rapid population growth, and it has led to a massive increase in the number of motorized vehicles needed to sustain urban livelihoods. With this increased use of motorized vehicles over time, there has also been an increase in the number of accidents. Studying the road network and its geometric features is essential to tackling road accidents in any district or town. The increase in road accidents is one of the burning issues in present-day society. Records show that there is one death every 3.7 minutes because of a road accident. Research has found that accidents occur due to driver error (86%), followed by bad street conditions (5%), pedestrian error (4%), and technical and maintenance defects (1%). Here, a case study of Ahmedabad, Gujarat is taken up, in which the road safety level is first assessed considering various parameters. The study is confined to the accident characteristics of all types of vehicles. For deeper analysis, a road safety index was computed for various stretches in Ahmedabad, along with the crash rate for the same stretches. Based on various parameters, priorities were set so that the stretches most in need of attention to minimize road accidents are addressed first and the least critical stretches last. The major findings of the study are that accident severity in Ahmedabad has increased while accident fatality risk has decreased; thus there is a need to undertake traffic engineering measures or introduce traffic rules that are strictly enforced. From this study and the literature reviewed, it is found that Ahmedabad suffers from the common problem of accidents and the injuries and deaths they cause; after properly investigating the issue, short-term and long-term solutions to minimize road accidents are presented in this paper.
Keywords: accident severity index, accident fatality rate, accident fatality risk, accident risk, road safety index
Procedia PDF Downloads 142
11268 The Effects of Above-Average Precipitation after Extended Drought on Phytoplankton in Southern California Surface Water Reservoirs
Authors: Margaret K. Spoo-Chupka
Abstract:
The Metropolitan Water District of Southern California (MWDSC) manages surface water reservoirs that are a source of drinking water for more than 19 million people in Southern California. These reservoirs experience periodic planktonic cyanobacteria blooms that can impact water quality. MWDSC imports water from two sources – the Colorado River (CR) and the State Water Project (SWP). The SWP brings supplies from the Sacramento-San Joaquin Delta that are characterized as having higher nutrients than CR water. Above average precipitation in 2017 after five years of drought allowed the majority of the reservoirs to fill. Phytoplankton was analyzed during the drought and after the drought at three reservoirs: Diamond Valley Lake (DVL), which receives SWP water exclusively, Lake Skinner, which can receive a blend of SWP and CR water, and Lake Mathews, which generally receives only CR water. DVL experienced a significant increase in water elevation in 2017 due to large SWP inflows, and there were no significant changes to total phytoplankton biomass, Shannon-Wiener diversity of the phytoplankton, or cyanobacteria biomass in 2017 compared to previous drought years despite the higher nutrient loads. The biomass of cyanobacteria that could potentially impact DVL water quality (Microcystis spp., Aphanizomenon flos-aquae, Dolichospermum spp., and Limnoraphis birgei) did not differ significantly between the heavy precipitation year and drought years. Compared to the other reservoirs, DVL generally has the highest concentration of cyanobacteria due to the water supply having greater nutrients. Lake Mathews’ water levels were similar in drought and wet years due to a reliable supply of CR water and there were no significant changes in the total phytoplankton biomass, phytoplankton diversity, or cyanobacteria biomass in 2017 compared to previous drought years. The biomass of cyanobacteria that could potentially impact water quality at Lake Mathews (L. birgei and Microcystis spp.) 
did not differ significantly between 2017 and previous drought years. Lake Mathews generally had the lowest cyanobacteria biomass because its water supply has lower nutrients. The CR supplied most of the water to Lake Skinner during drought years, while the SWP was the primary source during 2017. This change in water source resulted in a significant increase in phytoplankton biomass in 2017, no significant change in diversity, and a significant increase in cyanobacteria biomass. Cyanobacteria that could potentially impact water quality at Skinner included Microcystis spp., Dolichospermum spp., and A. flos-aquae. There was no significant difference in Microcystis spp. biomass in 2017 compared to previous drought years, but the biomasses of Dolichospermum spp. and A. flos-aquae were significantly greater in 2017 than in previous drought years. Dolichospermum spp. and A. flos-aquae are two cyanobacteria that are more sensitive to nutrients than Microcystis spp., which is more sensitive to temperature. Patterns in problem cyanobacteria abundance among Southern California reservoirs as a result of above-average precipitation after more than five years of drought were most closely related to nutrient loading.
Keywords: drought, reservoirs, cyanobacteria, phytoplankton ecology
Procedia PDF Downloads 286
11267 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range, which can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source has prevented it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner: a piece of sandpaper is introduced as the phase analyzer in front of the X-ray source. This setup, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle-tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the issue that neural networks require large amounts of training data to produce high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
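The abstract does not specify its speckle-tracking algorithm, but the core operation can be sketched: recover the displacement between a reference speckle window and a sample window by maximizing their cross-correlation over candidate shifts. The speckle pattern and displacement below are synthetic assumptions for illustration.

```python
import numpy as np

# Hypothetical speckle pattern and a known shift to recover.
rng = np.random.default_rng(2)
ref = rng.random((64, 64))                  # reference speckle window
dy, dx = 3, -2                              # true displacement (assumed)
sample = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)

# Exhaustive cross-correlation search over a small displacement range.
best, best_shift = -np.inf, None
for sy in range(-5, 6):
    for sx in range(-5, 6):
        shifted = np.roll(np.roll(sample, -sy, axis=0), -sx, axis=1)
        score = float(np.sum(ref * shifted))
        if score > best:
            best, best_shift = score, (sy, sx)
print(best_shift)
```

Real speckle tracking uses sub-pixel interpolation and local windows rather than whole-image circular shifts; this sketch only shows the correlation principle on which the resolution of such a system depends.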
Procedia PDF Downloads 257
11266 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can exactly detect outliers when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered “clean” data from the proposed method can give much better parameter estimation than estimation based on the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
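The low-rank + sparse idea can be illustrated without an SDP solver. The sketch below is an alternating-projection stand-in for the paper's formulation (not its algorithm): truncate to low rank, then flag residuals too large to be explained by the low-rank system data. The data matrix, rank, and threshold are assumed for illustration.

```python
import numpy as np

# Synthetic rank-2 "system data" plus two injected outliers.
rng = np.random.default_rng(1)
u = rng.normal(size=(30, 2))
v = rng.normal(size=(2, 20))
low_rank = 3.0 * (u @ v)
outliers = {(3, 4): 25.0, (17, 11): -30.0}
data = low_rank.copy()
for (i, j), val in outliers.items():
    data[i, j] += val

# Alternate between a rank-2 fit and a sparse residual estimate.
S = np.zeros_like(data)
for _ in range(20):
    U, s, Vt = np.linalg.svd(data - S, full_matrices=False)
    L = (U[:, :2] * s[:2]) @ Vt[:2]          # best rank-2 fit of data - S
    R = data - L
    S = np.where(np.abs(R) > 5.0, R, 0.0)    # entries too large to be "clean"

detected = set(zip(*np.nonzero(S)))
print(sorted(detected))
```

The SDP formulation in the paper replaces this heuristic with a convex program (nuclear norm + l1), which comes with recovery guarantees the alternating scheme lacks.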
Procedia PDF Downloads 307
11265 Assessment of the Root Causes of Marine Debris Problem in Lagos State
Authors: Chibuzo Okoye Daniels, Gillian Glegg, Lynda Rodwell
Abstract:
The continuously growing quantity of very slowly degrading litter deliberately discarded into the coastal waters around Lagos as marine debris is obvious. What is not known is how to tackle this problem so as to reduce its prevalence and its impact on the environment, economy and community. To identify ways of tackling the marine debris problem, two case study areas (Ikoyi and Victoria Islands of Lagos State) were used to assess the root causes, the threat posed by marine debris in the coastal waters around Lagos, and the efficacy of current instruments, programmes and initiatives that address marine debris in the study areas. The following methods were used: (1) self-completed questionnaires for households and businesses within the study areas; (2) semi-structured interviews with key stakeholders; (3) observational studies of waste management, from collection to disposal, and of waste management facilities for waste originating from land and maritime sources; (4) beach surveys and marine debris surveys on shorelines and in ports; and (5) fishing for marine debris. Results of this study identified the following root causes: (1) indiscriminate human activities and behaviours, and a lack of awareness on the part of the main stakeholders and the public of the potential consequences of their actions; (2) poor solid waste management practices; (3) the lack of strict legal frameworks addressing the waste and marine debris problem; and (4) disposal of non-degradable wastes into the domestic sewer system and open street drains. To effectively tackle the marine debris problem in the study areas, adequate, appropriate and cost-effective solutions to the above-mentioned root causes need to be identified and effectively transferred for implementation in the study areas.
Keywords: marine debris problem, Lagos State, litter, coastal waters
Procedia PDF Downloads 380
11264 The Use of Creativity to Nudge Students Into Heutagogy: An Implementation in Graduate Business Education
Authors: Ricardo Bragança, Tom Vinaimont
Abstract:
This paper discusses the introduction of processes of self-determined learning (heutagogy) into a graduate course on financial modeling, using elements of entangled pedagogy and Biggs’ constructive alignment. To encourage learners to take control of their own learning journey and develop critical thinking and problem-solving skills, each session in the course receives tailor-made media-enhanced pedagogical assets. The design of those assets specifically supports entangled pedagogy, which opposes technological or pedagogical determinism in support of the collaborative integration of pedagogy and technology. Media assets for each of the ten sessions in this course consist of three components. The first component in this three-pronged approach is a game-cut-like cinematographic representation that introduces the context of the session. The second component represents a character from an open-source-styled community that encourages self-determined learning. The third component consists of a character, which refers to the in-person instructor and also aligns learning outcomes and assessment tasks, using Biggs’ constructive alignment, to the cinematographic and open-source-styled component. In essence, the course's metamorphosis helps students apply the concepts they've studied to actual financial modeling issues. The audio-visual media assets create a storyline throughout the course based on gamified and real-world applications, thus encouraging student engagement and interaction. The structured entanglement of pedagogy and technology also guides the instructor in the design of the in-class interactions and directs the focus on outcomes and assessments. The transformation process of this graduate course in financial modeling led to an institutional teaching award in 2021. 
The transformation of this course may serve as a model for courses and programs in many disciplines, helping with the integration of intended learning outcomes, constructive alignment, and Assurance of Learning.
Keywords: innovative education, active learning, entangled pedagogy, heutagogy, constructive alignment, project-based learning, financial modeling, graduate business education
Procedia PDF Downloads 72
11263 A Hybrid Model of Goal, Integer and Constraint Programming for Single Machine Scheduling Problem with Sequence Dependent Setup Times: A Case Study in Aerospace Industry
Authors: Didem Can
Abstract:
Scheduling problems are among the most fundamental issues in production systems. Many different approaches and models have been developed according to the production processes of the parts and the main purpose of the problem. In this study, one of the bottleneck stations of a company serving the aerospace industry is analyzed and treated as a single machine scheduling problem with sequence-dependent setup times. The objective of the problem is to assign a large number of similar parts to the same shift (to reduce chemical waste) while minimizing the number of tardy jobs. The goal programming method will be used to pursue these two objectives simultaneously. The assignment of parts to shifts will be expressed using the integer programming method. Finally, the constraint programming method will be used, as it provides a way to find a result in a short time by pruning worse feasible solutions over the defined variable set. The resulting model will be tested and evaluated with real data in the application part.
Keywords: constraint programming, goal programming, integer programming, sequence-dependent setup, single machine scheduling
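The paper's hybrid goal/integer/constraint model uses proprietary data, but the underlying single-machine problem with sequence-dependent setups can be sketched with a simple earliest-due-date (EDD) dispatching baseline. All job data below are assumptions for illustration, not the case study's.

```python
# Hypothetical jobs: processing times, due dates, and a flat changeover setup.
proc = {"A": 4, "B": 3, "C": 5, "D": 2}
due = {"A": 6, "B": 14, "C": 16, "D": 22}
setup = {(i, j): (0 if i == j else 2) for i in "ABCD" for j in "ABCD"}

# EDD baseline: sequence jobs by due date, charge the sequence-dependent
# setup at each changeover, and count how many jobs finish late.
order = sorted(proc, key=lambda j: due[j])
t, prev, tardy = 0, None, 0
for job in order:
    if prev is not None:
        t += setup[(prev, job)]
    t += proc[job]
    tardy += t > due[job]
    prev = job
print(order, t, tardy)
```

EDD ignores the setup structure entirely; the paper's model instead searches sequences where similar parts share a shift, so that setups (and chemical waste) are reduced while tardiness is minimized.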
Procedia PDF Downloads 237
11262 Young Adult Gay Men's Healthcare Access in the Era of the Affordable Care Act
Authors: Marybec Griffin
Abstract:
Purpose: The purpose of this cross-sectional study was to gain a better understanding of healthcare usage and satisfaction among young adult gay men (YAGM), including the facility used as the usual source of healthcare, preference for coordinated healthcare, and whether their primary care provider (PCP) adequately addressed the health needs of gay men. Methods: Interviews were conducted with n=800 YAGM in New York City (NYC). Participants were surveyed about their sociodemographic characteristics and their healthcare usage and satisfaction, which were analyzed using multivariable logistic regression models. The surveys were conducted between November 2015 and June 2016. Results: The mean age of the sample was 24.22 years (SD=4.26). The racial and ethnic background of the participants was as follows: 35.8% (n=286) Black non-Hispanic, 31.9% (n=225) Hispanic/Latino, 20.5% (n=164) White non-Hispanic, 4.4% (n=35) Asian/Pacific Islander, and 6.9% (n=55) reporting some other racial or ethnic background. 31.1% (n=249) of the sample had an income below $14,999. 86.7% (n=694) reported having either public or private health insurance. For the usual source of healthcare, 44.6% (n=357) of the sample reported a private doctor’s office, 16.3% (n=130) reported a community health center, 7.4% (n=59) reported an urgent care facility, and 7.6% (n=61) reported not having a usual source of healthcare. 56.4% (n=451) of the sample indicated a preference for coordinated healthcare. 54% (n=334) of the sample were very satisfied with their healthcare. Findings from the multivariable logistic regression models indicate that participants with higher incomes (AOR=0.54, 95% CI 0.36-0.81, p < 0.01) and participants with a PCP (AOR=0.12, 95% CI 0.07-0.20, p < 0.001) were less likely to use a walk-in facility as their usual source of healthcare.
Results from the second multivariable logistic regression model indicated that participants who had experienced discrimination in a healthcare setting were less likely to prefer coordinated healthcare (AOR=0.63, 95% CI 0.42-0.96, p < 0.05). In the final multivariable logistic model, results indicated that participants who had disclosed their sexual orientation to their PCP (AOR=2.57, 95% CI 1.25-5.21, p < 0.01) and who were comfortable discussing their sexual activity with their PCP (AOR=8.04, 95% CI 4.76-13.58, p < 0.001) were more likely to agree that their PCP adequately addressed the healthcare needs of gay men. Conclusion: Understanding healthcare usage and satisfaction among YAGM is necessary as the healthcare landscape changes, especially given the relatively recent addition of urgent care facilities. The type of healthcare facility used as a usual source of care influences the ability to seek comprehensive and coordinated healthcare services. While coordinated primary and sexual healthcare may be ideal, individual preference for this coordination among YAGM is desired but may be limited by experiences of discrimination in primary care settings.
Keywords: healthcare policy, gay men, healthcare access, Affordable Care Act
Procedia PDF Downloads 239
11261 A Multi-Tenant Problem Oriented Medical Record System for Representing Patient Care Cases using SOAP (Subjective-Objective-Assessment-Plan) Note
Authors: Sabah Mohammed, Jinan Fiaidhi, Darien Sawyer
Abstract:
Describing clinical cases according to a clinical charting standard that enforces interoperability and enables connected care services can save lives in the event of a medical emergency, and can provide efficient and effective interventions for the benefit of patients through the integration of bedside and benchside clinical research. This article presents a multi-tenant extension to the problem-oriented medical record that we prototyped previously using the GraphQL Application Programming Interface to represent the notion of a problem list. Our extension enables physicians and patients to collaboratively describe the patient case via multiple chatbots using the SOAP charting standard. It also connects the described SOAP patient case with the HL7 FHIR (Fast Healthcare Interoperability Resources) medical record, linking the patient case to the bench data.
Keywords: problem-oriented medical record, GraphQL, chatbots, SOAP
Procedia PDF Downloads 91
11260 Invasive Asian Carp Fish Species: A Natural and Sustainable Source of Methionine for Organic Poultry Production
Authors: Komala Arsi, Ann M. Donoghue, Dan J. Donoghue
Abstract:
Methionine is an essential dietary amino acid necessary to promote the growth and health of poultry. Synthetic methionine is commonly used as a supplement in conventional poultry diets and is temporarily allowed in organic poultry feed for lack of natural, organically approved sources of methionine. It has been a challenge to find a natural, sustainable and cost-effective source of methionine, which underlines the pressing need to explore potential methionine alternatives for organic poultry production. Fish have high concentrations of methionine, but wild-caught fish are expensive and adversely impact wild fish populations. The Asian carp (AC) is an invasive species, and its utilization has the potential to provide a natural methionine source. However, to the best of our knowledge, there is no proven technology to utilize this fish as a methionine source. In this study, we co-extruded Asian carp and soybean meal to form a dry-extruded, methionine-rich AC meal. In order to formulate rations with the novel extruded carp meal, the product was tested on cecectomized roosters for its amino acid digestibility and total metabolizable energy (TMEn). Excreta were collected, and their gross energy and protein content were determined to calculate the total metabolizable energy (TME). The methionine content, digestibility and TME values were greater for the extruded AC meal than for the control diets. The carp meal was subsequently tested as a methionine source in feeds formulated for broilers, and production performance (body weight gain and feed conversion ratio) was assessed in comparison with broilers fed standard commercial diets supplemented with synthetic methionine. In this study, broiler chickens were fed either a control diet with synthetic methionine or a treatment diet with extruded AC meal (8 replicates/treatment; n=30 birds/replicate) from day 1 to 42 days of age.
At the end of the trial, data on body weights, feed intake and feed conversion ratio (FCR) were analyzed using one-way ANOVA with Fisher’s LSD test for multiple comparisons. Results revealed that birds on the AC diet had body weight gains and feed intake comparable to those on diets containing synthetic methionine (P > 0.05). Results from the study suggest that invasive AC-derived fish meal could potentially be an effective and inexpensive source of sustainable natural methionine for organic poultry farmers.
Keywords: Asian carp, methionine, organic, poultry
Procedia PDF Downloads 158
11259 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications
Authors: Xianwei Zheng, Yuan Yan Tang
Abstract:
The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using the translation and modulation operators of graph signals, following calculations similar to those of the classical windowed Fourier transform. The translation and modulation operators of graph signals are defined using the Laplacian eigenvectors as follows. For a given graph signal, translation is defined in a manner similar to its definition in classical signal processing: where the classical translation operator is defined using the Fourier atoms, graph signal translation is defined analogously using the Laplacian eigenvectors. The modulation of a graph signal can also be established using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the modulation operator is defined analogously to classical modulation by multiplying a graph signal with the entries of each Fourier atom. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and with it another time-frequency framework for graph signals is constructed.
Specifically, the relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed: necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g. the Minnesota road graph, and on random graphs. Experimental results show that the novel framework captures new features of graph signals.
Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis
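The proposed modulation is concrete enough to sketch numerically. Under our reading of the abstract, and with an assumed circular shift of the spectral indices standing in for the translation of the graph Fourier coefficients, modulating a graph signal means: take its graph Fourier transform (GFT) in the Laplacian eigenbasis, translate the spectrum, and transform back. The path graph and test signal are illustrative assumptions.

```python
import numpy as np

# Laplacian of a path graph on n vertices.
n = 8
L = np.diag([1] + [2] * (n - 2) + [1]).astype(float)
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
lam, U = np.linalg.eigh(L)          # GFT basis: Laplacian eigenvectors

f = np.sin(np.linspace(0, np.pi, n))        # a sample graph signal
f_hat = U.T @ f                             # graph Fourier transform
k = 2                                       # modulation index (assumed)
f_hat_shift = np.roll(f_hat, k)             # translate the spectrum by k
f_mod = U @ f_hat_shift                     # inverse GFT -> modulated signal
print(np.round(f_mod, 3))
```

Because U is orthonormal and the spectral shift is a permutation, this modulation preserves the signal's norm, mirroring the classical identity that modulation is a pure frequency shift.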
Procedia PDF Downloads 342
11258 The Use of Water Hyacinth for Bioenergy Electric Generation: For the case of Tana Water Hyacinth
Authors: Seada Hussen Adem, Frie Ayalew Yimam
Abstract:
Due to its high biomass output and potential to produce renewable energy, water hyacinth, a rapidly expanding aquatic weed, has gained recognition as a prospective bioenergy feedstock. This study proposes using water hyacinth to generate energy through a variety of conversion processes, such as anaerobic digestion, combustion, and gasification. Beyond offering an alternative source of energy, the suggested strategy helps reduce the nuisance caused by the excessive growth of water hyacinth in the Tana water bodies. The study emphasizes the potential of water hyacinth as a source of bioenergy, as well as the value of environmentally friendly methods for managing the Tana water resources.
Keywords: anaerobic digestion, bioenergy, combustion, gasification, water hyacinth
Procedia PDF Downloads 67
11257 A Collaborative Problem Driven Approach to Design an HR Analytics Application
Authors: L. Atif, C. Rosenthal-Sabroux, M. Grundstein
Abstract:
The requirements engineering process is a crucial phase in the design of complex systems. The purpose of our research is to present a collaborative problem-driven requirements engineering approach that aims at improving the design of a Decision Support System as an Analytics application. This approach has been adopted to design a Human Resource management DSS. The Requirements Engineering process is presented as a series of guidelines for activities that must be implemented to assure that the final product satisfies end-users requirements and takes into account the limitations identified. For this, we know that a well-posed statement of the problem is “a problem whose crucial character arises from collectively produced estimation and a formulation found to be acceptable by all the parties”. Moreover, we know that DSSs were developed to help decision-makers solve their unstructured problems. So, we thus base our research off of the assumption that developing DSS, particularly for helping poorly structured or unstructured decisions, cannot be done without considering end-user decision problems, how to represent them collectively, decisions content, their meaning, and the decision-making process; thus, arise the field issues in a multidisciplinary perspective. Our approach addresses a problem-driven and collaborative approach to designing DSS technologies: It will reflect common end-user problems in the upstream design phase and in the downstream phase these problems will determine the design choices and potential technical solution. We will thus rely on a categorization of HR’s problems for a development mirroring the Analytics solution. This brings out a new data-driven DSS typology: Descriptive Analytics, Explicative or Diagnostic Analytics, Predictive Analytics, Prescriptive Analytics. 
In our research, identifying the problem takes place together with the design of the solution, so we have to carry out significant transformations of the representations associated with the HR Analytics application in order to build an increasingly detailed representation of the goal to be achieved. Here, collective cognition is reflected in the establishment of transfer functions of representations throughout the design process.
Keywords: DSS, collaborative design, problem-driven requirements, analytics application, HR decision making
Procedia PDF Downloads 295
11256 A Robust Optimization for Multi-Period Lost-Sales Inventory Control Problem
Authors: Shunichi Ohmori, Sirawadee Arunyanart, Kazuho Yoshimoto
Abstract:
We consider a periodic-review inventory control problem of minimizing production cost, inventory cost, and lost sales under demand uncertainty, in which product demands are not specified exactly and are only known to belong to a given uncertainty set, yet the constraints must hold for all possible values of the data from the uncertainty set. We propose a robust optimization formulation for obtaining the lowest cost possible while guaranteeing feasibility with respect to the range of order quantities and inventory levels under demand uncertainty. Our formulation is based on the adaptive robust counterpart, which supposes the order quantity to be an affine function of past demands. We derive the certainty-equivalent problem via second-order cone programming, which gives a 'not too pessimistic' worst case.
Keywords: robust optimization, inventory control, supply chain management, second-order cone programming
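The abstract's adaptive robust counterpart is derived via second-order cone programming; as a minimal, hypothetical sketch of the underlying idea (not the paper's formulation), the snippet below evaluates an affine ordering rule q_t = q0[t] + Σ_s β[t][s]·d[s] over the vertices of a box uncertainty set — the holding/lost-sales cost is convex piecewise-linear in demand, so the worst case over a box is attained at a vertex. All cost parameters and data are illustrative assumptions.

```python
import itertools

def worst_case_cost(q0, beta, d_lo, d_hi, c_prod=1.0, c_hold=0.5, c_lost=4.0):
    """Worst-case cost of an affine ordering policy q_t = q0[t] + sum_s
    beta[t][s]*d[s] (s < t), evaluated at every vertex of the demand box
    [d_lo, d_hi]^T. Unmet demand is lost (not backlogged)."""
    T = len(q0)
    worst = float("-inf")
    for d in itertools.product(*[(d_lo[t], d_hi[t]) for t in range(T)]):
        inv, cost = 0.0, 0.0
        for t in range(T):
            q = q0[t] + sum(beta[t][s] * d[s] for s in range(t))
            cost += c_prod * q
            inv += q - d[t]
            if inv >= 0:
                cost += c_hold * inv
            else:
                cost += c_lost * (-inv)  # lost-sales penalty
                inv = 0.0
        worst = max(worst, cost)
    return worst
```

On a two-period toy instance, a demand-reactive affine policy lowers the worst-case cost relative to a static plan, which is the motivation for the adaptive robust counterpart.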
Procedia PDF Downloads 409
11255 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks
Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun
Abstract:
Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires the use of efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum, and hence to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation relies on a multilayer approach where the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic
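The paper's heuristic (greedy plus post-treatment) is not reproduced here, but the classic greedy baseline it builds on can be sketched: first-fit wavelength assignment under the wavelength-continuity constraint, where each lightpath takes the lowest-index wavelength free on every link of its route, packing low wavelengths and leaving higher ones contiguous. Routes and names below are illustrative assumptions.

```python
def first_fit_rwa(routes, n_wavelengths):
    """Greedy first-fit wavelength assignment under wavelength continuity:
    a lightpath must use the same wavelength on every link of its route.
    routes: list of lightpath routes, each a list of link identifiers.
    Returns the wavelength index per request, or None if blocked."""
    used = set()        # occupied (link, wavelength) pairs
    assignment = []
    for route in routes:
        chosen = None
        for w in range(n_wavelengths):
            if all((link, w) not in used for link in route):
                chosen = w  # lowest free wavelength along the whole route
                break
        if chosen is not None:
            used.update((link, chosen) for link in route)
        assignment.append(chosen)
    return assignment
```

With two wavelengths and four requests sharing two links, the fourth request is blocked — exactly the fragmentation-induced refusal the abstract describes.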
Procedia PDF Downloads 527
11254 A Genetic Algorithm Approach to Solve a Weaving Job Scheduling Problem, Aiming Tardiness Minimization
Authors: Carolina Silva, João Nuno Oliveira, Rui Sousa, João Paulo Silva
Abstract:
This study uses genetic algorithms to solve a job scheduling problem in a weaving factory. The underlying problem is NP-hard, concerning unrelated parallel machines with sequence-dependent setup times. This research uses real data from a weaving industry located in the north of Portugal, with a capacity of 96 looms and a production, on average, of 440,000 meters of fabric per month. Moreover, this study involves a high level of complexity, since most of the real production constraints are applied and several real data instances are tested. Topics such as data analysis and algorithm performance are addressed and tested, to offer a solution that can generate reliable, due-date-compliant results. All the approaches will be tested in the operational environment and the KPIs monitored, to understand the solution's impact on production, with a particular focus on the total number of weeks of late deliveries to clients. Thus, the main goal of this research is to develop a solution that allows the automatic generation of optimized production plans, aiming at tardiness minimization.
Keywords: genetic algorithms, textile industry, job scheduling, optimization
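The paper's encoding and operators are not given in the abstract; as a hypothetical, minimal sketch of a GA for tardiness on unrelated parallel machines (setup times omitted for brevity), the snippet below evolves machine-assignment chromosomes with tournament selection, one-point crossover, point mutation and elitism, sequencing each machine by earliest due date. All instance data are illustrative.

```python
import random

def total_tardiness(assign, p, due):
    """Total tardiness; jobs on each machine run in EDD order.
    p[j][m]: processing time of job j on machine m."""
    total = 0
    for m in range(len(p[0])):
        jobs = sorted((j for j in range(len(assign)) if assign[j] == m),
                      key=lambda j: due[j])
        t = 0
        for j in jobs:
            t += p[j][m]
            total += max(0, t - due[j])
    return total

def ga_schedule(p, due, pop_size=30, gens=60, seed=1):
    """Tiny GA over machine assignments. The population is seeded with the
    trivial single-machine plan, so (with elitism) the result can never be
    worse than that baseline."""
    rng = random.Random(seed)
    n, m = len(p), len(p[0])
    fit = lambda c: total_tardiness(c, p, due)
    pop = [[0] * n] + [[rng.randrange(m) for _ in range(n)]
                       for _ in range(pop_size - 1)]
    best = min(pop, key=fit)
    for _ in range(gens):
        new = [best[:]]                              # elitism
        while len(new) < pop_size:
            a = min(rng.sample(pop, 3), key=fit)     # tournament selection
            b = min(rng.sample(pop, 3), key=fit)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            new.append(child)
        pop = new
        best = min(pop + [best], key=fit)
    return best, fit(best)
```

A real implementation would replace EDD sequencing with evolved sequences and add the sequence-dependent setup times the paper handles.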
Procedia PDF Downloads 157
11253 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs
Authors: Verónica Díaz
Abstract:
A Cartesian graph, as a mathematical object, becomes a tool for the configuration of change. It is best understood through everyday-life problem solving associated with its representation. Despite this, the current educational framework favors general graphs, without consideration of their argumentation. Students are required to find the mathematical function without associating it with the development of graphical language. This research describes the use students make of configurations drawn prior to Cartesian graphs with regard to an everyday-life problem related to a time and distance variation phenomenon. The theoretical framework describes the conditions of study of the function and its modeling. This is a qualitative, descriptive study involving six undergraduate case studies carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second level describes the degree of iconicity and referentiality of an image. According to the results, students drew no figures before the Cartesian graph, highlighting the need for students to represent the context and the movement that causes the change in the phenomenon. From this, they produced Cartesian graphs representing changes in position and therefore achieved a global view of the graph. However, the local view only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not enable us to identify what happens on the graph when the movement characteristics change based on possible paths in the person's walking speed.
Keywords: cartesian graphs, higher education, movement modeling, problem solving
Procedia PDF Downloads 218
11252 Applications of Probabilistic Interpolation via Orthogonal Matrices
Authors: Dariusz Jacek Jakóbczak
Abstract:
Mathematics and computer science are interested in methods of 2D curve interpolation and extrapolation using a set of key points (knots). The proposed method of Hurwitz-Radon Matrices (MHR) is such a method. This novel method is based on the family of Hurwitz-Radon (HR) matrices, which possess columns composed of orthogonal vectors. A two-dimensional curve is interpolated via different functions used as probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithm, exponent, arcsin, arccos, arctan, arccot, or power function, as well as inverse functions. It is shown how to build the orthogonal matrix operator and how to use it in the process of curve reconstruction.
Keywords: 2D data interpolation, hurwitz-radon matrices, MHR method, probabilistic modeling, curve extrapolation
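The construction of the orthogonal MHR operator itself is the paper's contribution and is not reproduced here; the sketch below only illustrates the probabilistic ingredient the abstract lists: blending between consecutive knots with a probability distribution function alpha: [0,1] -> [0,1] (here a sine-based choice, one of the families named above) in place of the linear parameter. The function name and interface are illustrative assumptions.

```python
import math

def probabilistic_interp(knots, x, alpha=lambda t: math.sin(t * math.pi / 2) ** 2):
    """Interpolate y(x) between consecutive knots (x_i, y_i) using a
    probability-distribution function alpha with alpha(0)=0 and alpha(1)=1,
    instead of the linear parameter t. Different alpha choices (polynomial,
    sine, power, ...) yield different reconstructed curves."""
    pts = sorted(knots)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            a = alpha(t)
            return (1 - a) * y0 + a * y1
    raise ValueError("x outside knot range")
```

With the sine-squared distribution the midpoint of the segment (0,0)-(2,4) still maps to 2, but points off the midpoint are pulled toward the nearer knot, unlike linear interpolation.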
Procedia PDF Downloads 525
11251 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous system of a multicopter-based UAV swarm in 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its compact size and ease of interfacing with the UAV's onboard electronics, enabling high-resolution gamma spectroscopy and the characterization of radiation hazards. The multicopter platform with a fully autonomous flight feature is suitable for low-altitude applications such as radiation-contaminated sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries the CZT sensor; the real-time radiation data are used to calculate a bulk heading vector for the swarm, achieving the swarm's source-seeking behavior. A spinning formation is also studied in both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial system (UAV), source localization, UAV swarm, tetrahedron formation
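The abstract stresses that gradient estimation drives the swarm's heading; one standard way to do this from four tetrahedron readings (a hypothetical sketch, not the paper's estimator) is to solve the three difference equations (p_i - p_0) · g = c_i - c_0 for the local gradient g of the count-rate field. For a tetrahedron the 3x3 system is square, solved below by Cramer's rule with no external dependencies.

```python
def estimate_gradient(positions, counts):
    """Gradient of a scalar field from 4 readings at tetrahedron vertices:
    exact solve of (p_i - p_0) . g = c_i - c_0, i = 1..3. The swarm's
    source-seeking heading is along +g (toward increasing counts)."""
    p0, c0 = positions[0], counts[0]
    A = [[positions[i][k] - p0[k] for k in range(3)] for i in (1, 2, 3)]
    b = [counts[i] - c0 for i in (1, 2, 3)]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    d = det3(A)  # nonzero for a non-degenerate tetrahedron
    g = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        g.append(det3(Ak) / d)
    return g
```

For a locally linear field the gradient is recovered exactly; real count rates are noisy, which is one motivation for the spinning formation mentioned above.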
Procedia PDF Downloads 99
11250 Memetic Algorithm for Solving the One-To-One Shortest Path Problem
Authors: Omar Dib, Alexandre Caminada, Marie-Ange Manier
Abstract:
The purpose of this study is to introduce a novel approach to solve the one-to-one shortest path problem. A directed connected graph is assumed, in which all edge weights are positive. Our method is based on a memetic algorithm that combines a genetic algorithm (GA) and a variable neighborhood search method (VNS). We compare our approximate method with two exact algorithms, Dijkstra and Integer Programming (IP). We experimented with randomly generated, complete, and real graph instances. In most case studies, numerical results show that our method runs far faster than the exact methods while keeping an average gap to optimality of 5%. Our algorithm is on average 20 times faster than Dijkstra and more than 1000 times faster than IP. The details of the experimental results are also discussed and presented in the paper.
Keywords: shortest path problem, Dijkstra's algorithm, integer programming, memetic algorithm
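The exact baseline the study compares against is standard and can be sketched directly; below is a minimal Dijkstra for the one-to-one case (it stops as soon as the target is settled), assuming positive edge weights as stated above. Graph representation and names are illustrative.

```python
import heapq

def dijkstra(graph, source, target):
    """One-to-one shortest path by Dijkstra's algorithm with early exit at
    the target. graph: {node: [(neighbor, weight), ...]} with weights > 0.
    Returns (distance, path)."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [], target
    while u != source:
        path.append(u)
        u = prev[u]
    path.append(source)
    return dist[target], path[::-1]
```

The memetic GA+VNS method trades the exactness of this baseline for speed on large instances, per the figures quoted above.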
Procedia PDF Downloads 467
11249 Mathematical Modeling and Algorithms for the Capacitated Facility Location and Allocation Problem with Emission Restriction
Authors: Sagar Hedaoo, Fazle Baki, Ahmed Azab
Abstract:
In supply chain management, network design for scalable manufacturing facilities is an emerging field of research. Facility location-allocation assigns facilities to customers so as to optimize the overall cost of the supply chain. To further optimize costs, the capacities of these facilities can be changed in accordance with customer demands. A mathematical model is formulated to fully express the problem at hand and to solve small- to mid-range instances. A dedicated constraint has been developed to restrict emissions in line with the Kyoto Protocol. This problem is NP-hard; hence, a simulated annealing metaheuristic has been developed to solve larger instances. A case study on the USA-Canada border crossing is used.
Keywords: emission, mixed integer linear programming, metaheuristic, simulated annealing
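The paper's metaheuristic is not detailed in the abstract; as a hypothetical, minimal sketch of simulated annealing on the allocation side of the problem, the snippet below anneals a customer-to-facility assignment with a load-preserving swap neighborhood, penalizing capacity overruns in the objective. Parameters, penalty weight and data are illustrative assumptions (and the emission constraint is omitted for brevity).

```python
import math
import random

def anneal_assignment(cost, capacity, demand, init, seed=0,
                      t0=5.0, cooling=0.995, iters=4000):
    """Simulated annealing for capacitated facility assignment.
    cost[i][j]: cost of serving customer i from facility j. Neighborhood:
    swap the facilities of two customers (keeps facility loads unchanged,
    so a feasible init stays feasible); overruns are penalized anyway."""
    rng = random.Random(seed)
    n, m = len(cost), len(cost[0])

    def total(a):
        c = sum(cost[i][a[i]] for i in range(n))
        load = [0.0] * m
        for i in range(n):
            load[a[i]] += demand[i]
        return c + sum(1000.0 * max(0.0, load[j] - capacity[j])
                       for j in range(m))

    cur, cur_c = init[:], total(init)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)
        if cur[i] != cur[j]:
            cur[i], cur[j] = cur[j], cur[i]
            new_c = total(cur)
            # accept improvements always, worsenings with Boltzmann prob.
            if new_c <= cur_c or rng.random() < math.exp((cur_c - new_c) / t):
                cur_c = new_c
                if cur_c < best_c:
                    best, best_c = cur[:], cur_c
            else:
                cur[i], cur[j] = cur[j], cur[i]  # undo rejected swap
        t *= cooling
    return best, best_c
```

The full problem also anneals facility opening and capacity decisions; this sketch covers only the allocation move.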
Procedia PDF Downloads 309
11248 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energizing the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, one can acquire data using one of the configurations with a large loop source, namely, with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity associated with the expressions of the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of the permafrost layer. Because a proper analytical expression for the TEM response over a layered earth model does not exist for the large loop TEM system, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
Accordingly, the forward calculation scheme in NLSTCI was modified with EMLCLLER to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine transform
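The frequency-to-time conversion step can be illustrated numerically (this is a generic sketch of a Fourier cosine transform, not the EMLCLLER/NLSTCI code): f(t) = (2/π) ∫₀^∞ F(ω) cos(ωt) dω, approximated with the trapezoidal rule on a truncated frequency range. Verified against the known pair F(ω) = 1/(a² + ω²) ↔ f(t) = e⁻ᵃᵗ/a.

```python
import math

def cosine_transform(F_real, t, w_max=200.0, n=20000):
    """Numerical Fourier cosine transform taking a real frequency-domain
    response F(w) to the time domain:
        f(t) = (2/pi) * integral_0^inf F(w) cos(w t) dw,
    approximated by the trapezoidal rule on [0, w_max] with n panels."""
    h = w_max / n
    s = 0.5 * (F_real(0.0) + F_real(w_max) * math.cos(w_max * t))
    for k in range(1, n):
        w = k * h
        s += F_real(w) * math.cos(w * t)
    return (2.0 / math.pi) * h * s
```

In practice TEM codes use specialized digital filters rather than brute-force quadrature, because the transient must be evaluated over many decades of time; the sketch only shows the transform being approximated.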
Procedia PDF Downloads 92
11247 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, we have seen the increasing importance of research on knowledge sources, decision support systems, data mining, and the process of knowledge discovery in databases, and each of these aspects affects the others. In this article, we merge an information source and a knowledge source to propose a knowledge-based system, within the limits of management based on the storage and retrieval of knowledge, to manage information and improve decision-making and resources. We use data mining and the Apriori algorithm in the knowledge discovery process. One of the problems of the Apriori algorithm is that the user must specify the minimum support threshold for the rules. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. Clearly, the user does not have the necessary knowledge of all existing transactions in that database, and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to put the data into different clusters before applying the Apriori algorithm to the data in the database, and we also try to suggest the most suitable threshold to the user automatically.
Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
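The classic Apriori algorithm being improved here works level-wise: count 1-itemsets, keep those meeting the minimum support, join survivors into larger candidates, and repeat — every subset of a frequent itemset must itself be frequent. A minimal sketch (the fuzzy clustering and automatic threshold suggestion are the paper's extensions and are not shown):

```python
def apriori(transactions, min_support):
    """Classic Apriori frequent-itemset mining with level-wise candidate
    generation and support-based pruning. min_support is a fraction of the
    total number of transactions."""
    n = len(transactions)
    baskets = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for b in baskets if itemset <= b) / n

    items = {i for b in baskets for i in b}
    current = [frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support]
    frequent = {s: support(s) for s in current}
    k = 2
    while current:
        # join frequent (k-1)-itemsets into k-itemset candidates
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        current = [c for c in candidates if support(c) >= min_support]
        frequent.update((c, support(c)) for c in current)
        k += 1
    return frequent
```

The threshold sensitivity is visible even on a toy run: at min_support 0.6 all pairs survive but no triple does, and a slightly different threshold changes the output set entirely — which is the motivation for suggesting the threshold automatically.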
Procedia PDF Downloads 335
11246 Asymmetries in Monetary Policy Response: The Role of Uncertainty in the Case of Nigeria
Authors: Elias Udeaja, Elijah Udoh
Abstract:
Exploring an extended SVAR model (SVAR-X), we use the case of Nigeria to hypothesize the role of uncertainty as the underlying source of asymmetries in the response of monetary policy to output and inflation. The empirical finding is that monetary policy is potentially more sensitive to shocks due to output growth than to shocks due to inflation in recession periods, while the reverse appears to be the case for a contractionary monetary policy. We also find that the asymmetric preference in the response of monetary policy to changes in output and inflation is relatively more pronounced when we control for uncertainty as the underlying source of asymmetries.
Keywords: asymmetry response, developing economies, monetary policy shocks, uncertainty
Procedia PDF Downloads 144
11245 Commercial Law Between Custom and Islamic Law
Authors: Shimaa Abdel-Rahman Amin El-Badawy
Abstract:
Commercial law is the set of legal rules that applies to merchants and regulates commercial activity. This means that commercial law regulates only those relations that arise from carrying out certain businesses, as it governs the activity of a specific class, the class of merchants. Like other branches of law, commercial law has characteristics that distinguish it from other laws, and it draws on various sources: the objective or material source, the historical source, the official source, and the interpretative source. We confine ourselves to the official and interpretative sources: what are these sources, and what is their weight and force in deciding commercial disputes? The first topic concerns the characteristics of commercial law. Commercial law has become indispensable to the worlds of trade and economics, given the reasons for which its rules were laid down for the commercial field. Indeed, it suffices to contrast the stability of the civil environment with the movement and speed of the commercial environment, in addition to the trust and credit the latter requires; the characteristics of speed and of trust and credit are what justify the existence of commercial law. Business is fast, while civil dealings are slow and stable. A person concludes civil transactions only rarely in his life, and before any civil act he needs a period of reflection and scrutiny, as does a person who wishes to acquire a house to live in with his family: he must search, investigate, and discuss the price before concluding a purchase contract.
In the commercial field, by contrast, transactions take place very quickly, because the time factor plays an important role in concluding deals and achieving profits: delay in concluding a specific deal can cause the merchant a loss, since commercial law is tied to the fluctuations of the economy and the market. A merchant may also conclude more than one deal in a single, short period, owing to the freedom of commercial law from the formalities and procedures that hinder commercial transactions.
Keywords: law, commercial law, Islamic law, custom and Islamic law
Procedia PDF Downloads 73
11244 Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia
Authors: Belhaj Mohamed, M. Saidi, N. Boucherab, N. Ouertani, I. Bouazizi, M. Ben Jrad
Abstract:
Natural hydrocarbon seepage has helped petroleum exploration as a direct indicator of gas and/or oil subsurface accumulations. Surface macro-seeps are generally an indication of a fault in an active Petroleum Seepage System belonging to a Total Petroleum System. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the analyses of water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, Compound Specific Isotope Analysis, and Rock-Eval pyrolysis. The objective of the study was to confirm the presence of gasoline and other petroleum products or other volatile organic pollutants in those samples in order to assess the respective implication of each of the potentially responsible parties in the contamination of the aquifer. In addition, the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps have been investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by high CH4 content, very low δ13C-CH4 values (-71.9‰), high C1/C1-5 ratios (0.95-1.0), light deuterium-hydrogen isotope ratios (-198‰), and light δ13C-C2 and δ13C-CO2 values (-23.8‰ and -23.8‰, respectively), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples. This study includes light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS.
The studied samples show at least two distinct families, suggesting two different origins of the crude oils: the first oil seep family appears to be highly mature, shows evidence of chemical and/or biological degradation, and was derived from a clay-rich source rock deposited under suboxic conditions; it has been sourced mainly by the lower Fahdene (Albian) source rocks. The second family was derived from a carbonate-rich source rock deposited under anoxic conditions and correlates well with the Bahloul (Cenomanian-Turonian) source rock.
Keywords: biomarkers, oil and gas seeps, organic geochemistry, source rock
Procedia PDF Downloads 443
11243 A Polynomial Approach for a Graphical-based Integrated Production and Transport Scheduling with Capacity Restrictions
Authors: M. Ndeley
Abstract:
The performance of global manufacturing supply chains depends on the interaction of production and transport processes. Currently, the scheduling of these processes is done separately, without considering mutual requirements, which leads to suboptimal solutions. An integrated scheduling of both processes enables the improvement of supply chain performance. The integrated production and transport scheduling problem (PTSP) is NP-hard, so heuristic methods are necessary to efficiently solve large problem instances, as in the case of global manufacturing supply chains. This paper presents a heuristic scheduling approach which handles the integration of flexible production processes with intermodal transport, incorporating flexible land transport. The method is based on a graph that allows a reformulation of the PTSP as a shortest path problem for each job, which can be solved in polynomial time. The proposed method is applied to a supply chain scenario with a manufacturing facility in South Africa and shipments of finished product to customers within the country. The obtained results show that the approach is suitable for the scheduling of large-scale problems and can be flexibly adapted to different scenarios.
Keywords: production and transport scheduling problem, graph based scheduling, integrated scheduling
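The per-job shortest-path reformulation can be illustrated on a layered graph (a hypothetical sketch, not the paper's construction): each layer holds one stage's options (e.g. production line, transport mode, customer), arc costs mix production and transport cost, and dynamic programming in layer order finds the cheapest plan in polynomial time. All node names and costs below are illustrative; node names are assumed unique across layers.

```python
def cheapest_plan(layers, arc_cost):
    """Per-job shortest path through a layered DAG of scheduling options,
    solved by DP in layer order. layers: list of option lists per stage;
    arc_cost: {(u, v): cost} for consecutive-layer pairs."""
    dist = {node: 0.0 for node in layers[0]}
    prev = {}
    for a, b in zip(layers, layers[1:]):
        nxt = {}
        for v in b:
            # best predecessor in the previous layer
            best_u = min(a, key=lambda u: dist[u] + arc_cost[(u, v)])
            nxt[v] = dist[best_u] + arc_cost[(best_u, v)]
            prev[v] = best_u
        dist = nxt
    end = min(dist, key=dist.get)
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return dist[end], path[::-1]
```

In the actual approach the graph also encodes capacity restrictions and timing, but the per-job solve stays a polynomial shortest-path computation, which is what makes the heuristic scale.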
Procedia PDF Downloads 474