Search results for: robust optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4430

1490 Land Suitability Analysis for Maize Production in Egbeda Local Government Area of Oyo State Using GIS Techniques

Authors: Abegunde Linda, Adedeji Oluwatayo, Tope-Ajayi Opeyemi

Abstract:

Maize constitutes a major agrarian product consumed by a vast population, yet despite its economic importance it has not been produced in quantities that meet the country's economic needs. Achieving optimum maize yield can be meaningfully supported by land suitability analysis, which helps guarantee self-sufficiency and future production optimization. This study examines land suitability for maize production by analyzing the spatial variation of physico-chemical soil properties within a Geographic Information System (GIS) framework. The physico-chemical parameters selected include slope, land use, and the physical and chemical properties of the soil. Landsat imagery was used to categorize land use, Shuttle Radar Topography Mission (SRTM) data generated the slope, and soil samples were analyzed for their physical and chemical components. Suitability was categorized as highly, moderately, or marginally suitable according to the Food and Agriculture Organisation (FAO) classification, using the Analytical Hierarchy Process (AHP) technique in GIS. The results can be used by small-scale farmers for efficient decision-making in the allocation of land for maize production.
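The AHP weighting and weighted-overlay classification described above can be sketched as follows. This is a minimal illustration: the pairwise comparison matrix, the three criteria, and the class thresholds are hypothetical stand-ins, not the study's actual judgments.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty scale) for the criteria
# slope, land use, soil fertility; the study's actual judgments may differ.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

def ahp_weights(A):
    """Approximate AHP priority weights: normalize columns, average rows."""
    col_norm = A / A.sum(axis=0)
    return col_norm.mean(axis=1)

def classify(score):
    """Map a weighted-overlay score in [0, 1] to an FAO-style suitability class."""
    if score >= 0.75:
        return "highly suitable"
    if score >= 0.5:
        return "moderately suitable"
    return "marginally suitable"

w = ahp_weights(A)
# One raster cell's normalized criterion scores (slope, land use, soil fertility)
cell = np.array([0.9, 0.6, 0.8])
suitability = float(w @ cell)
print(classify(suitability))
```

In a full GIS workflow the same weighted sum is applied per raster cell across the study area.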

Keywords: AHP, GIS, MCE, suitability, Zea mays

Procedia PDF Downloads 382
1489 A Systematic Review of the Methodological and Reporting Quality of Case Series in Surgery

Authors: Riaz A. Agha, Alexander J. Fowler, Seon-Young Lee, Buket Gundogan, Katharine Whitehurst, Harkiran K. Sagoo, Kyung Jin Lee Jeong, Douglas G. Altman, Dennis P. Orgill

Abstract:

Introduction: Case series are an important and common study type. Currently, no guideline exists for reporting case series, and there is evidence of key data being missed from such reports. We propose to develop a reporting guideline for case series using a methodologically robust technique. The first step in this process is a systematic review of literature relevant to the reporting deficiencies of case series. Methods: A systematic review of methodological and reporting quality in surgical case series was performed. The electronic search strategy was developed by an information specialist and included MEDLINE, EMBASE, the Cochrane Methods Register, Science Citation Index, and Conference Proceedings Citation Index, from the start of indexing until 5th November 2014. Independent screening, eligibility assessment, and data extraction were performed. Included articles were analyzed for five areas of deficiency: failure to use standardized definitions; missing or selective data; transparency or incomplete reporting; whether alternate study designs were considered; and other issues. Results: The database search identified 2,205 records. Through screening and eligibility assessment, 92 articles met the inclusion criteria. The frequencies of methodological and reporting issues identified were: failure to use standardized definitions (57%), missing or selective data (66%), transparency or incomplete reporting (70%), whether alternate study designs were considered (11%), and other issues (52%). Conclusion: The methodological and reporting quality of surgical case series needs improvement. Our data show that clear evidence-based guidelines for the conduct and reporting of case series may be useful to those planning or conducting them.

Keywords: case series, reporting quality, surgery, systematic review

Procedia PDF Downloads 349
1488 Study of Operating Conditions Impact on Physicochemical and Functional Properties of Dairy Powder Produced by Spray-drying

Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit

Abstract:

The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation costs and increase shelf life, but can also exhibit various useful functionalities (flowability, solubility, protein modification, or acid gelation), depending on the operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association, or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design-of-experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out using a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen, and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content, and protein content were determined by standard methods. Solubility index, absorption index, and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The position of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process.
It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in lower-moisture particles, which are less sticky; this can explain an increase in spray-drying yield and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, a high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study made it possible to better understand the links between the physicochemical and functional properties of the powder and to identify correlations with air pressure and outlet temperature. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
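The quadratic-model-plus-optimization workflow above can be sketched in miniature. The design points and responses below are illustrative, not the study's measurements, and a grid search over the experimental domain stands in for the genetic algorithm.

```python
import numpy as np

# Hypothetical DOE results: (air pressure in bar, outlet temperature in °C) -> response
# (e.g., spray-drying yield, %). Values are illustrative only.
X = np.array([(1, 75), (1, 95), (2, 85), (3, 75), (3, 95), (2, 75), (2, 95), (1, 85), (3, 85)])
y = np.array([70.0, 74.0, 80.0, 72.0, 78.0, 76.0, 81.0, 71.0, 75.0])

def quad_features(p, t):
    """Design matrix for a full quadratic model in two factors."""
    return np.column_stack([np.ones_like(p), p, t, p * p, t * t, p * t])

# Fit the quadratic response-surface model by ordinary least squares
coef, *_ = np.linalg.lstsq(quad_features(X[:, 0], X[:, 1]).astype(float), y, rcond=None)

# Grid search over the experimental domain (stand-in for the genetic algorithm)
P, T = np.meshgrid(np.linspace(1, 3, 41), np.linspace(75, 95, 41))
pred = quad_features(P.ravel(), T.ravel()) @ coef
best = np.argmax(pred)
print(f"predicted optimum: {P.ravel()[best]:.2f} bar, {T.ravel()[best]:.1f} °C")
```

A genetic algorithm would replace the grid search when the response surface has many factors or constraints.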

Keywords: dairy powders, spray-drying, powders functionalities, design of experiment

Procedia PDF Downloads 55
1487 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface

Authors: Suman Dutta, Manish Agrawal, C. S. Jog

Abstract:

Accurate computation of the hydrodynamic forces on floating structures and of their deformation finds application in ocean and naval engineering and in wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid, and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method, in which the constraints are imposed in a weak sense via Lagrange multipliers. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
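The energy-conservation property of the generalized trapezoidal rule cited above can be illustrated on a toy problem. This is a minimal sketch assuming an undamped linear oscillator as the test system, not the paper's FSI equations: with alpha = 1/2 the update matrix is a Cayley transform of a skew-symmetric operator, so the discrete energy is preserved exactly.

```python
import numpy as np

# Generalized trapezoidal rule applied to the undamped oscillator x'' = -x,
# written as the first-order system y' = A y with A skew-symmetric.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
alpha, dt, I = 0.5, 0.1, np.eye(2)

# One-step update: (I - alpha*dt*A) y_{n+1} = (I + (1-alpha)*dt*A) y_n
step = np.linalg.solve(I - alpha * dt * A, I + (1 - alpha) * dt * A)

y = np.array([1.0, 0.0])        # initial displacement and velocity
E0 = 0.5 * (y @ y)              # discrete energy 0.5*(x^2 + v^2)
for _ in range(1000):
    y = step @ y
E = 0.5 * (y @ y)
print(abs(E - E0))              # ~0 for alpha = 1/2: energy is conserved
```

For alpha other than 1/2 the scheme damps or amplifies the energy, which is why the energy-conserving variant admits large time steps.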

Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method

Procedia PDF Downloads 267
1486 Design Standardization in Aramco: Strategic Analysis

Authors: Mujahid S. Alharbi

Abstract:

The construction of process plants in oil and gas-producing countries, such as Saudi Arabia, necessitates substantial investment in design and building. Each new plant, while unique, includes common building types, suggesting an opportunity for design standardization. This study investigates the adoption of standardized Issue For Construction (IFC) packages for non-process buildings in Saudi Aramco. A SWOT analysis presents the strengths, weaknesses, opportunities, and threats of this approach. The approach's benefits are illustrated using the Hawiyah Unayzah Gas Reservoir Storage Program (HUGRSP) as a case study. Standardization not only offers significant cost savings and operational efficiencies but also expedites project timelines, reduces the potential for change orders, and fosters local economic growth by allocating building tasks to local contractors. Standardization also improves project management by easing interface constraints between different contractors and promoting adaptability to future industry changes. This research underscores the standardization of non-process buildings as a powerful strategy for cost optimization, efficiency enhancement, and local economic development in process plant construction within the oil and gas sector.

Keywords: building, construction, management, project, standardization

Procedia PDF Downloads 48
1485 Smart Beta Portfolio Optimization

Authors: Saud Al Mahdi

Abstract:

Traditionally, portfolio managers have been discouraged from timing the market. This means, for example, that equity managers have been forced to adhere strictly to a benchmark with static or relatively stable components, such as the S&P 500 or the Russell 3000. The portfolio's exposures to all risk factors should then mimic as closely as possible the corresponding exposures of the benchmark. The main risk factor, of course, is the market itself: effectively, a long-only portfolio would be constrained to have a beta of 1. More recently, however, managers have been given greater discretion to adjust their portfolio's risk exposures (in particular, the beta of their portfolio) dynamically to match their beliefs about the future performance of the risk factors themselves. These strategies have come to be known as smart beta strategies. Adjusting beta dynamically amounts to attempting to "time" the market; that is, to increase exposure when one anticipates that the market will rise and to decrease it when one anticipates that the market will fall. Traditionally, market timing has been believed to be impossible to perform effectively and consistently; moreover, if a majority of market participants attempted it, their combined actions could destabilize the market. The aim of this project is to investigate so-called smart beta strategies to determine whether they really can add value, or whether they are merely marketing gimmicks used to sell dubious investment strategies.
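The beta that a smart beta strategy adjusts is just the slope of an OLS regression of portfolio returns on benchmark returns. A minimal sketch with synthetic daily returns (the seed, sample size, and "true" beta of 1.3 are assumptions for illustration):

```python
import numpy as np

# Synthetic daily returns: a portfolio with true beta ~1.3 against the benchmark
rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)                 # benchmark daily returns
portfolio = 1.3 * market + rng.normal(0, 0.002, 250)   # portfolio returns plus noise

# OLS slope: cov(portfolio, market) / var(market)
m = market - market.mean()
p = portfolio - portfolio.mean()
beta = (p @ m) / (m @ m)
print(beta)
```

A manager targeting a different beta would scale the market exposure by `target_beta / beta`, rebalancing as the estimate changes.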

Keywords: beta, alpha, active portfolio management, trading strategies

Procedia PDF Downloads 343
1484 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in its hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that with the injection of contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that phase 3 is the best phase for detecting HCC tumors for most of the features that we feed to the classification algorithm. With our method, classification based on first-order histogram parameters achieves an accuracy of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
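The hidden-layer-size search implied above can be sketched as a simple model-selection loop. Here `evaluate` is a hypothetical stand-in for training the network on the selected texture features and measuring validation accuracy; its concave-with-noise shape (underfitting with too few neurons, overfitting with too many) is an assumption for illustration.

```python
import numpy as np

def evaluate(n_hidden, rng):
    """Hypothetical validation accuracy for a given hidden-layer size.
    In the real study this would be a full train/validate cycle."""
    return 0.95 - 0.0005 * (n_hidden - 12) ** 2 + rng.normal(0, 0.001)

rng = np.random.default_rng(1)
candidates = range(2, 31)
scores = {h: evaluate(h, rng) for h in candidates}
best = max(scores, key=scores.get)   # neuron count with highest validation accuracy
print(best)
```

In practice one would average several train/validate runs per candidate size to smooth out the noise before picking the argmax.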

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 251
1483 A Sociological Investigation on the Population and Public Spaces of Nguyen Cong Tru, a Soviet-Style Collective Housing Complex in Hanoi in Regards to Its New Community-Focused Architectural Design

Authors: Duy Nguyen Do, Bart Julien Dewancker

Abstract:

Many Soviet-style collective housing complexes (also known as KTT) were built from the 1960s onward in Hanoi to support post-war population growth. Those low-rise buildings have created well-knit, robust communities, to the point that in most complexes all families in one housing block would know each other, occasionally interact, and provide support when needed. To understand how the communities of collective housing complexes have developed and been maintained, so that their advantages can be adapted into modern housing designs, the study is carried out on the site of Nguyen Cong Tru KTT. This is one of the oldest KTT in Hanoi, completed in 1954. The complex also has a unique characteristic closely related to its community: a symbiotic relationship with Hom, a flea market that has been co-developing with Nguyen Cong Tru KTT since its beginning. The research consists of three phases: the first phase is a sociological investigation of Nguyen Cong Tru KTT's current residents and a site survey of the complex's economic and architectural characteristics. In the second phase, the collected data are analyzed to determine residents' opinions concerning their satisfaction with the current housing status, floor plan organization, the community, the relationship between the KTT's dedicated public spaces and the flea market, and their usage. Simultaneously, the master plan and the gathered information regarding the current architectural characteristics of the complex are inspected. In the third phase, the results of the analyses provide information regarding the issues, positive trends, and significant historical features of the complex's architecture in order to generate suitable proposals for the redesign of Nguyen Cong Tru KTT, a design focused on vitalizing the communities of modern apartments.

Keywords: collective house community, collective house public space, community-focused, redesigning Nguyen Cong Tru KTT, sociological investigation

Procedia PDF Downloads 345
1482 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction

Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage

Abstract:

Vehicular traffic events have highly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these spatial and temporal interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized due to their capability of capturing non-Euclidean spatial correlations very effectively. However, most existing GNN-based traffic prediction models have limitations when learning complex and dynamic spatial and temporal patterns, due to the following missing factors. First, most GNN-based traffic prediction models have used static distance, or sometimes haversine distance, between spatially separated traffic observations to estimate spatial correlation. Second, most GNN-based traffic prediction models have not incorporated environmental events that have a major impact on normal traffic states. Finally, most GNN-based models do not use an attention mechanism to focus only on important traffic observations. The objective of this paper is to study and make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the previously mentioned gaps, our prediction model uses the real-time driving distance between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. Our prediction model efficiently captures the spatial and temporal correlation between traffic events; it relies on graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers and is called GAT-BILSTMA.
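The spatial attention idea behind the GAT layer can be sketched in a few lines: a sensor aggregates its neighbors' features with softmax-normalized weights. The feature vectors below are hypothetical, and the dot-product scoring is a simplification; a real GAT learns the attention parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical feature vectors (e.g., recent normalized speed and flow) for one
# sensor and its three neighbors reachable by driving distance.
node = np.array([0.8, 0.6])
neighbors = np.array([[0.7, 0.5], [0.2, 0.9], [0.8, 0.7]])

scores = neighbors @ node       # unnormalized attention scores
alpha = softmax(scores)         # attention weights, sum to 1
aggregated = alpha @ neighbors  # attention-weighted neighbor summary
print(alpha.round(3), aggregated.round(3))
```

Stacking such an aggregation over a sequence of time steps and feeding it to a Bi-LSTM gives the general GAT-plus-recurrence shape the abstract describes.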

Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention

Procedia PDF Downloads 59
1481 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica

Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat

Abstract:

Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy, or even secondary malignancies. A known method of reducing the side effects of an anticancer agent is the development of modified drug release systems based on drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability, and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and the immobilization of proteins and enzymes. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with size adjustable in the mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study presents (i) the optimization of the amino-grafting of the mesoporous MCM-41 silica matrix by co-condensation during synthesis and by post-synthesis grafting with APTES (3-aminopropyltriethoxysilane); (ii) loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) determination of the in vitro carmustine release profile from these systems; and (iv) assessment of carmustine release kinetics by fitting four mathematical models. The obtained powders have been characterized in terms of structure, texture, and morphology, together with thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were carried out using an Enhancer cell over a 12-hour interval.
The analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order, first-order, Higuchi, and Korsmeyer-Peppas models, respectively. Results showed that both types of highly ordered mesoporous silica (amino-grafted by co-condensation or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and from those functionalized after synthesis. Assessment of carmustine release kinetics revealed diffusion-controlled release from all studied systems, as a result of the best fit to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing topical anti-cancer therapy by loading carmustine and developing prolonged-release systems.
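The Higuchi fit used above to identify diffusion-controlled release reduces to a one-parameter least-squares problem, Q(t) = k*sqrt(t). A minimal sketch with illustrative release data (not the study's measurements):

```python
import numpy as np

# Illustrative cumulative release data: time (h) vs. % carmustine released
t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])
Q = np.array([18, 26, 36, 52, 63, 72, 80, 87])

# Closed-form least-squares slope for the no-intercept model Q = k*sqrt(t)
s = np.sqrt(t)
k = (s @ Q) / (s @ s)

# Coefficient of determination for the fit
r2 = 1 - np.sum((Q - k * s) ** 2) / np.sum((Q - Q.mean()) ** 2)
print(round(k, 1), round(r2, 3))   # k ≈ 25.4, r² ≈ 0.999
```

The same data would also be fitted to the zero-order, first-order, and Korsmeyer-Peppas models; the model with the best fit statistics is taken to indicate the release mechanism.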

Keywords: carmustine, silica, controlled release

Procedia PDF Downloads 250
1480 Optimization of Groundwater Utilization in Fish Aquaculture

Authors: M. Ahmed Eldesouky, S. Nasr, A. Beltagy

Abstract:

Groundwater is generally considered the best source for aquaculture, as it is well protected from contamination. The most common problem limiting the use of groundwater in Egypt is its high iron, manganese, and ammonia content. This problem is often overcome by applying treatment before use. Aeration in many cases is not enough to oxidize iron and manganese in complex forms with organics. In most of the treatments, we use potassium permanganate as an oxidizer, followed by a pressurized closed greensand filter. The aim of the present study is to investigate the optimum characteristics of groundwater, giving the lowest iron, manganese, and ammonia content and the maximum production and quality of fish in aquaculture at the El-Max Research Station. The major design goal of the system was to determine the optimum time for harvesting the treated water, the pH, and the glauconite weight, in order to use the water for the aquaculture process at the research site and to comply with Egyptian law (48/1982) and the EPA levels required for aquaculture. The raw water characteristics were Fe = 0.116 mg/L, Mn = 1.36 mg/L, TN = 0.44 mg/L, TP = 0.07 mg/L, and ammonia = 0.386 mg/L. Using the glauconite filter, we obtained high removal efficiencies for Fe, Mn, and ammonia; in the laboratory, removal ranged from 43-97% for Fe, 92-99% for Mn, and 66-88% for ammonia. We summarize the results to show the optimum time, pH, and glauconite weight, and the best design model for the region.
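The removal efficiencies reported above are simple percent reductions from influent to effluent concentration. A minimal sketch: the influent values come from the abstract, while the effluent values are hypothetical post-filter concentrations chosen to fall within the reported removal ranges.

```python
def removal(c_in, c_out):
    """Percent removal efficiency from influent to effluent concentration."""
    return 100 * (c_in - c_out) / c_in

influent = {"Fe": 0.116, "Mn": 1.36, "NH3": 0.386}   # mg/L, from the abstract
effluent = {"Fe": 0.004, "Mn": 0.02, "NH3": 0.05}    # mg/L, hypothetical values

for ion in influent:
    print(ion, round(removal(influent[ion], effluent[ion]), 1))
```

These example effluents give roughly 96.6% (Fe), 98.5% (Mn), and 87.0% (NH3), consistent with the upper ends of the laboratory ranges quoted above.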

Keywords: aquaculture, ammonia in groundwater, groundwater, iron and manganese in water, groundwater treatment

Procedia PDF Downloads 217
1479 Optimization in Friction Stir Processing Method with Emphasis on Optimized Process Parameters Laboratory Research

Authors: Atabak Rahimzadeh Ilkhch

Abstract:

Friction stir processing (FSP) holds promise as a thermo-mechanical processing technique that aims to change the microstructural and mechanical properties of materials in order to obtain high performance while reducing production time and cost. There are many studies focused on the microstructure of friction stir welded aluminum alloys. The main focus of this research is the grain size obtained in the weld zone. The second part focuses on the temperature distribution over the entire weld zone and its effects on the microstructure. There is also a need for more effort in investigating the optimal values of effective parameters, such as rotational speed, on the microstructure, and in using an optimal tool design method. The final results of this study present the variation of the structural and mechanical properties of materials subjected to friction stir processing, and the effect of FSP and tensile testing on surface quality. In addition, this research addresses the FSP of AA-7020 aluminum and the variation of the ratio of rotational and translational speeds.

Keywords: friction stir processing, AA-7020, thermo-mechanical, microstructure, temperature

Procedia PDF Downloads 266
1478 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis

Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers

Abstract:

SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while capturing continuous 2D images at 5-second intervals in order to monitor any alterations in their geometries to construct true stress-strain curves of the specimens. 3D reconstructions of X-ray computed tomography (CT) images at high-resolution (a spatial resolution of 0.82 μm) allowed for a comprehensive assessment of the influences of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, based on 2D and 3D images, plasticity modeling was executed, and the results were compared to experimental data. A specific ‘two-parameter criterion’ was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion demonstrated substantial agreement with the experimental results, thus enhancing our knowledge of ductile fracture behavior in this steel grade. The implementation of X-ray CT and fractography analysis provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
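A common form for a two-parameter ductile failure criterion of the kind described above is an exponential locus eps_f = a * exp(-b * eta), where eta is the stress triaxiality and eps_f the equivalent plastic strain at failure initiation. The functional form and the two calibration points below are illustrative assumptions; the paper's calibrated criterion may differ.

```python
import numpy as np

# Hypothetical failure data from two notch geometries:
# (triaxiality at failure, equivalent plastic strain at failure)
eta = np.array([0.6, 1.2])
eps_f = np.array([0.9, 0.45])

# Two points determine the two parameters of eps_f = a * exp(-b * eta)
b = np.log(eps_f[0] / eps_f[1]) / (eta[1] - eta[0])
a = eps_f[0] * np.exp(b * eta[0])

def failure_strain(eta_val):
    """Predicted equivalent plastic strain at failure for a given triaxiality."""
    return a * np.exp(-b * eta_val)

print(round(failure_strain(0.9), 3))   # prediction for an intermediate notch ≈ 0.636
```

With more than two notch geometries the parameters would instead be fitted by least squares, and the remaining geometries used to validate the criterion.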

Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality

Procedia PDF Downloads 74
1477 Optimization of Monascus Orange Pigments Production Using pH-Controlled Fed-Batch Fermentation

Authors: Young Min Kim, Deokyeong Choe, Chul Soo Shin

Abstract:

Monascus pigments, commonly used as natural colorants in Asia, have many biological activities, such as cholesterol control, anti-obesity, anti-cancer, and anti-oxidant effects, which have recently been elucidated. In particular, amino acid derivatives of Monascus pigments are receiving much attention because they have higher biological activities than the original Monascus pigments. Previously, there have been two ways to produce amino acid derivatives: one-step production and two-step production. However, one-step production has low purity, and two-step production (precursor (orange pigment) fermentation followed by derivative synthesis) has low productivity and growth rate during its precursor fermentation step. In this study, it was verified that pH is a key factor that affects the stability of the orange pigments and the growth rate of Monascus. With an optimal pH profile obtained by pH-stat fermentation, we designed a precursor (orange pigment) fermentation process as a pH-controlled fed-batch fermentation. The final concentration of orange pigments in this process increased to 5.5 g/L, which is about 30% higher than the concentration produced by the previously used precursor fermentation step.

Keywords: cultivation process, fed-batch fermentation, monascus pigments, pH stability

Procedia PDF Downloads 290
1476 Better Defined WHO International Classification of Disease Codes for Relapsing Fever Borreliosis, and Lyme Disease Education Aiding Diagnosis, Treatment Improving Human Right to Health

Authors: Mualla McManus, Jenna Luche Thaye

Abstract:

World Health Organisation International Classification of Disease (ICD) codes were created to define diseases, including infections, in order to guide and educate diagnosticians. Most infectious diseases, such as syphilis, are clearly defined by their ICD-10 codes, which help educate clinicians in syphilis diagnosis and treatment globally. However, the current ICD-10 codes for relapsing fever borreliosis and Lyme disease are less clearly defined and can impede appropriate diagnosis, especially if the clinician is not familiar with the symptoms of these infectious diseases. This is despite the substantial number of scientific articles about relapsing fever and Lyme disease published in peer-reviewed journals. In the USA, an estimated 380,000 people contract Lyme disease annually, more cases than breast cancer and six times as many as HIV/AIDS. This represents an estimated 0.09% of the US population. If extrapolated to the global population (7 billion), 0.09% equates to 6.3 million people contracting relapsing fever or Lyme disease. In many regions, the rate of contracting some form of infection from a tick bite may be even higher. Without accurate and appropriate diagnostic codes, physicians are impeded in their ability to properly care for their patients, leaving those patients invisible and marginalized within the medical system and to those guiding public policy. This results in great personal hardship, pain, disability, and expense, and unnecessarily burdens health care systems, governments, families, and society as a whole. With accurate diagnostic codes in place, robust data can guide medical and public health research and health policy, track mortality, and save health care dollars. Better-defined ICD codes are the way forward in educating diagnosticians about relapsing fever and Lyme disease.
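The extrapolation can be spelled out as plain arithmetic (note that 0.09% of 7 billion is 6.3 million). The US population figure is an assumption for illustration; the case count and the 0.09% rate are the abstract's estimates.

```python
# Applying the abstract's US annual rate to the global population
us_cases = 380_000
us_population = 325_000_000          # assumed; gives ~0.117%, rounded down to 0.09% in the abstract
rate = 0.0009                        # 0.09%, as used in the abstract

global_cases = rate * 7_000_000_000
print(f"{global_cases:,.0f}")        # 6,300,000
```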

Keywords: WHO ICD codes, relapsing fever, Lyme disease, World Health Organisation

Procedia PDF Downloads 178
1475 Empirical Analysis of Forensic Accounting Practices for Tackling Persistent Fraud and Financial Irregularities in the Nigerian Public Sector

Authors: Sani AbdulRahman Bala

Abstract:

This empirical study delves into the realm of forensic accounting practices within the Nigerian Public Sector, seeking to quantitatively analyze their efficacy in addressing the persistent challenges of fraud and financial irregularities. With a focus on empirical data, this research employs a robust methodology to assess the current state of fraud in the Nigerian Public Sector and evaluate the performance of existing forensic accounting measures. Through quantitative analyses, including statistical models and data-driven insights, the study aims to identify patterns, trends, and correlations associated with fraudulent activities. The research objectives include scrutinizing documented fraud cases, examining the effectiveness of established forensic accounting practices, and proposing data-driven strategies for enhancing fraud detection and prevention. Leveraging quantitative methodologies, the study seeks to measure the impact of technological advancements on forensic accounting accuracy and efficiency. Additionally, the research explores collaborative mechanisms among government agencies, regulatory bodies, and the private sector by quantifying the effects of information sharing on fraud prevention. The empirical findings from this study are expected to provide a nuanced understanding of the challenges and opportunities in combating fraud within the Nigerian Public Sector. The quantitative insights derived from real-world data will contribute to the refinement of forensic accounting strategies, ensuring their effectiveness in addressing the unique complexities of financial irregularities in the public sector. The study's outcomes aim to inform policymakers, practitioners, and stakeholders, fostering evidence-based decision-making and proactive measures for a more resilient and fraud-resistant financial governance system in Nigeria.

Keywords: fraud, financial irregularities, Nigerian public sector, quantitative investigation

Procedia PDF Downloads 44
1474 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called “lightpaths”, are routed throughout the network. This requires efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum and ultimately to the rejection of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation relies on a multilayer approach where the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-processing procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
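The multilayer view can be illustrated with a toy first-fit heuristic: route each lightpath on the lowest-indexed wavelength layer that still offers a free path. This is a minimal sketch for illustration, not the paper's ILP or greedy-plus-post-processing method; the graph encoding and helper names are assumptions.

```python
from collections import deque

def shortest_path(adj, src, dst, usable):
    """BFS shortest path using only the links accepted by `usable`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and usable((u, v)):
                prev[v] = u
                q.append(v)
    return None

def greedy_rwa(adj, requests, n_wavelengths):
    """First-fit RWA on a multilayer view of the network: try wavelength
    layers in index order and route each lightpath on the first layer that
    still has a free path (wavelength continuity, one lightpath per fiber
    link per wavelength)."""
    used = set()                      # (wavelength, u, v) links already taken
    assignment = {}
    for i, (src, dst) in enumerate(requests):
        for w in range(n_wavelengths):
            free = lambda link, w=w: (w,) + link not in used
            path = shortest_path(adj, src, dst, free)
            if path is not None:
                for u, v in zip(path, path[1:]):
                    used.add((w, u, v))
                    used.add((w, v, u))
                assignment[i] = (w, path)
                break
    return assignment
```

Trying low wavelength indices first keeps higher layers empty for longer paths, which is one crude way to limit fragmentation of the spectrum.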

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic

Procedia PDF Downloads 511
1473 Improvement of Process Competitiveness Using Intelligent Reference Models

Authors: Julio Macedo

Abstract:

Several methodologies are now available to conceive improvements to a process so that it becomes competitive, for example total quality management, process reengineering, Six Sigma, and the Define-Measure-Analyze-Improve-Control (DMAIC) method. These improvements are of different natures and can be external to the process, represented by an optimization model or a discrete simulation model. In addition, the process stakeholders are many and have different desired performances for the process. Hence, the methodologies above lack a tool to aid in the conception of the required improvements. In order to fill this void, we suggest the use of intelligent reference models. A reference model is a set of qualitative differential equations and an objective function that minimizes the gap between the current and the desired performance indexes of the process. The reference models are intelligent, so when they receive the current state of the problematic process and the desired performance indexes, they generate the required improvements for that process. The reference models are fuzzy cognitive maps augmented with an objective function and trained using the improvements implemented by high-performance firms. Experiments with a group of students show that the reference models allow them to conceive more improvements than students who do not use these models.
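A fuzzy cognitive map of the kind described can be sketched as a weighted concept graph whose activations are iterated to a fixed point; the gap objective then compares the stabilized state with the desired performance indexes. This is illustrative only: the weights, the sigmoid squashing, and the squared-gap objective are assumptions, not the authors' trained model.

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One fuzzy-cognitive-map update: each concept's next activation is a
    sigmoid of the weighted sum of its causal inputs."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-lam * sum(weights[i][j] * state[i]
                                             for i in range(n))))
            for j in range(n)]

def run_fcm(state, weights, tol=1e-6, max_iter=200):
    """Iterate until the concept activations reach a fixed point."""
    for _ in range(max_iter):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

def performance_gap(state, desired):
    """Objective of the reference model: squared distance between the
    current and the desired performance indexes."""
    return sum((a - b) ** 2 for a, b in zip(state, desired))
```

A training procedure would adjust `weights` so that the fixed point of the map reproduces the improvements observed in high-performance firms.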

Keywords: continuous improvement, fuzzy cognitive maps, process competitiveness, qualitative simulation, system dynamics

Procedia PDF Downloads 74
1472 A QoE-driven Cross-layer Resource Allocation Scheme for High Traffic Service over Open Wireless Network Downlink

Authors: Liya Shan, Qing Liao, Qinyue Hu, Shantao Jiang, Tao Wang

Abstract:

In this paper, a Quality of Experience (QoE)-driven cross-layer resource allocation scheme for high-traffic services over an Open Wireless Network (OWN) downlink is proposed, and the related problem for the users in the whole cell, including the users in the overlap region of different cells, has been solved. A method to calculate the Mean Opinion Score (MOS) value for high-traffic services is introduced, in which assessment models for the best-effort service and a no-reference assessment algorithm for the video service are adopted. The cross-layer architecture considers the parameters in the application layer, the media access control layer, and the physical layer jointly. Based on this architecture and the MOS value, the Binary Constrained Particle Swarm Optimization (B_CPSO) algorithm is used to solve the cross-layer resource allocation problem. In addition, simulation results show that the proposed scheme significantly outperforms other schemes in terms of maximizing the average users' MOS value for the whole system as well as maintaining fairness among users.
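A minimal binary constrained PSO of the general kind named in the abstract can be sketched as follows. The coefficients, the sigmoid position update, and the toy resource-count fitness are assumptions for illustration, not the paper's B_CPSO implementation.

```python
import math
import random

def binary_pso(fitness, feasible, n_bits, n_particles=20, iters=60, seed=1):
    """Minimal binary PSO (minimization): real-valued velocities are
    squashed through a sigmoid to give the probability that each bit is 1;
    positions violating the constraint are simply never accepted as
    personal or global bests."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [fitness(p) if feasible(p) else float('inf') for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            if feasible(pos[i]):
                v = fitness(pos[i])
                if v < pval[i]:
                    pbest[i], pval[i] = pos[i][:], v
                    if v < gval:
                        gbest, gval = pos[i][:], v
    return gbest, gval
```

In the paper's setting, the bits would encode resource-block allocations and the fitness would be the negative of the system-wide average MOS.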

Keywords: high traffic service, cross-layer resource allocation, QoE, B_CPSO, OWN

Procedia PDF Downloads 530
1471 A Genetic Algorithm Approach to Solve a Weaving Job Scheduling Problem, Aiming Tardiness Minimization

Authors: Carolina Silva, João Nuno Oliveira, Rui Sousa, João Paulo Silva

Abstract:

This study uses genetic algorithms to solve a job scheduling problem in a weaving factory. The underlying problem is NP-hard, concerning unrelated parallel machines with sequence-dependent setup times. This research uses real data regarding a weaving plant located in the North of Portugal, with a capacity of 96 looms and a production, on average, of 440,000 meters of fabric per month. Besides, this study includes a high level of complexity, since most of the real production constraints are applied and several real data instances are tested. Topics such as data analysis and algorithm performance are addressed and tested, in order to offer a solution that can generate reliable due-date estimates. All the approaches will be tested in the operational environment and the KPIs monitored, to understand the solution's impact on production, with a particular focus on the total number of weeks of late deliveries to clients. Thus, the main goal of this research is to develop a solution that allows the automatic generation of optimized production plans, aiming at tardiness minimization.
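The overall scheme can be sketched with a toy GA: a chromosome is a job permutation, a greedy decoder places each job on the earliest-finishing unrelated machine with its sequence-dependent setup, and the fitness is total tardiness. Operators and parameters are illustrative assumptions, not the study's configuration.

```python
import random

def decode(perm, n_machines, proc, setup, due):
    """Greedy decoder: take jobs in chromosome order and place each on the
    machine that finishes it earliest. Processing time depends on the
    machine (unrelated machines); setup time depends on the previous job
    on that machine (sequence-dependent). Returns total tardiness."""
    t = [0.0] * n_machines
    last = [None] * n_machines
    tardiness = 0.0
    for j in perm:
        cost = lambda m: t[m] + (setup[last[m]][j] if last[m] is not None else 0) + proc[j][m]
        m = min(range(n_machines), key=cost)
        t[m] = cost(m)
        last[m] = j
        tardiness += max(0.0, t[m] - due[j])
    return tardiness

def order_crossover(a, b, rng):
    """OX crossover: copy a slice from parent a, fill the rest in b's order."""
    i, j = sorted(rng.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

def ga_schedule(n_jobs, n_machines, proc, setup, due, pop=30, gens=80, seed=0):
    """Elitist GA over job permutations, minimizing total tardiness."""
    rng = random.Random(seed)
    fit = lambda p: decode(p, n_machines, proc, setup, due)
    population = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fit)
        elite = population[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            c = order_crossover(a, b, rng)
            if rng.random() < 0.2:                 # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        population = elite + children
    best = min(population, key=fit)
    return best, fit(best)
```

A real instance would additionally encode loom eligibility and the other production constraints inside the decoder.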

Keywords: genetic algorithms, textile industry, job scheduling, optimization

Procedia PDF Downloads 142
1470 The Advantages of Using DNA-Barcoding for Determining the Fraud in Seafood

Authors: Elif Tugce Aksun Tumerkan

Abstract:

Although seafood is an important part of the human diet and one of the most highly traded food commodities internationally, it generally remains overlooked in the global food security debate. Food product authentication is pursued both to avoid commercial fraud and to address risks that might harm human health. In recent years, with increasing consumer demand regarding food content and its transparency, instrumental analyses for detecting food fraud have emerged, based on analytical methodologies such as proteomics and metabolomics. While fish and seafood were previously consumed fresh, advances in technology have increased the consumption of processed and packaged seafood. After processing or packaging, morphological identification is impossible because some of the external features have been removed. The main quality-related issues for fish and seafood are content authentication problems such as mislabelled products, which may be contaminated or replaced, partly or completely, by lower-quality or cheaper ones. For all these reasons, truthful, consistent, and easily applicable analytical methods are needed to assure the correct labelling and verification of seafood products. DNA-barcoding methods have become popular, robust tools used in taxonomic research on endangered or cryptic species in recent years, and they are also used for food traceability. Compared with proteomic and metabolomic analyses, DNA-based methods offer the chance to identify all types of food, whether raw, spiced, or processed. This advantage arises because DNA is a comparatively more stable molecule than proteins and other molecules. Furthermore, DNA shows sequence variation between species and is found in all organisms, which makes DNA-based analysis preferable. This review was performed to clarify the main advantages of using DNA-barcoding for determining seafood fraud relative to other techniques.
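The matching step behind barcode-based authentication can be sketched as a nearest-reference search with an identity threshold. The sequences, species names, and the roughly 97% cut-off used here are illustrative assumptions.

```python
def percent_identity(a, b):
    """Share of matching bases in an ungapped comparison of equal-length
    barcode fragments."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_species(query, references, threshold=0.97):
    """Assign the query barcode to the reference species with the highest
    identity if it clears the threshold; otherwise return None, flagging a
    possible substitution or mislabelling."""
    best = max(references, key=lambda sp: percent_identity(query, references[sp]))
    score = percent_identity(query, references[best])
    return (best if score >= threshold else None), score
```

In practice the query would be a sequenced COI fragment aligned against a curated barcode database rather than compared position by position.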

Keywords: DNA-barcoding, genetic analysis, food fraud, mislabelling, packaged seafood

Procedia PDF Downloads 155
1469 Constant Factor Approximation Algorithm for p-Median Network Design Problem with Multiple Cable Types

Authors: Chaghoub Soraya, Zhang Xiaoyan

Abstract:

This research presents the first constant-factor approximation algorithm for the p-median network design problem with multiple cable types. This problem had previously been addressed only for a single cable type, for which a bifactor approximation algorithm exists. To the best of our knowledge, the algorithm proposed in this paper is the first constant approximation algorithm for the p-median network design problem with multiple cable types. The addressed problem is a combination of two well-studied problems: the p-median problem and the network design problem. The introduced algorithm is a random-sampling approximation algorithm with a constant factor, conceived by using random sampling techniques from the literature. It is based on a redistribution lemma from the literature and uses a Steiner tree problem as a subproblem. The algorithm is simple and relies on the notions of random sampling and probability. The proposed approach gives an approximate solution with one constant ratio without violating any of the constraints, in contrast to the one proposed in the literature. This paper provides a (21 + 2)-approximation algorithm for the p-median network design problem with multiple cable types using random sampling techniques.
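As a rough illustration of the p-median side of the problem only (not the paper's random-sampling algorithm), a standard greedy heuristic opens facilities one at a time, each time picking the one that most reduces the total assignment cost:

```python
def median_cost(dist, open_):
    """Total distance from every client to its nearest open facility."""
    return sum(min(dist[f][c] for f in open_) for c in range(len(dist)))

def p_median_greedy(dist, p):
    """Greedy p-median heuristic: repeatedly open the facility whose
    addition yields the smallest total client-to-nearest-facility
    distance. A simple textbook stand-in for the p-median subproblem."""
    open_ = []
    for _ in range(p):
        f = min((f for f in range(len(dist)) if f not in open_),
                key=lambda f: median_cost(dist, open_ + [f]))
        open_.append(f)
    return open_, median_cost(dist, open_)
```

The network design part would then connect clients to their medians with the cheapest adequate cable type, which is where the Steiner tree subproblem enters.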

Keywords: approximation algorithms, buy-at-bulk, combinatorial optimization, network design, p-median

Procedia PDF Downloads 186
1468 Target and Biomarker Identification Platform to Design New Drugs against Aging and Age-Related Diseases

Authors: Peter Fedichev

Abstract:

We studied fundamental aspects of aging to develop a mathematical model of the gene regulatory network. We show that aging manifests itself as an inherent instability of the gene network, leading to an exponential accumulation of regulatory errors with age. To validate our approach, we studied age-dependent omics data, such as transcriptomes and metabolomes, of different model organisms and humans. We built a computational platform based on our model to identify targets and biomarkers of aging in order to design new drugs against aging and age-related diseases. As biomarkers of aging, we chose the rate of aging and the biological age, since together they determine the state of the organism. Since the rate of aging changes rapidly in response to external stress, this kind of biomarker can be useful as a tool for quantitative efficacy assessment of drugs and their combinations, dose optimization, chronic toxicity estimation, selection of personalized therapies, achievement of clinical endpoints (within clinical research), and death risk assessment. Based on our model, we propose a method for identifying targets for further interventions against aging and age-related diseases. Being a biotech company, we offer a complete pipeline to develop an anti-aging drug candidate.

Keywords: aging, longevity, biomarkers, senescence

Procedia PDF Downloads 266
1467 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming

Authors: Rui Li, Min Wen, Kim Bang Salling

Abstract:

For modern railways, maintenance is critical for ensuring safety, train punctuality, and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The research uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account to schedule tamping: (1) track degradation, measured as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality at the time of the tamping operation; (5) tamping machine operating practices; (6) tamping budgets; and (7) the differentiation of open track from station sections. A Danish railway track between Odense and Fredericia, 42.6 km in length, is used as a case study over a time period of three and four years in the proposed maintenance model. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with a previous model based on optimizing the number of tampings. Different maintenance strategies are discussed in the paper. The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
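The condition-based logic can be sketched with a toy degradation model: track quality drifts upward at a constant rate, tamping is triggered at the speed-class threshold, and the recovery depends on the quality at the time of tamping. The rates, thresholds, and costs below are made-up illustrative values, not the Danish-corridor data or the paper's 0-1 model.

```python
def plan_tamping(sd0, rate, threshold, recovery, horizon, cost_per_tamp):
    """Toy condition-based schedule: the standard deviation of the
    longitudinal level degrades linearly each month; when it reaches the
    speed-class threshold, tamping restores quality by a fraction of its
    current value, so the recovery depends on condition at intervention.
    Returns (tamping months, total tamping cost)."""
    sd = sd0
    schedule = []
    for month in range(horizon):
        sd += rate
        if sd >= threshold:
            schedule.append(month)
            sd *= (1.0 - recovery)
    return schedule, len(schedule) * cost_per_tamp
```

The paper's mixed 0-1 model replaces this threshold rule with decision variables per section and period, so that budgets and machine practices can be enforced as constraints.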

Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance

Procedia PDF Downloads 427
1466 The Tariffs of Water Service for Productive Users: A Model for Defining Fare Classes

Authors: M. Macchiaroli, V. Pellecchia, L. Dolores

Abstract:

The water supply for production users (craft, commercial, industrial), understood as the combined set of water supply and wastewater collection services, becomes an increasingly pressing problem under a water-scarcity regime. In fact, disputes are triggered between the different social parties over the fair and efficient use of water resources. Within this context, the problem arises of the different pricing of services between civil users and production users. Of particular interest is the question of defining the tariff classes depending on consumption levels. For civil users, this theme is strongly permeated by social considerations (a topic dealt with by the author in a forthcoming research contribution) connected with the inalienability of the right to water and with the reconciliation of the needs of the weakest groups of the population; for consumers in the production sector, however, the logic adopted by the operator may be inspired by criteria of greater corporate rationality. This work illustrates the Italian regulatory framework and presents an optimization model of tariff classes in the production sector that reconciles the public objective of sustainable use of the resource with the needs of a production system in search of recovery after the depressing effects caused by the COVID-19 pandemic.
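The bracket logic behind consumption-dependent tariff classes can be sketched as an increasing-block computation. The limits and unit prices below are illustrative, not the Italian regulatory values.

```python
def water_bill(volume, brackets):
    """Increasing-block tariff: each consumption bracket is billed at its
    own unit price. `brackets` is a list of (upper_limit_m3, price_per_m3)
    pairs in ascending order, the last one closed with float('inf')."""
    bill, lower = 0.0, 0.0
    for upper, price in brackets:
        if volume > lower:
            bill += (min(volume, upper) - lower) * price
        lower = upper
    return bill
```

An optimization model of the kind described would choose the bracket limits and prices so as to balance cost recovery for the operator against conservation incentives for production users.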

Keywords: decision making, economic evaluation, urban water management, water tariff

Procedia PDF Downloads 102
1465 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence through the storage of the energy produced when production exceeds demand and its release during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyser cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. The modeling and simulation become even more important when dealing with multiphysics dynamic systems. One of those systems is the AEM electrolysis cell involving complex physico-chemical reactions. Once developed, models may be utilized to comprehend the mechanisms to control and detect flaws in the systems. Several modeling methods have been initiated by scientists. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update as it is based on ordinary or partial differential equations to represent the systems. However, the latter approach is more user-friendly and allows a clear representation of physical phenomena. In this case, the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, hence being suitable for multiphysics systems. Among the graphical modelling methods, the bond graph is receiving increasing attention as being domain-independent and relying on the energy exchange between the components of the system. 
At present, few studies have investigated the modelling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and equation-based approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve for operating conditions at various temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows virtual testing, hence reducing the cost and time involved in experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
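A minimal equation-based polarization sketch of the sort compared here sums the reversible voltage, the HER/OER activation overpotentials, and the ohmic drop. All parameter values below are illustrative assumptions, not the fitted values from the study.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def cell_voltage(j, T=333.15, j0_oer=1e-4, j0_her=1e-2, alpha=0.5, r_mem=0.3):
    """Cell voltage at current density j (A/cm^2): reversible voltage plus
    the OER/HER activation overpotentials (Butler-Volmer in asinh form)
    plus the ohmic drop across the membrane (r_mem in ohm*cm^2)."""
    e_rev = 1.229 - 0.9e-3 * (T - 298.15)         # temperature-corrected reversible voltage
    b = R * T / (2.0 * alpha * F)
    eta_oer = b * math.asinh(j / (2.0 * j0_oer))  # anode (OER) activation loss
    eta_her = b * math.asinh(j / (2.0 * j0_her))  # cathode (HER) activation loss
    eta_ohm = j * r_mem                           # ohmic loss through the AEM
    return e_rev + eta_oer + eta_her + eta_ohm
```

With the small exchange current density assumed for the OER, the activation terms dominate at low current density and the ohmic term grows linearly, matching the loss breakdown described in the abstract.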

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 74
1464 Models, Resources and Activities of Project Scheduling Problems

Authors: Jorge A. Ruiz-Vanoye, Ocotlán Díaz-Parra, Alejandro Fuentes-Penna, José J. Hernández-Flores, Edith Olaco Garcia

Abstract:

The Project Scheduling Problem (PSP) is a generic name given to a whole class of problems in which the best form, time, resources, and costs for project scheduling are sought. The PSP is an application area related to project management. This paper aims to be a guide to understanding the PSP by presenting a survey of its general parameters: the resources (those elements that carry out the activities of a project) and the activities (the sets of operations or tasks of a person or organization); the mathematical models of the main variants of the PSP; and the algorithms used to solve those variants. Project scheduling is an important task in project management. This paper contains mathematical models, resources, activities, and algorithms of project scheduling problems. The project scheduling problem has attracted researchers in the automotive industry, steel manufacturing, medical research, pharmaceutical research, telecommunications, the aviation industry, software development, manufacturing management, innovation and technology management, the construction industry, government project management, financial services, machine scheduling, transportation management, and others. Project managers need to finish a project with the minimum cost and the maximum quality.
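A classic building block for PSP variants with renewable resources is the serial schedule-generation scheme, sketched here for a single resource. This is a generic textbook procedure, not tied to any specific model in the survey.

```python
def serial_sgs(durations, demands, capacity, precedences):
    """Serial schedule-generation scheme: schedule activities in a
    precedence-feasible order, each at the earliest start time where the
    single renewable resource never exceeds its capacity.
    `precedences[j]` lists the predecessors of activity j."""
    n = len(durations)
    horizon = sum(durations)
    usage = [0] * (horizon + max(durations) + 1)
    start = [None] * n
    done = set()
    while len(done) < n:
        # pick the lowest-indexed activity whose predecessors are scheduled
        j = next(a for a in range(n) if a not in done
                 and all(p in done for p in precedences.get(a, [])))
        est = max((start[p] + durations[p] for p in precedences.get(j, [])),
                  default=0)
        t = est
        while any(usage[t + k] + demands[j] > capacity
                  for k in range(durations[j])):
            t += 1
        for k in range(durations[j]):
            usage[t + k] += demands[j]
        start[j] = t
        done.add(j)
    makespan = max(start[j] + durations[j] for j in range(n))
    return start, makespan
```

Most metaheuristics surveyed for the RCPSP variant evolve the activity order fed into a decoder of exactly this kind.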

Keywords: PSP, combinatorial optimization problems, project management, manufacturing management, technology management

Procedia PDF Downloads 406
1463 Passively Q-Switched 914 nm Microchip Laser for LIDAR Systems

Authors: Marco Naegele, Klaus Stoppel, Thomas Dekorsy

Abstract:

Passively Q-switched microchip lasers offer great potential for sophisticated LiDAR systems due to their compact overall system design, excellent beam quality, and scalable pulse energies. However, many near-infrared solid-state lasers emit at wavelengths > 1000 nm, which are not compatible with state-of-the-art silicon detectors. Here we demonstrate a passively Q-switched microchip laser operating at 914 nm. The microchip laser consists of a 3 mm long Nd:YVO₄ crystal as the gain medium, while Cr⁴⁺:YAG with an initial transmission of 98% is used as the saturable absorber. Quasi-continuous pumping enables single-pulse operation, and low duty cycles ensure low overall heat generation and power consumption. Thus, thermally induced instabilities are minimized, and operation without active cooling is possible, while ambient temperature changes are compensated by adjusting the pump laser current only. Single-emitter diode pumping at 808 nm leads to a compact overall system design and a robust setup. Utilization of a microchip cavity approach ensures single-longitudinal-mode operation with spectral bandwidths in the picometer regime and results in short laser pulses with pulse durations below 10 ns. Beam quality measurements reveal an almost diffraction-limited beam and enable conclusions concerning the thermal lens, which is essential to stabilize the plane-plane resonator. A 7% output coupler transmissivity is used to generate pulses with energies in the microjoule regime and peak powers of more than 600 W. Long-term measurements of pulse duration, pulse energy, central wavelength, and spectral bandwidth emphasize the excellent system stability and facilitate the utilization of this laser in the context of a LiDAR system.

Keywords: diode-pumping, LiDAR system, microchip laser, Nd:YVO4 laser, passively Q-switched

Procedia PDF Downloads 118
1462 Non-Invasive Imaging of Tissue Using Near Infrared Radiations

Authors: Ashwani Kumar Aggarwal

Abstract:

NIR light is non-ionizing and can pass easily through living tissues such as the breast without any harmful effects. Therefore, the use of NIR light for imaging biological tissue and quantifying its optical properties is a good choice over other, invasive methods. Optical tomography involves two steps. One is the forward problem, and the other is the reconstruction problem. The forward problem consists of finding the measurements of transmitted light through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, such as the filtered back projection method or algebraic reconstruction methods. But these methods cannot be applied as such in optical tomography, due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work, which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function. This blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean-square sense.
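The final enhancement step, a digital filter optimal in the mean-square sense, corresponds to the Wiener filter. A one-dimensional sketch follows; the PSF, SNR value, and test signal are assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e4):
    """Wiener filter, the deconvolution optimal in the mean-square sense:
    divide by the blur's frequency response, damped by 1/SNR so that
    frequencies the point spread function suppressed are not amplified
    into noise."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * G))
```

In the paper's setting, the blurred input would be the back-projected Monte Carlo reconstruction and the PSF would be estimated from the imaging geometry; here a point source blurred by a small kernel stands in for both.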

Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering

Procedia PDF Downloads 303
1461 Climate Change and Urban Flooding: The Need to Rethink Urban Flood Management through Resilience

Authors: Suresh Hettiarachchi, Conrad Wasko, Ashish Sharma

Abstract:

The ever-changing and expanding urban landscape increases the stress on urban systems to support and maintain safe and functional living spaces. Flooding presents one of the more serious threats to this safety, putting a larger number of people in harm's way in congested urban settings. Climate change is adding to this stress by creating a dichotomy in the urban flood response: on the one hand, climate change is causing storms to intensify, resulting in more destructive, rarer floods, while on the other hand, longer dry periods are decreasing the severity of more frequent, less intense floods. This variability creates a need to be more agile and innovative in how we design for and manage urban flooding. Here, we argue that to cope with the challenge climate change brings, we need to move towards urban flood management through resilience rather than flood prevention. We also argue that dealing with the larger variation in flood response under climate change means that we need to look at flooding in all its aspects, rather than with the single-dimensional focus on flood depths and extents. In essence, we need to rethink how we manage flooding in the urban space. This change in our thought process and approach to flood management requires a practical way to assess and quantify the resilience built into the urban landscape, so that informed decision-making can support the required changes in planning and infrastructure design. Towards that end, we propose a Simple Urban Flood Resilience Index (SUFRI), based on a robust definition of resilience, as a tool to assess flood resilience. The application of a simple resilience index such as the SUFRI can provide a practical tool that considers urban flood management in a multi-dimensional way and can present solutions that were not previously considered. When such an index is grounded in a clear and relevant definition of resilience, it can be a reliable and defensible way to assess and assist the process of adapting to the increasing challenges in urban flood management under climate change.
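One generic way to quantify resilience, shown only as an illustration since the abstract does not give the SUFRI formula, is the resilience-triangle ratio of retained performance over an event window:

```python
def resilience_index(performance, dt=1.0):
    """Area under the normalized system-performance curve during a flood
    event, divided by the area of uninterrupted full performance over the
    same window: 1.0 means no loss of function, lower values mean deeper
    or longer loss of urban services."""
    full_area = dt * len(performance)
    retained = sum(p * dt for p in performance)
    return retained / full_area
```

A multi-dimensional index of the kind proposed would combine several such curves (access, drainage capacity, critical services) rather than a single performance trace.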

Keywords: urban flood resilience, climate change, flood management, flood modelling

Procedia PDF Downloads 35