Search results for: multiple Reynolds number

12572 Dietary Pattern and Risk of Breast Cancer Among Women: A Case Control Study

Authors: Huma Naqeeb

Abstract:

Epidemiological studies have shown a robust link between breast cancer and dietary pattern. No previous study conducted in Pakistan has specifically focused on dietary patterns among women with breast cancer. This study aims to examine the association of breast cancer with dietary patterns among Pakistani women. This case-control research was carried out in multiple tertiary care facilities. Newly diagnosed primary breast cancer patients were recruited as cases (n = 408); age-matched controls (n = 408) were randomly selected from the general population. Data on the required parameters were systematically collected using subjective and objective tools. Factor and Principal Component Analysis (PCA) techniques were used to extract women’s dietary patterns. Four dietary patterns were identified based on eigenvalue > 1: (i) veg-ovo-fish, (ii) meat-fat-sweet, (iii) mixed (milk and its products, and gourd vegetables) and (iv) lentils-spices. Results of the multiple regressions were displayed as adjusted odds ratios (Adj. OR) and their respective confidence intervals (95% CI). After adjustment for potential confounders, the veg-ovo-fish dietary pattern was found to be robustly associated with a lower risk of breast cancer among women (Adj. OR: 0.68, 95% CI: 0.46-0.99, p < 0.01). The study findings conclude that adherence to diets composed mainly of fresh vegetables and high-quality protein sources may contribute to lowering the risk of breast cancer among women.
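Where the abstract describes extracting eigenvalue > 1 dietary patterns with PCA and estimating adjusted odds ratios, a minimal sketch of that workflow could look as follows; the synthetic data, variable names and confounders are illustrative assumptions, not the study's dataset.

```python
# Sketch: PCA-based dietary patterns (Kaiser criterion) + adjusted odds ratios
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm

rng = np.random.default_rng(0)
food_items = pd.DataFrame(rng.normal(size=(816, 20)),
                          columns=[f"food_{i}" for i in range(20)])
case_status = rng.integers(0, 2, size=816)            # 1 = breast cancer case
confounders = pd.DataFrame({"age": rng.normal(50, 10, 816),
                            "bmi": rng.normal(26, 4, 816)})

# Retain components with eigenvalue > 1, as described in the abstract
scores = StandardScaler().fit_transform(food_items)
pca = PCA().fit(scores)
keep = pca.explained_variance_ > 1
patterns = pd.DataFrame(pca.transform(scores)[:, keep],
                        columns=[f"pattern_{i}" for i in range(keep.sum())])

# Adjusted odds ratio for the first pattern, controlling for confounders
X = sm.add_constant(pd.concat([patterns, confounders], axis=1))
fit = sm.Logit(case_status, X).fit(disp=0)
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(odds_ratios["pattern_0"], ci.loc["pattern_0"].values)
```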

Keywords: breast cancer, dietary pattern, women, principal component analysis

Procedia PDF Downloads 118
12571 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans

Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee

Abstract:

This paper suggests a decision-tree-based approach for flexible job shop scheduling with multiple process plans, i.e., each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting an operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach that selects the best priority rule combination after many simulation runs, this study suggests a decision-tree-based approach in which a decision tree is used to select a priority rule combination adequate for a specific system state, and hence the burden of developing simulation models and carrying out simulation runs can be eliminated. The decision-tree-based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm, and in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the decision-tree-based approach suggested in this study, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported by comparing the approach with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
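As a hedged illustration of the scheduling module only (not the paper's four-stage construction algorithm), a decision tree can map a system-state snapshot to a priority rule combination; the state features, rule labels and training data below are illustrative assumptions.

```python
# Sketch: decision tree selecting a priority rule combination from the system state
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# System-state features: e.g. queue length ratio, machine utilisation, due-date tightness
X_state = rng.random((500, 3))
# Best-performing rule combination observed for each state (from offline runs)
rules = np.array(["SPT+EDD", "FIFO+SLACK", "SPT+SLACK"])
y_rule = rules[rng.integers(0, 3, size=500)]

tree = DecisionTreeClassifier(max_depth=4).fit(X_state, y_rule)

current_state = [[0.7, 0.4, 0.2]]   # observed shop-floor snapshot
print("selected rule combination:", tree.predict(current_state)[0])
```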

Keywords: flexible job shop scheduling, decision tree, priority rules, case study

Procedia PDF Downloads 351
12570 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly over the past few years. Many existing studies focused mainly on energy consumption. Nowadays, new methodologies for life cycle inventory acquisition have emerged in the literature and help manufacturers take into account all the input and output flows during the manufacturing step of the life cycle of products. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood. It is now possible to count and measure accurately all the inventory data during the manufacturing step, so optimization of the environmental performance of these processes can be considered. Environmental performance improvement can be achieved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deterioration of part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured by varying the identified parameters. The aim of this second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, a comparison between parts made with the initial parameters and with the changed parameters is made. The major finding claimed by the authors is a reduction of the environmental impact of an additive manufacturing process while respecting three part quality criteria: mechanical properties, dimensional accuracy and surface roughness. Now that additive manufacturing processes can be seen as mature from a technical point of view, environmental improvement of these processes can be considered while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts; the validity of the methodology is then demonstrated on functional parts.

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 282
12569 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of this kind of system is extremely meaningful to encourage aircraft operation with less environmental impact. Propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a certain cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, additionally corrected to account for tip and hub losses, Mach number and rotational effects; furthermore, an approximation of the airfoil lift and drag coefficients is implemented from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, to feed the BEM method with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions for a minimum-induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints for take-off runway limitations, the maximum allowable propeller diameter due to aircraft height, and maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and the APC reported performance curves, which are based on Vortex Theory fed with the NASA Transonic Airfoil code, showing an adequate fit with experimental data, even better than the reported APC data. Optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Trend charts relating a wide range of parameters such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies are obtained and discussed. The implementation of CFD tools shows an improvement in the accuracy of the BEM predictions. Results also show that a propeller has higher efficiency peaks when it operates at high rotational speed, due to the higher Reynolds number at which the airfoils present lower drag. On the other hand, the behavior of the current consumption in relation to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.
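To make the core of the BEM step concrete, a minimal single-element fixed-point iteration might look like the sketch below; the linear lift curve and constant drag are placeholders for the CFD-derived polars, and the tip/hub-loss, Mach and rotational corrections described in the abstract are omitted.

```python
# Sketch: BEM for one propeller blade element, under-relaxed induction-factor iteration
import numpy as np

def bem_element(r, chord, pitch, V, rpm, B=2, rho=1.225, relax=0.3, tol=1e-6):
    omega = rpm * 2.0 * np.pi / 60.0
    sigma = B * chord / (2.0 * np.pi * r)            # local solidity
    a, ap = 0.1, 0.01                                # induction factor guesses
    for _ in range(500):
        phi = np.arctan2(V * (1 + a), omega * r * (1 - ap))   # inflow angle
        alpha = pitch - phi
        cl, cd = 2 * np.pi * alpha, 0.02             # placeholder airfoil polar
        cn = cl * np.cos(phi) - cd * np.sin(phi)     # thrust-wise coefficient
        ct = cl * np.sin(phi) + cd * np.cos(phi)     # torque-wise coefficient
        k = sigma * cn / (4 * np.sin(phi) ** 2)
        kp = sigma * ct / (4 * np.sin(phi) * np.cos(phi))
        a_new, ap_new = k / (1 - k), kp / (1 + kp)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a += relax * (a_new - a)                     # under-relaxed update
        ap += relax * (ap_new - ap)
    W2 = (V * (1 + a)) ** 2 + (omega * r * (1 - ap)) ** 2
    dT = 0.5 * rho * W2 * B * chord * cn             # thrust per unit radius
    dQ = 0.5 * rho * W2 * B * chord * ct * r         # torque per unit radius
    return dT, dQ

print(bem_element(r=0.15, chord=0.03, pitch=np.radians(20.0), V=15.0, rpm=6000))
```

Integrating dT and dQ over the blade radius, and sweeping rotational speed, gives the thrust, torque and efficiency curves that the optimizer then couples to the brushless motor model.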

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 103
12568 The Mediating Role of Store Personality in the Relationship Between Self-Congruity and Manifestations of Loyalty

Authors: María de los Ángeles Crespo López, Carmen García García

Abstract:

The highly competitive nature of today's globalised marketplace requires that brands and stores develop effective commercial strategies to ensure their economic survival. Maintaining the loyalty of existing customers constitutes one key strategy that yields the best results. Although the relationship between consumers' self-congruity and their manifestations of loyalty towards a store has been investigated, the role of store personality in this relationship remains unclear. In this study, multiple parallel mediation analysis was used to examine the effect of Store Personality on the relationship between Self-Congruity of consumers and their Manifestations of Loyalty. For this purpose, 457 Spanish consumers of the Fnac store completed three self-report questionnaires assessing Store Personality, Self-Congruity, and Store Loyalty. The data were analyzed using the SPSS macro PROCESS. The results revealed that three dimensions of Store Personality, namely Exciting, Close and Competent Store, positively and significantly mediated the relationship between Self-Congruity and Manifestations of Loyalty. The indirect effect of Competent Store was the greatest. This means that a consumer with higher levels of Self-Congruity with the store will exhibit more Manifestations of Loyalty when the store is perceived as Exciting, Close or Competent. These findings suggest that more attention should be paid to the perceived personality of stores for the development of effective marketing strategies to maintain or increase consumers' manifestations of loyalty towards stores.
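As a hedged sketch of how a parallel mediation of this kind can be estimated outside the PROCESS macro (indirect effects a·b per mediator with percentile bootstrap confidence intervals), the code below uses simulated data; the column names and effect sizes are illustrative assumptions, not the study's questionnaire.

```python
# Sketch: parallel mediation (three mediators) with percentile bootstrap CIs
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 457
df = pd.DataFrame({"self_congruity": rng.normal(size=n)})
for m in ["exciting", "close", "competent"]:
    df[m] = 0.5 * df["self_congruity"] + rng.normal(size=n)
df["loyalty"] = (0.3 * df[["exciting", "close", "competent"]].sum(axis=1)
                 + 0.2 * df["self_congruity"] + rng.normal(size=n))

mediators = ["exciting", "close", "competent"]

def indirect_effects(data):
    # b paths: loyalty regressed on all mediators plus the predictor
    b_fit = sm.OLS(data["loyalty"],
                   sm.add_constant(data[mediators + ["self_congruity"]])).fit()
    effects = {}
    for m in mediators:
        # a path: mediator regressed on the predictor
        a_fit = sm.OLS(data[m], sm.add_constant(data["self_congruity"])).fit()
        effects[m] = a_fit.params["self_congruity"] * b_fit.params[m]
    return effects

point = indirect_effects(df)
boot = {m: [] for m in mediators}
for _ in range(1000):                       # percentile bootstrap
    sample = df.sample(n, replace=True)
    for m, v in indirect_effects(sample).items():
        boot[m].append(v)
for m in mediators:
    lo, hi = np.percentile(boot[m], [2.5, 97.5])
    print(f"{m}: indirect = {point[m]:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```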

Keywords: multiple parallel mediation, PROCESS, self-congruence, store loyalty, store personality

Procedia PDF Downloads 150
12567 Investigating the Potential for Introduction of Warm Mix Asphalt in Kuwait Using the Volcanic Ash

Authors: H. Al-Baghli, F. Al-Asfour

Abstract:

The asphalt technology currently applied to Kuwait's road pavement infrastructure is hot mix asphalt (HMA), including both pen-grade and polymer-modified bitumens (PMBs), which is produced and compacted at high temperatures ranging from 150 to 180 °C. There are currently no specifications for warm and cold mix asphalts in the asphalt standard and specifications of Kuwait's Ministry of Public Works (MPW). The conventional HMA process is energy intensive and directly responsible for the emission of greenhouse gases and other environmental hazards into the atmosphere, leading to significant environmental impacts and raising health risks to labourers on site. Warm mix asphalt (WMA) technology, a sustainable alternative preferred in multiple countries, has many environmental advantages because it requires production temperatures 20 to 40 °C lower than HMA. The temperature reduction achieved by WMA originates from multiple technologies, including foaming and chemical or organic additives, that aim to reduce bitumen viscosity and improve mix workability. This paper presents a literature review of WMA technologies and techniques, followed by an experimental study comparing the results of WMA samples produced with a water-containing additive (foaming process) at different compaction temperatures against the volumetric properties of an HMA control mix designed in accordance with the MPW's new specifications and guidelines.

Keywords: warm-mix asphalt, water-bearing additives, foaming-based process, chemical additives, organic additives

Procedia PDF Downloads 118
12566 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data in the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks. They are realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 300
12565 Numerical and Sensitivity Analysis of Modeling the Newcastle Disease Dynamics

Authors: Nurudeen Oluwasola Lasisi

Abstract:

Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present novel quarantine-adjusted incidence and linear incidence model equations for Newcastle disease. We consider the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of disease-free equilibrium points was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out in order to identify the most sensitive parameters of disease transmission. This revealed that as the parameters β, ω, and Λ increase while other parameters are kept constant, the effective reproduction number R_ev increases, implying that these parameters increase the endemicity of the infection. Conversely, when the parameters μ, ε, γ, δ_1, and α increase while other parameters are kept constant, the effective reproduction number R_ev decreases, implying that these parameters decrease the endemicity of the infection, as they have negative sensitivity indices. Analytical results were numerically verified by the Differential Transformation Method (DTM), and quantitative views of the model equations were showcased. We established that as the contact rate (β) increases, the effective reproduction number R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the quarantined population decreases, R_ev decreases. The simulation results showed that the infected population increases as the susceptible population approaches zero, and that the vaccinated population increases when the infected population decreases, with a simultaneous increase in the recovered population.
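A common way to quantify this kind of parameter ranking is the normalised forward sensitivity index S_p = (p/R_ev)·∂R_ev/∂p. The sketch below computes it numerically for an illustrative placeholder expression that merely reproduces the sign structure reported above; it is not the paper's actual threshold formula, and the parameter values are assumptions.

```python
# Sketch: normalised forward sensitivity indices of an effective reproduction number
import numpy as np

def R_ev(beta, omega, Lam, mu, eps, gamma, delta1, alpha):
    # Placeholder threshold with the reported sign structure (not the paper's formula)
    return beta * omega * Lam / (mu * (mu + eps + gamma + delta1 + alpha))

params = dict(beta=0.4, omega=0.3, Lam=100.0, mu=0.02,
              eps=0.1, gamma=0.15, delta1=0.05, alpha=0.08)

base = R_ev(**params)
for name, value in params.items():
    bumped = dict(params, **{name: value * 1.001})   # 0.1 % perturbation
    dR = (R_ev(**bumped) - base) / (0.001 * value)   # finite-difference derivative
    print(f"S_{name} = {value / base * dR:+.3f}")
```

Parameters with positive indices (here β, ω, Λ) raise R_ev when increased, while those with negative indices lower it, mirroring the conclusions stated in the abstract.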

Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical, sensitivity analysis

Procedia PDF Downloads 39
12564 The Relationship between Corporate Governance and Intellectual Capital Disclosure: Malaysian Evidence

Authors: Rabiaal Adawiyah Shazali, Corina Joseph

Abstract:

The disclosure of Intellectual Capital (IC) information is becoming increasingly vital in today's knowledge-based economy. Companies are advised by accounting bodies to enhance IC disclosure, which complements conventional financial disclosures. There are no accounting standards for Intellectual Capital Disclosure (ICD); therefore, the disclosure is entirely voluntary. Hence, this study aims to investigate the extent of ICD and to examine the relationship between corporate governance and ICD in Malaysia. The study employed content analysis of the 2012 annual reports of the top 100 public listed companies in Malaysia. The uniqueness of this study lies in its underpinning theory: it applies institutional isomorphism theory to explain the effect of corporate governance attributes on ICD. To achieve the stated objective, multiple regression analysis was employed. The descriptive statistics indicate that public listed companies in Malaysia have increased their awareness of the importance of ICD. Furthermore, results from the multiple regression analysis confirm that corporate governance affects a company's ICD, with the frequency of audit committee meetings and board size positively influencing the level of ICD. The findings of this study provide an incentive for companies in Malaysia to enhance their disclosure of IC. In addition, this study could assist Bursa Malaysia and other regulatory bodies in developing proper guidelines for the disclosure of IC.

Keywords: annual report, content analysis, corporate governance, intellectual capital disclosure

Procedia PDF Downloads 209
12563 Case of A Huge Retroperitoneal Abscess Spanning from the Diaphragm to the Pelvic Brim

Authors: Christopher Leung, Tony Kim, Rebecca Lendzion, Scott Mackenzie

Abstract:

Retroperitoneal abscesses are a rare but serious condition with often delayed diagnosis, non-specific symptoms, multiple causes and high morbidity/mortality. With the advent of more readily available cross-sectional imaging, retroperitoneal abscesses are treated earlier and better outcomes are achieved. Occasionally, a retroperitoneal abscess presents as a huge collection, as in this 53-year-old male. With a background of chronic renal disease and left partial nephrectomy, this gentleman presented with a one-month history of left flank pain without any other symptoms, including fevers or abdominal pain. CT of the abdomen and pelvis demonstrated a huge retroperitoneal abscess spanning from the diaphragm, abutting the spleen, down to the iliopsoas muscle and abutting the iliac vessels at the pelvic brim. This large retroperitoneal abscess required open drainage as well as drainage by interventional radiology. A long course of intravenous antibiotics and multiple drainage procedures were required to drain the abscess. His blood and fluid cultures grew Proteus species, suggesting a urinary source, likely his non-functioning kidney, which had undergone partial nephrectomy. Such a huge retroperitoneal abscess has rarely been described in the literature. The learning point here is that the basic principle of source control and antibiotics is paramount in treating retroperitoneal abscesses regardless of the size of the abscess.

Keywords: retroperitoneal abscess, retroperitoneal mass, sepsis, genitourinary infection

Procedia PDF Downloads 212
12562 Electrical Fault Detection of Photovoltaic System: A Short-Circuit Fault Case

Authors: Moustapha H. Ibrahim, Dahir Abdourahman

Abstract:

This document presents a short-circuit fault detection process for a photovoltaic (PV) system. The proposed method is developed in MATLAB/Simulink. Whatever the size of the installation, it determines the number of short-circuited modules. The proposed algorithm indicates the presence or absence of an abnormality in the power of the PV system through measurements of hourly global irradiation, power output, and ambient temperature. If a fault is detected, it displays the number of modules in short circuit. This fault detection method has been successfully tested on two different PV installations.
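A hedged sketch of the underlying idea (comparing measured power against the power expected from irradiation and ambient temperature, then converting the shortfall into a module count) is given below; the simple PV model, its coefficients and the string size are illustrative assumptions, not the paper's Simulink model.

```python
# Sketch: estimate the number of short-circuited modules from a power deficit
def expected_module_power(irradiance, t_ambient,
                          p_stc=250.0, noct=45.0, gamma=-0.004):
    """Very simple module model: STC power scaled by irradiance with a linear
    temperature derating (all parameters are placeholders)."""
    t_cell = t_ambient + (noct - 20.0) / 800.0 * irradiance
    return p_stc * irradiance / 1000.0 * (1.0 + gamma * (t_cell - 25.0))

def count_shorted_modules(p_measured, irradiance, t_ambient, n_modules=20):
    p_module = expected_module_power(irradiance, t_ambient)
    p_expected = n_modules * p_module
    deficit = max(0.0, p_expected - p_measured)
    n_faulty = round(deficit / p_module)          # whole modules lost
    return ("fault detected", n_faulty) if n_faulty > 0 else ("healthy", 0)

# Hourly sample: 800 W/m², 30 °C, 3.1 kW measured from a 20-module string
print(count_shorted_modules(p_measured=3100.0, irradiance=800.0, t_ambient=30.0))
```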

Keywords: PV system, short-circuit, fault detection, modelling, MATLAB-Simulink

Procedia PDF Downloads 226
12561 Parallel Processing in near Absence of Attention: A Study Using Dual-Task Paradigm

Authors: Aarushi Agarwal, Tara Singh, I.L Singh, Anju Lata Singh, Trayambak Tiwari

Abstract:

Simple discrimination in the near absence of attention has been widely observed. Dual-task studies with natural scenes have been claimed to be preattentive in nature, facilitating categorization simultaneously with an attentionally demanding task. In this study, multiple images were presented at the periphery, initiating parallel processing in the near absence of attention. For the central demanding task, rotated letters were presented in both conditions, while natural and animal images were presented in the periphery. To understand the breakpoint of the ability to perform in the near absence of attention, one, two, or three peripheral images were presented simultaneously with the central task, and subjects had to respond when all belonged to the same category. Individual participant performance did not show a significant difference between the central and peripheral tasks when a single peripheral image was shown. With two images, high-level parallel processing could take place with few attentional resources. The eye-tracking results support this, as no major saccade was made in a large number of trials. The presentation of three images proved to be the breaking point of the capacity to perform without attentional assistance, as participants showed a confused eye-gaze pattern and failed to discriminate the natural and animal images. Thus, we conclude that attention and awareness are independent mechanisms with limited capacities.

Keywords: attention, dual-task paradigm, parallel processing, break point, saccade

Procedia PDF Downloads 212
12560 Particle Swarm Optimization and Quantum Particle Swarm Optimization to Multidimensional Function Approximation

Authors: Diogo Silva, Fadul Rodor, Carlos Moraes

Abstract:

This work compares the results of multidimensional function approximation using two algorithms: the classical Particle Swarm Optimization (PSO) and the Quantum Particle Swarm Optimization (QPSO). Both algorithms were tested on three functions with different characteristics - the Rosenbrock, the Rastrigin, and the sphere functions - by increasing their number of dimensions. As a result, this study shows that the higher the function space, i.e., the larger the function dimension, the more evident the advantages of using the QPSO method compared to the PSO method in terms of performance and the number of iterations necessary to reach the stop criterion.
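For reference, a minimal classical PSO on the sphere benchmark might look like the sketch below; QPSO would replace the velocity update with quantum-behaved position sampling, and the hyper-parameters shown are typical textbook values rather than those used by the authors.

```python
# Sketch: classical PSO minimising the sphere function
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)

def pso(f, dim=30, n_particles=40, iters=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = f(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(sphere)
print("best sphere value after 500 iterations:", best_val)
```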

Keywords: PSO, QPSO, function approximation, AI, optimization, multidimensional functions

Procedia PDF Downloads 576
12559 Neighbour Cell List Reduction in Multi-Tier Heterogeneous Networks

Authors: Mohanad Alhabo, Naveed Nawaz

Abstract:

An ongoing call or data session must be maintained to ensure a good quality of service. This can be accomplished by performing the handover procedure while the user is on the move. However, the dense deployment of small cells in 5G networks is a challenging issue due to the extensive number of handovers. In this paper, a neighbour cell list method is proposed to reduce the number of target small cells and hence minimize the number of handovers. The neighbour cell list is built by omitting cells that could cause an unnecessary handover or a handover failure because of the user's short time of stay in these cells. A multi-attribute decision-making technique, simple additive weighting, is then applied to the optimized neighbour cell list. A multi-tier small cell network is considered in this work. The performance of the proposed method is analysed and compared with that of existing methods. Results show that our method decreases the candidate small cell list, unnecessary handovers, handover failures, and short-time-of-stay cells compared to the competing method.
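A hedged sketch of the two steps described (filtering out short time-of-stay cells, then ranking the remaining candidates with simple additive weighting) is shown below; the attributes, weights and thresholds are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: time-of-stay filtering followed by simple additive weighting (SAW)
import numpy as np

# Candidate small cells: [RSRP (dBm), predicted time-of-stay (s), load (0-1)]
cells = {"SC1": [-85.0, 12.0, 0.60],
         "SC2": [-92.0,  3.0, 0.20],
         "SC3": [-80.0,  9.0, 0.75],
         "SC4": [-88.0, 20.0, 0.40]}

min_stay = 5.0                                   # omit likely ping-pong cells
candidates = {k: v for k, v in cells.items() if v[1] >= min_stay}

names = list(candidates)
matrix = np.array([candidates[k] for k in names])
benefit = np.array([True, True, False])          # load is a cost attribute
weights = np.array([0.6, 0.25, 0.15])

# Min-max normalisation, inverted for cost attributes
lo, hi = matrix.min(axis=0), matrix.max(axis=0)
norm = (matrix - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
ranking = sorted(zip(names, scores), key=lambda t: t[1], reverse=True)
print("handover target ranking:", ranking)
```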

Keywords: handover, HetNets, multi-attribute decision making, small cells

Procedia PDF Downloads 114
12558 The Contribution of the Lomé Charter to Combating Drugs Trafficking at Sea: Nigerian and South African Legal Perspectives

Authors: Obinna Emmanuel Nkomadu

Abstract:

The sea attracts many criminal activities, including drug trafficking. The illicit traffic in narcotic drugs and psychotropic substances by sea poses a serious threat to maritime security globally. The seizure of drugs, particularly on the African continent, is on the rise. In Southern Africa, South Africa is a major transit point for Latin American drugs and the largest market for illicit drugs entering the Southern African region. Nigeria and South Africa have taken a number of steps to address this scourge but, despite those steps, drug trafficking at sea continues. For that reason, and to combat a number of other threats to maritime security around the continent, a substantial number of AU members adopted the African Charter on Maritime Security and Safety and Development in Africa ("the Charter") in 2016. However, the Charter is yet to come into force because the required number of States have not acceded to or ratified it. This paper sets out the pre-existing international instruments on drugs and compares the domestic drug laws of Nigeria and South Africa with the relevant provisions of the Lomé Charter, in order to establish whether any legal steps are required to ensure that Nigeria and South Africa comply with their obligations under the Charter. Indeed, should both States decide to ratify it and should it come into force, they must cooperate with other relevant States in establishing policies, as well as regional and continental institutions, and ensure the implementation of such policies. The paper urges the States to ratify the Charter urgently, as it is a step in the right direction in the prevention and repression of drug trafficking in the African maritime domain.

Keywords: cooperation against drugs trafficking at sea, Lomé Charter, maritime security, Nigerian and South Africa legislation on drugs

Procedia PDF Downloads 90
12557 Environmental Forensic Analysis of the Shoreline Microplastics Debris on the Limbe Coastline, Cameroon

Authors: Ndumbe Eric Esongami, Manga Veronica Ebot, Foba Josepha Tendo, Yengong Fabrice Lamfu, Tiku David Tambe

Abstract:

The prevalence and unpleasant nature of plastic pollution constantly observed on beach shores during stormy events has prompted researchers worldwide to work on sustainable economic and environmental designs for plastics, especially in Cameroon, a major tourist destination in the Central Africa region. The inconsistent protocols developed by researchers have added to this burden; thus, the morphological characterisation of microplastics is a matter of concern. The prime aim of the study is to morphologically identify, quantify and forensically understand the distribution of each plastic polymer composition. Duplicate 2 × 2 m (4 m²) quadrats were sampled on each beach per month over an 8-month period across five purposively selected beaches along the Limbe-Idenau coastline, Cameroon. Collected plastic samples were thoroughly washed and separated using a 2 mm sieve. Only particles of size < 2 mm were considered and taken forward through the microplastics laboratory analytical process. Established step-by-step methodological procedures of particle filtration, organic matter digestion, density separation, particle extraction and polymer identification, including microscopy, were applied to the beach microplastic samples. Microplastics were observed in every sample, beach and month, with an overall abundance of 241 particles weighing 89.15 g in total, and a mean abundance of 2 particles/m² (0.69 g/m²) and 6 particles/month (2.0 g/m²). The accumulation of beach shoreline MPs rose dramatically towards decreasing size, with microbeads and fibres only found in the < 1 mm size fraction. Approximately 75% of beach MP contamination, by average particle number, was found at the LDB 2, LDB 1 and IDN beaches, while the most dominant polymer types observed were PP, PE and PS across all morphological parameters analysed. Beach MP accumulation varied significantly both temporally and spatially at p = 0.05. ANOVA and Spearman's rank correlation showed relationships between the size categories considered in this study. In terms of polymer analysis, the colour class showed that white-coloured MPs were dominant, with 50 particles (22.25 g), distributed by number among PP (25), PE (15) and PS (5). The shape class revealed that irregularly shaped MPs were dominant, with 98 particles (30.5 g) and higher abundance by number in PP (39), PE (33) and PS (11). Similarly, the type class showed that fragmented MPs were dominant, with 80 particles (25.25 g) and higher abundance by number in PP (30), PE (28) and PS (15). Equally, the size class revealed that MPs in the 1.5-1.99 mm range had the highest abundance, 102 particles (51.77 g), with higher concentrations observed in PP (47), PE (41) and PS (7). Finally, the weight class showed that MPs weighing 0.01 g were dominant, with 98 particles (56.57 g) and varied abundance by number in PP (49), PE (29) and PS (13). The forensic investigation of the pollution indicated that the majority of the beach microplastics are sourced from the site or nearby area, and useful conclusions could be drawn regarding the pathways of pollution. Fragmented microplastic, a significant component of the sample, was found to be sourced from recreational activities and partly from fishing boat installation and repair activities carried out close to the shore.

Keywords: forensic analysis, beach MPs, particle number, polymer composition, Cameroon

Procedia PDF Downloads 72
12556 An Overbooking Model for Car Rental Service with Different Types of Cars

Authors: Naragain Phumchusri, Kittitach Pongpairoj

Abstract:

Overbooking is a very useful revenue management technique that can help reduce costs caused by either undersales or oversales. In this paper, we propose an overbooking model for two types of cars that minimizes the total cost for a car rental service. With two types of cars, there is the possibility of upgrading from the lower type to the upper type, which makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. A sensitivity analysis of the parameters is conducted to observe their effects on the optimal solution. Model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level using appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. It was found that the total cost from the optimal solution is only 0.5 to 1 percent (on average) lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that our proposed simplification method using regression analysis can effectively estimate the overbooking level.
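To illustrate the cost trade-off being optimized, a hedged Monte Carlo sketch of a two-type overbooking cost evaluation is shown below; all capacities, show-up probabilities and cost coefficients are illustrative assumptions, and simulation over a grid stands in for the paper's analytical stochastic model.

```python
# Sketch: expected cost of overbooking two car types with upgrades and denials
import numpy as np

rng = np.random.default_rng(3)

def expected_cost(bookings_low, bookings_up, cap_low=30, cap_up=20,
                  p_show=0.85, c_idle=40.0, c_upgrade=25.0, c_denied=120.0,
                  n_sim=20000):
    show_low = rng.binomial(bookings_low, p_show, n_sim)
    show_up = rng.binomial(bookings_up, p_show, n_sim)
    overflow_low = np.maximum(show_low - cap_low, 0)
    idle_up = np.maximum(cap_up - show_up, 0)
    upgrades = np.minimum(overflow_low, idle_up)      # lower-type customers upgraded
    denied = np.maximum(show_up - cap_up, 0) + overflow_low - upgrades
    idle = np.maximum(cap_low - show_low, 0) + idle_up - upgrades
    return np.mean(c_idle * idle + c_upgrade * upgrades + c_denied * denied)

# Grid search over overbooking levels for both car types
grid = [(bl, bu, expected_cost(bl, bu))
        for bl in range(30, 41) for bu in range(20, 31)]
best = min(grid, key=lambda t: t[2])
print("best booking limits (low, up) and expected cost:", best)
```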

Keywords: overbooking, car rental industry, revenue management, stochastic model

Procedia PDF Downloads 163
12555 A Non-Parametric Analysis of District Disaster Management Authorities in Punjab, Pakistan

Authors: Zahid Hussain

Abstract:

The Provincial Disaster Management Authority (PDMA) Punjab was established under the NDM Act 2010 and, now working under the Senior Member Board of Revenue, deals with the whole spectrum of disasters, including preparedness, mitigation, early warning, response, relief, rescue, recovery and rehabilitation. The District Disaster Management Authorities (DDMAs) act as the implementing arms of PDMA in the districts to respond to any disaster. The DDMAs' role is very important in disaster mitigation, response and recovery, as they are the first responders and the tier closest to the community. Keeping in view the significant role of DDMAs, their technical and human resource capacities need to be assessed. For calculating the technical efficiencies of the DDMAs in Punjab, three inputs (number of labourers, number of transport vehicles and number of equipment items), two outputs (relief assistance and number of rescues) and 25 districts as decision-making units have been selected. For this purpose, eight years of secondary data, from 2005 to 2012, have been used. The Data Envelopment Analysis (DEA) technique has been applied. DEA estimates the relative efficiency of peer entities, or entities performing similar tasks. The findings show that all decision-making units (DMUs, i.e., districts) are inefficient in terms of technological and scale efficiency, while technically efficient in terms of pure and total factor productivity efficiency. All DMUs were found technically inefficient only in the year 2006. Labour and equipment were not used efficiently in the years 2005, 2007, 2008, 2009 and 2012. Furthermore, only three years, 2006, 2010 and 2011, show that districts could not efficiently use transportation in a disaster situation. This study suggests that all districts should curtail labour, transportation and equipment to become efficient. Similarly, overall, the districts are not required to achieve higher numbers of rescues and relief assistance; these should be reduced.
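For readers unfamiliar with DEA, an input-oriented CCR efficiency score can be obtained for each DMU by solving a small linear programme, as in the hedged sketch below; the input/output numbers are made-up placeholders, not the study's district data.

```python
# Sketch: input-oriented CCR DEA, one LP per decision-making unit (DMU)
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (districts); columns = inputs / outputs
X = np.array([[120, 15, 40],      # labourers, transport vehicles, equipment
              [ 80, 10, 25],
              [200, 30, 60],
              [ 60, 12, 20]], dtype=float)
Y = np.array([[300, 45],          # relief assistance, rescues
              [260, 30],
              [400, 50],
              [250, 35]], dtype=float)

n, m, s = X.shape[0], X.shape[1], Y.shape[1]
for o in range(n):
    c = np.zeros(1 + n); c[0] = 1.0                  # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]                              # -theta * x_o
    A_ub[:m, 1:] = X.T                               # + sum_j lambda_j x_j <= 0
    A_ub[m:, 1:] = -Y.T                              # sum_j lambda_j y_j >= y_o
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    print(f"DMU {o + 1}: technical efficiency = {res.x[0]:.3f}")
```

A score of 1 marks a DMU on the efficient frontier; a score below 1 indicates by how much its inputs could be proportionally reduced while keeping the same outputs.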

Keywords: DEA, DMU, PDMA, DDMA

Procedia PDF Downloads 239
12554 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and climate change monitoring. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing them using data pre-processing techniques, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification, with the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
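A hedged sketch of running a DeepLabv3 model for multi-class land cover segmentation with torchvision is shown below; the 7-class setting mirrors DeepGlobe-style labels, the weights here are untrained, and dataset loading, training and augmentation are omitted.

```python
# Sketch: DeepLabv3 (atrous-convolution backbone) inference on one satellite tile
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

num_classes = 7                               # e.g. urban, agriculture, water, forest...
model = deeplabv3_resnet50(weights=None, num_classes=num_classes)
model.eval()

# One fake 3-band satellite tile, normalised to [0, 1]
tile = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    logits = model(tile)["out"]               # (1, num_classes, 512, 512)
    prediction = logits.argmax(dim=1)         # per-pixel class map

# Per-class pixel counts can be converted to area using the tile's ground sampling distance
print(prediction.shape, torch.bincount(prediction.flatten(), minlength=num_classes))
```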

Keywords: area calculation, atrous convolution, DeepGlobe land cover classification, DeepLabv3, land cover classification, ResNet-50

Procedia PDF Downloads 135
12553 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the subsoil of the Kaliningrad region's shelf. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from discovered deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, viscosity, density and fractional composition as accurately as possible. Among the methods considered, gas chromatography is one of the most capacious, allowing the rapid generation of a significant amount of initial data. The article examines aspects of applying the gas chromatography method to determine the chemical characteristics of hydrocarbons from the Kaliningrad shelf fields, together with a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets have been applied, which makes it possible to evaluate the identity of the deposits, to refine the amount of reserves, and to make a number of assumptions about the genesis of the hydrocarbons under analysis.

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 151
12552 Optimisation of the Input Layer Structure for Feedforward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. By utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and optimal number of neurons for the neural network are investigated.
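A hedged sketch of the correlation-based regressor selection step is given below, using a toy second-order system in place of the vehicle dynamics data; the lag range and correlation threshold are design-choice assumptions, not the paper's settings.

```python
# Sketch: rank lagged NARX regressors by |correlation| with the output and keep the strongest
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 2000
u = rng.normal(size=n)                          # exogenous input, e.g. steering
y = np.zeros(n)                                 # output, e.g. yaw rate
for t in range(2, n):                           # toy second-order dynamics
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.05 * rng.normal()

max_lag = 5
frame = pd.DataFrame({"y": y})
for lag in range(1, max_lag + 1):
    frame[f"y_lag{lag}"] = pd.Series(y).shift(lag)
    frame[f"u_lag{lag}"] = pd.Series(u).shift(lag)
frame = frame.dropna()

# Rank candidate regressors by absolute correlation with the current output
corr = frame.corr()["y"].drop("y").abs().sort_values(ascending=False)
selected = corr[corr > 0.3].index.tolist()      # threshold is a design choice
print(corr.round(3))
print("regressors kept for the NN input layer:", selected)
```

The retained regressors then define the reduced input layer of the NARX network, whose neuron count can be tuned separately.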

Keywords: correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 367
12551 The Impact of Missense Mutation in Phosphatidylinositol Glycan Class A Associated to Paroxysmal Nocturnal Hemoglobinuria and Multiple Congenital Anomalies-Hypotonia-Seizures Syndrome 2: A Computational Study

Authors: Ashish Kumar Agrahari, Amit Kumar

Abstract:

Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal blood disorder that manifests with hemolytic anemia, thrombosis, and peripheral blood cytopenias. The disease is caused by the deficiency of two glycosylphosphatidylinositol (GPI)-anchored proteins (CD55 and CD59) in the hemopoietic stem cells. The deficiency of GPI-anchored proteins has been associated with somatic mutations in phosphatidylinositol glycan class A (PIGA). However, the PIGA mutations that do not cause PNH are associated with multiple congenital anomalies-hypotonia-seizures syndrome 2 (MCAHS2). To the best of our knowledge, no computational study has been performed to explore the atomistic-level impact of PIGA mutations on the structure and dynamics of the protein. In the current work, we are mainly interested in gaining insights into the molecular mechanism of PIGA mutations. In the initial step, we screened the most pathogenic mutations from the pool of publicly available mutations. Further, to gain a better understanding, the pathogenic mutations were mapped to the modeled structure and subjected to 50 ns molecular dynamics simulations. Our computational study suggests that four mutations are highly likely to alter the structural conformation and stability of the PIGA protein, which illustrates their association with the PNH and MCAHS2 phenotypes.

Keywords: homology modeling, molecular dynamics simulation, missense mutations, PNH, MCAHS2, PIGA

Procedia PDF Downloads 141
12550 Sparse Modelling of Cancer Patients’ Survival Based on Genomic Copy Number Alterations

Authors: Khaled M. Alqahtani

Abstract:

Copy number alterations (CNA) are variations in the structure of the genome, where certain regions deviate from the typical two chromosomal copies. These alterations are pivotal in understanding tumor progression and are indicative of patients' survival outcomes. However, effectively modeling patients' survival based on their genomic CNA profiles while identifying relevant genomic regions remains a statistical challenge. Various methods, such as the Cox proportional hazard (PH) model with ridge, lasso, or elastic net penalties, have been proposed but often overlook the inherent dependencies between genomic regions, leading to results that are hard to interpret. In this study, we enhance the elastic net penalty by incorporating an additional penalty that accounts for these dependencies. This approach yields smooth parameter estimates and facilitates variable selection, resulting in a sparse solution. Our findings demonstrate that this method outperforms other models in predicting survival outcomes, as evidenced by our simulation study. Moreover, it allows for a more meaningful interpretation of genomic regions associated with patients' survival. We demonstrate the efficacy of our approach using both real data from a lung cancer cohort and simulated datasets.
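As a hedged baseline for the kind of model the abstract extends, an elastic-net-penalised Cox fit over per-region CNA features might look like the sketch below; the additional penalty that couples neighbouring genomic regions is not included here, and the data are simulated placeholders rather than the lung cancer cohort.

```python
# Sketch: elastic-net-penalised Cox regression on simulated copy number features
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n, p = 300, 40                                   # patients, genomic regions
cna = rng.normal(size=(n, p))                    # e.g. log-ratio copy number
risk = 0.8 * cna[:, 5] + 0.8 * cna[:, 6]         # two adjacent "driver" regions
time = rng.exponential(scale=np.exp(-risk))      # survival times
event = rng.random(n) < 0.7                      # ~30% censoring

df = pd.DataFrame(cna, columns=[f"region_{j}" for j in range(p)])
df["time"], df["event"] = time, event.astype(int)

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)   # elastic-net penalty
cph.fit(df, duration_col="time", event_col="event")

coef = cph.params_.sort_values(key=np.abs, ascending=False)
print(coef.head(8))                              # regions 5 and 6 should dominate
```

The smoothness-inducing penalty described in the abstract would add a term that shrinks differences between coefficients of adjacent regions, yielding the spatially coherent, sparse solution the authors report.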

Keywords: copy number alterations, cox proportional hazard, lung cancer, regression, sparse solution

Procedia PDF Downloads 39
12549 Determinants of Life Satisfaction in Canada: A Causal Modelling Approach

Authors: Rose Branch-Allen, John Jayachandran

Abstract:

Background and purpose: Canada is a pluralistic, multicultural society with an ethno-cultural composition that has been shaped over time by immigrants and their descendants. Although Canada welcomes these immigrants, many will endure hardship and assimilation difficulties. Despite these life hurdles, surveys consistently disclose high life satisfaction for all Canadians. Most research studies on Life Satisfaction/Subjective Well-being (SWB) have focused on one main determinant and a variety of sociodemographic variables to delineate the determinants of life satisfaction. However, very few research studies examine life satisfaction from a holistic approach. In addition, we need to understand the causal pathways leading to life satisfaction and develop theories that explain why certain variables differentially influence the different components of SWB. The aim of this study was to utilize a holistic approach to construct a causal model and identify the major determinants of life satisfaction. Data and measures: This study utilized data from the General Social Survey, with a sample size of 19,597. The exogenous concepts included age, gender, marital status, household size, socioeconomic status, ethnicity, location, immigration status, religiosity, and neighborhood. The intervening concepts included health, social contact, leisure, enjoyment, work-family balance, quality time, domestic labor, and sense of belonging. The endogenous concept, life satisfaction, was measured by multiple indicators (Cronbach's alpha = .83). Analysis: Several multiple regression models were run sequentially to estimate path coefficients for the causal model. Results: Overall, above-average satisfaction with life was reported for respondents with specific socio-economic, demographic and lifestyle characteristics. With regard to exogenous factors, respondents who were female, younger, married, from a high socioeconomic status background, born in Canada, very religious, and who demonstrated a high level of neighborhood interaction had greater satisfaction with life. Similarly, the intervening concepts suggested respondents had greater life satisfaction if they had better health, more social contact, less time on passive leisure activities and more time on active leisure activities, more time with family and friends, more enjoyment from volunteer activities, less time on domestic labor, and a greater sense of belonging to the community. Conclusions and implications: Our results suggest that a holistic approach is necessary for establishing the determinants of life satisfaction and that life satisfaction is not merely composed of positive or negative affect; rather, the causal process leading to it must be understood. Even though most of our findings are consistent with previous studies, a significant number of causal connections contradict some findings in the current literature. We provide possible explanations for these anomalies that researchers encounter in studying life satisfaction, along with policy implications.

Keywords: causal model, holistic approach, life satisfaction, socio-demographic variables, subjective well-being

Procedia PDF Downloads 351
12548 Complementary Mathematical Model for Underwater Vehicles under Load Variation Test Conditions

Authors: Erim Koyun

Abstract:

This paper aims to construct a mathematical model for underwater vehicles under load variation test conditions. Propeller effects on the underwater vehicle are investigated. A body with a counter-rotating propeller is analyzed by CFD methods, and the forces and moments are thus obtained. The propeller effects on the vehicle's hydrodynamic performance under load variation conditions are investigated. Additionally, pressure contours are examined for differences between the load conditions. The axial force equation is established using hydrodynamic coefficients, which comprise resistance, thrust, and additional coefficients that arise due to load variations. The additional coefficients help to express the axial force on the underwater vehicle completely. When the vehicle accelerates, an additional force occurs besides the thrust increment; this is the propeller effect on the body, and the mathematical model therefore covers it. For the CFD analysis, the incompressible, three-dimensional, unsteady Reynolds-averaged Navier-Stokes equations are used, and the numerical results are verified against experimental results. The overall goal of this study is to present a complementary mathematical model for a body with a counter-rotating propeller.

Keywords: counter rotating propeller, CFD, hydrodynamic mathematic model, hydrodynamics analysis, thrust deduction

Procedia PDF Downloads 132
12547 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning

Authors: Pooja Khanal, Huaming Zhang

Abstract:

Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when a reporter reports a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository on GitHub/GitLab which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors, such as human instinct, generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give that issue a label, another person reporting the same issue may label it differently. This way, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between the bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first using the text features from the reported issues. The optimal classifiers may include a combination of multiple classifiers stacked together. Then, those classifiers are used to cross-test other repositories, which allows the results to be deduced mathematically. The product of this ongoing research includes a formalized open-source GitHub issues database that is used to deduce the similarity of the labels pertaining to the different repositories.
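A hedged sketch of the per-repository classify-then-cross-test idea is given below using a TF-IDF plus logistic regression pipeline; the tiny issue lists are illustrative placeholders, not the formalized GitHub issues database described, and the stacked classifier combinations are omitted.

```python
# Sketch: train a label classifier on repository A, cross-test it on repository B
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

repo_a = [("Login button overlaps footer on small screens", "User Interface"),
          ("XSS possible through unsanitised comment field", "Security"),
          ("REST endpoint returns 500 on empty payload", "API"),
          ("Dark mode colours are unreadable", "User Interface"),
          ("Token leaks in debug logs", "Security"),
          ("Pagination parameter ignored by /v2/search", "API")]

repo_b = [("Session cookie sent without Secure flag", "Security"),
          ("GraphQL mutation rejects valid enum value", "API"),
          ("Dialog box not centred on 4K displays", "User Interface")]

texts_a, labels_a = zip(*repo_a)
texts_b, labels_b = zip(*repo_b)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts_a, labels_a)                      # classifier for repository A

# Cross-test on repository B: high agreement suggests the labels mean similar things
predictions = clf.predict(texts_b)
agreement = sum(p == t for p, t in zip(predictions, labels_b)) / len(labels_b)
print(list(zip(texts_b, predictions)), "agreement:", agreement)
```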

Keywords: bug classification, bug labels, GitHub issues, semantic differences

Procedia PDF Downloads 191
12546 Assessing the Socio-Economic Problems and Environmental Implications of Green Revolution in Uttar Pradesh, India

Authors: Naima Umar

Abstract:

The mid-1960s were a landmark in the history of Indian agriculture. It was in 1966-67 that a New Agricultural Strategy was put into practice to tide over chronic shortages of food grains in the country. The strategy adopted was the use of High-Yielding Varieties (HYV) of seeds (wheat and rice), which was popularly known as the Green Revolution. This phase of agricultural development has saved us from hunger and starvation and made the peasants more confident than ever before, but it has also created a number of socio-economic and environmental problems, such as the reduction in area under forest, salinization, waterlogging, soil erosion, lowering of the underground water table, soil, water and air pollution, decline in soil fertility, silting of rivers and the emergence of several diseases and health hazards. The state of Uttar Pradesh is bounded on the north by the country of Nepal, with the state of Uttarakhand on the northwest, Haryana on the west, Rajasthan on the southwest, Madhya Pradesh on the south and southwest, and Bihar on the east. It is situated between latitudes 23°52′N and 31°28′N and longitudes 77°3′E and 84°39′E. It is the fifth largest state of the country in terms of area and first in terms of population. Forming part of the Ganga plain, the state is crossed by a number of rivers which originate from the snowy peaks of the Himalayas. The fertile plain of the Ganga has led to a high concentration of population with high density and the dominance of agriculture as an economic activity. The present paper highlights the negative impact of the new agricultural technology on people's health and the environment and attempts to find the factors responsible for these implications. Karl Pearson's correlation coefficient technique has been applied by selecting one dependent variable (the Productivity Index) and a set of independent variables that may affect crop productivity in the districts of the state. These variables have been categorized as: X1 (cropping intensity), X2 (net irrigated area), X3 (canal-irrigated area), X4 (tube-well-irrigated area), X5 (area irrigated by other sources), X6 (consumption of chemical fertilizers (NPK) in kg/ha), X7 (number of wooden ploughs), X8 (number of iron ploughs), X9 (number of harrows and cultivators), X10 (number of thresher machines), X11 (number of sprayers), X12 (number of sowing instruments), X13 (number of tractors) and X14 (consumption of insecticides and pesticides in kg/000 ha). Data for 2001-2005 and 2006-2010 have been used, with five-year average values taken into consideration, based on secondary sources obtained from various government organizations, master plan reports, economic abstracts, district census handbooks and village and town directories, etc.; the data were processed in the standard statistical package SPSS, and the results obtained have been properly tabulated.

Keywords: agricultural technology, environmental implications, health hazards, socio-economic problems

Procedia PDF Downloads 301
12545 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption

Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif

Abstract:

Maintaining the factory-default battery endurance rate over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of the efficient use of energy by the application itself. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect mobile application power consumption data, which were then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of a mobile application.

Keywords: battery endurance, software metrics, mobile application, power consumption

Procedia PDF Downloads 390
12544 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 138
12543 Study on the Effects of Grassroots Characteristics on Reinforced Soil Performance by Direct Shear Test

Authors: Zhanbo Cheng, Xueyu Geng

Abstract:

The vegetation slope protection technique is economical, aesthetic and practical. Herbs are widely used in practice because of their rapid growth, strong erosion resistance, obvious slope protection effect and simple implementation, in which the root system of the grass plays a very important role. In this paper, by varying the grassroots quantity, grassroots diameter, grassroots length and number of grassroots-reinforced layers, direct shear tests were carried out to examine the change in the shear strength indexes of grassroots-reinforced soil under different reinforcement situations and to analyse the effects of grassroots characteristics on reinforced soil performance. The laboratory test results show that: (1) for a given grassroots diameter, grassroots length and number of reinforced layers, the shear strength and cohesion first increase and then decrease with increasing grassroots quantity; (2) for a given grassroots quantity, grassroots length and number of reinforced layers, the shear strength and cohesion rise with increasing grassroots diameter; (3) for a given grassroots diameter and number of reinforced layers, the shear strength and cohesion rise with increasing grassroots length within a certain range of grassroots quantity, while they first rise and then decline with increasing grassroots length once the grassroots quantity reaches a certain value; (4) for a given grassroots quantity, diameter and length, the shear strength and cohesion first climb and then decline with an increasing number of reinforced layers; and (5) the change in the internal friction angle is small across the different grassroots parameters. The research results are important for understanding the mechanism of vegetation protection of slopes and for determining the parameters of grass planting.

Keywords: direct shear test, reinforced soil, grassroots characteristics, shear strength indexes

Procedia PDF Downloads 170