Search results for: accounting estimates
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1304

284 Effect of Wettability Alteration on Production Performance in Unconventional Tight Oil Reservoirs

Authors: Rashid S. Mohammad, Shicheng Zhang, Xinzhe Zhao

Abstract:

In tight oil reservoirs, wettability alteration has generally been considered an effective way to remove fracturing fluid retained on fracture surfaces and consequently improve oil production. However, a reliable productivity prediction model relating wettability to oil production in tight oil wells has been lacking. In this paper, a new oil productivity prediction model of immiscible oil-water flow and miscible CO₂-oil flow accounting for wettability is developed. This mathematical model is established by considering two different length scales: the nanoporous network and the propped fractures. CO₂ diffusion in the nanoporous network and high-velocity non-Darcy flow in the propped fractures are considered by taking into account the effect of wettability alteration on capillary pressure and relative permeability. A laboratory experiment is also conducted to validate this model. The experiments were designed to compare water saturation profiles for different contact angles, revealing the fluid retention in rock pores that affects capillary force and relative permeability. Four brines with different concentrations were selected to create different contact angles. As the system becomes more oil-wet, water saturation decreases and, as a result, oil relative permeability increases; on the other hand, capillary pressure, which resists oil flow, increases as well. The oil production change due to wettability alteration is therefore the combined result of the changes in oil relative permeability and capillary pressure. The results indicate that wettability is a key factor for fracturing fluid retention removal and oil enhancement in tight reservoirs. By incorporating the laboratory tests into the mathematical model, this work shows that the relationship between wettability and oil production is not a simple linear pattern but a parabolic one. Additionally, it can be used for a better understanding of the optimization design of fracturing fluids.

Keywords: wettability, relative permeability, fluid retention, oil production, unconventional and tight reservoirs

Procedia PDF Downloads 226
283 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index, and the disappearance of lymphocytes from peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis and the cytokinesis-blocked micronucleus assay do not provide accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods for analyzing biological effects after exposure in the high dose range, e.g., during particle radiotherapy. Lately, Premature Chromosome Condensation (PCC) has become an important method in high-dose biodosimetry and a promising tool for assessing the response of cancer patients to treatment. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode in which 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the results, scoring was performed by 3 independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays in the range of 4.0 – 20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA; then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. Dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scorers, which is important for reducing the analysis time.
These investigations were conducted as a part of an extended examination of 60 MeV protons from AIC-144 isochronous cyclotron, at the Institute of Nuclear Physics in Kraków, Poland (IFJ PAN) by cytogenetic and molecular methods and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
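The dose-effect calibration step described above (counting excess chromosome fragments and fitting a dose-response curve) can be sketched as follows; the fragment yields and the linear fit are illustrative assumptions, not the published calibration data:

```python
import numpy as np

# Illustrative calibration data for the 4-20 Gy range: dose vs mean excess
# PCC fragments per cell. Hypothetical yields, not the published values.
dose = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
excess_fragments = np.array([1.9, 4.1, 6.0, 7.8, 10.2])

# Linear dose-effect fit: yield = c + alpha * D
alpha, c = np.polyfit(dose, excess_fragments, 1)

# Dose estimation for an unknown sample from its observed fragment yield
observed_yield = 5.0
estimated_dose = (observed_yield - c) / alpha
print(round(estimated_dose, 1), "Gy")
```

Inverting the fitted calibration curve in this way is what turns a scored fragment yield into a dose estimate.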

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 342
282 A Multi-Regional Structural Path Analysis of Virtual Water Flows Caused by Coal Consumption in China

Authors: Cuiyang Feng, Xu Tang, Yi Jin

Abstract:

Coal is the most important primary energy source in China and exerts a significant influence on its rapid economic growth. However, water resources have become a constraint on coal industry development because of the inverse geographical distribution of coal and water. To ease the pressure of water shortage, the '3 Red Lines' water policies were announced by the Chinese government, and a 'water for coal' plan was added to those policies in 2013. This study utilized structural path analysis (SPA) based on a multi-regional input-output table to quantify the virtual water flows caused by coal consumption at different stages. Results showed that the direct water input (the first stage) was the highest among all stages of coal consumption, accounting for approximately 30% of the total virtual water content. Regional analysis demonstrated that virtual water trade alleviated the pressure on water use for coal consumption in water-scarce areas, but the imported virtual water did not come from water-rich areas. Sectoral analysis indicated that direct inputs from the sectors 'production and distribution of electric power and heat power' and 'smelting and pressing of metals' accounted for the major virtual water flows, while the sectors 'chemical industry' and 'manufacture of non-metallic mineral products' were important indirect water consumers. With population and economic growth in China, the water demand-and-supply gap in coal consumption will become more pronounced. In addition to water efficiency improvement measures, the central government should adjust virtual water trade strategies to address local water scarcity issues. Water resources, as a main constraint, should be given high consideration in coal policy to promote the sustainable development of the coal industry.
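The structural path analysis step can be illustrated with a toy input-output calculation; the sector matrix, water coefficients, and final demand below are made-up values, not the study's MRIO data. SPA expands the Leontief inverse into a geometric series, so the total virtual water splits into production stages:

```python
import numpy as np

# Toy 3-sector direct-requirements matrix A, water coefficients w, and final
# demand f -- hypothetical values for illustration, not the study's MRIO data.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
w = np.array([5.0, 1.0, 0.5])    # direct water use per unit output
f = np.array([10.0, 20.0, 5.0])  # final demand (coal consumption in sector 0)

# SPA expands the Leontief inverse into production stages:
# w (I - A)^-1 f = w f + w A f + w A^2 f + ...
stages = [w @ np.linalg.matrix_power(A, k) @ f for k in range(30)]
total = w @ np.linalg.inv(np.eye(3) - A) @ f

print(stages[0], sum(stages), total)  # the first (direct) stage dominates
```

The per-stage terms are what the paper compares when it reports the direct water input as the largest single stage.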

Keywords: coal consumption, multi-regional input-output model, structural path analysis, virtual water

Procedia PDF Downloads 293
281 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education. It is broadly accepted that open education resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester – Fall 2022. The purpose in collecting the data is to understand how much could potentially be saved by using OER materials and to have a record for future studies. Literature Review: In past years, researchers identified that the rising cost of textbooks disproportionately harms students in higher education institutions and estimated the average cost of a textbook. For example, Nyamweya (2018) found, using a simple formula, that on average students save $116.94 per course when OER are adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded $100 was a reasonable per-course savings estimate. Ruth (2018) calculated an average textbook cost of $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb Class Schedule. Step two: View the textbook lists for the classes (campus bookstore prices). Step three: Calculate the average textbook price by averaging the new book and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting OER is variable, depending on the size of the college and the number of enrolled students.
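The four-step estimation procedure can be sketched in a few lines; the course records below are hypothetical, not SUNY Canton bookstore data:

```python
# Hypothetical course records (new price, used price, enrolled students) --
# illustrative numbers, not SUNY Canton bookstore data.
courses = [
    (150.00, 90.00, 30),
    (200.00, 120.00, 25),
    (95.00, 55.00, 40),
]

total_savings = 0.0
for new_price, used_price, students in courses:
    avg_price = (new_price + used_price) / 2  # step 3: average new/used price
    total_savings += avg_price * students     # step 4: multiply by enrolment

avg_textbook = sum((n + u) / 2 for n, u, _ in courses) / len(courses)
print(round(avg_textbook, 2), round(total_savings, 2))
```

Summing this product over all 233 non-OER courses is what yields the semester-level savings figure reported in the findings.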

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 67
280 Work-Related and Psychosocial Risk Factors for Musculoskeletal Disorders among Workers in an Automated Flexible Assembly Line in India

Authors: Rohin Rameswarapu, Sameer Valsangkar

Abstract:

Background: Globally, musculoskeletal disorders (MSDs) are the largest single cause of work-related illness, accounting for over 33% of all newly reported occupational illnesses. Risk factors for MSD need to be delineated to suggest means for amelioration. Material and methods: In this cross-sectional study, the prevalence of MSDs among workers in an electrical company assembly line and the socio-demographic and job characteristics associated with MSD were obtained through a semi-structured questionnaire. A quantitative assessment of the physical risk factors was carried out with the Rapid Upper Limb Assessment (RULA) tool, and psychosocial risk factors were measured on a Likert scale. Statistical analysis was conducted using Epi Info software, and descriptive and inferential statistics, including the chi-square and unpaired t-test, were obtained. Results: A total of 263 workers consented and participated in the study. Among them, 200 (76%) suffered from MSD. Most of the workers were aged between 18 and 27 years, and the majority, 198 of the 263 workers (75.2%), were women. A chi-square test was significant for an association between male gender and MSD, with a P value of 0.007. Among the MSD-positive group, 4 (2%) had a grand score of 5, 10 (5%) a grand score of 6, and 186 (93%) a grand score of 7 on RULA. There were significant differences between the non-MSD and MSD groups on five of the seven psychosocial domains, namely job demand, job monotony, co-worker support, decision control, and family and environment. Discussion: This cross-sectional study demonstrates a high prevalence of MSD among assembly line workers, with inherent physical and psychosocial risk factors, and recommends addressing not only physical but also psychosocial risk factors through proper ergonomic means, as both are essential to the well-being of the employee.
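The chi-square test of association used above can be reproduced schematically; the 2×2 counts below are illustrative, chosen only to match the reported totals (263 workers, 200 with MSD), not the study's raw cross-tabulation:

```python
# Hypothetical gender x MSD cross-tabulation. The counts are illustrative,
# chosen only to match the reported totals (263 workers, 200 with MSD).
table = [[58, 7],    # male:   MSD, no MSD
         [142, 56]]  # female: MSD, no MSD

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
print(round(chi2, 2), chi2 > 3.84)  # 3.84 = critical value at alpha=0.05, df=1
```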

Keywords: musculoskeletal disorders, India, occupational health, Rapid Upper Limb Assessment (RULA)

Procedia PDF Downloads 338
279 Characterization of Petrophysical Properties of Reservoirs in Bima Formation, Northeastern Nigeria: Implication for Hydrocarbon Exploration

Authors: Gabriel Efomeh Omolaiye, Jimoh Ajadi, Olatunji Seminu, Yusuf Ayoola Jimoh, Ubulom Daniel

Abstract:

Identification and characterization of the petrophysical properties of reservoirs in the Bima Formation were undertaken to understand their spatial distribution and impact on hydrocarbon saturation in the highly heterolithic siliciclastic sequence. The study was carried out using nine well logs from the Maiduguri and Baga/Lake sub-basins within the Borno Basin. The different log curves were combined to decipher the lithological heterogeneity of the serrated sand facies and to aid the geologic correlation of sand bodies within the sub-basins. Evaluation of the formation reveals largely undifferentiated to highly serrated and lenticular sand bodies, from which twelve reservoirs, named Bima Sand-1 to Bima Sand-12, were identified. The reservoir sand bodies are bifurcated by shale beds, which variably reduce their thicknesses from 0.61 to 6.1 m. The shale content of the sand bodies ranges from 11.00% (relatively clean) to as high as 88.00%. The formation also has variable porosity, with calculated total porosity ranging from as low as 10.00% to as high as 35.00%. Similarly, effective porosity values span between 2.00% and 24.00%. The irregular porosity values also account for the wide range of field average permeability estimates computed for the formation, which run from 0.03 to 319.49 mD. Hydrocarbon saturation (Sh) in the thin lenticular sand bodies also varies from 40.00% to 78.00%. Hydrocarbon was encountered in three intervals in Ga-1, four intervals in Da-1, two intervals in Ar-1, and one interval in Ye-1. The Ga-1 well encountered a 30.78 m hydrocarbon column in 14 thin sand lobes in Bima Sand-1, with thicknesses from 0.60 m to 5.80 m and an average saturation of 51.00%, while Bima Sand-2 intercepted a 45.11 m hydrocarbon column in 12 thin sand lobes with an average saturation of 61.00%, and Bima Sand-9 has a 6.30 m column in 4 thin sand lobes. Da-1 has hydrocarbon in Bima Sand-8 (5.30 m, Sh of 58.00% in 5 sand lobes), Bima Sand-10 (13.50 m, Sh of 52.00% in 6 sand lobes), Bima Sand-11 (6.20 m, Sh of 58.00% in 2 sand lobes), and Bima Sand-12 (16.50 m, Sh of 66% in 6 sand lobes). In the Ar-1 well, hydrocarbon occurs in Bima Sand-3 (2.40 m column, Sh of 48% in a sand lobe) and Bima Sand-9 (6.0 m, Sh of 58% in a sand lobe). The Ye-1 well intersected only 0.5 m of hydrocarbon in Bima Sand-1, with 78% saturation. Although the Bima Formation has variable hydrocarbon saturation, mainly gas, in the Maiduguri and Baga/Lake sub-basins of the research area, its very thin, serrated sand beds, coupled with very low effective porosity and permeability in part, would pose a significant exploitation challenge. The sediments were deposited in a fluvio-lacustrine environment, resulting in a very thinly laminated or serrated alternation of sand and shale lithofacies.
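As a rough illustration of this style of log evaluation, standard shale-volume and Archie-type saturation relations can be applied to a single hypothetical log reading; the gamma-ray endpoints, porosity, resistivities, and Archie constants below are assumptions, not the authors' parameters:

```python
# Generic single-point log evaluation -- gamma-ray endpoints, porosity,
# resistivities, and Archie constants are all assumed for illustration.
gr, gr_clean, gr_shale = 95.0, 20.0, 140.0  # gamma-ray readings (API units)

v_sh = (gr - gr_clean) / (gr_shale - gr_clean)  # linear shale-volume index
phi_t = 0.25                                    # total porosity
phi_e = phi_t * (1.0 - v_sh)                    # shale-corrected effective porosity

# Archie water saturation with a=1, m=n=2, then hydrocarbon saturation
rw, rt = 0.05, 20.0                             # water / true resistivity (ohm-m)
s_w = ((rw / rt) / phi_e**2) ** 0.5
s_h = 1.0 - s_w
print(round(v_sh, 2), round(phi_e, 3), round(s_h, 2))
```

With these assumed inputs the computed shale volume, effective porosity, and Sh fall inside the ranges reported for the formation, which is why such shale corrections matter in a serrated sequence like the Bima.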

Keywords: Bima, Chad Basin, fluvio-lacustrine, lithofacies, serrated sand

Procedia PDF Downloads 154
278 Statistical Correlation between Ply Mechanical Properties of Composite and Its Effect on Structure Reliability

Authors: S. Zhang, L. Zhang, X. Chen

Abstract:

Due to the large uncertainty in the mechanical properties of FRP (fibre reinforced plastic), the reliability evaluation of FRP structures is currently receiving much attention in industry. However, possible statistical correlation between ply mechanical properties has so far been overlooked, and the properties are mostly assumed to be independent random variables. In this study, the statistical correlation between the ply mechanical properties of uni-directional and plain weave composites is first analyzed by a combination of Monte Carlo simulation and finite element modeling of the FRP unit cell. Large linear correlation coefficients between the in-plane mechanical properties are observed, and the correlation coefficients depend heavily on the uncertainty of the fibre volume ratio. It is also observed that the correlation coefficients related to Poisson's ratio are negative while the others are positive. To obtain the statistical correlation coefficients between the in-plane mechanical properties of FRP experimentally, all the in-plane mechanical properties of the same specimen need to be known. The in-plane shear modulus of FRP is derived experimentally by the approach suggested in the ASTM standard D5379M. Tensile tests are conducted using the same specimens used for the shear test, and because of the non-uniform tensile deformation, a modification factor is derived by finite element modeling. Digital image correlation is adopted to characterize the specimen's non-uniform deformation. The preliminary experimental results show good agreement with the numerical analysis of the statistical correlation. Then, the failure probability of laminate plates is calculated for cases with and without the statistical correlation, using the Monte Carlo and Markov Chain Monte Carlo methods, respectively. The results highlight the importance of accounting for the statistical correlation between ply mechanical properties to achieve an accurate failure probability for laminate plates. Furthermore, it is found that for multi-layer laminate plates, the statistical correlation between the ply elastic properties significantly affects laminate reliability, while the effect of statistical correlation between the ply strengths is minimal.
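A minimal sketch of the Monte Carlo step, assuming two correlated ply strengths sampled via a Cholesky factor, shows how the correlation coefficient changes the computed failure probability; the means, scatter, correlation, and limit state below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Invented ply-strength statistics: means, standard deviations, and a large
# positive correlation, loosely mimicking the behaviour reported above.
mean = np.array([1200.0, 50.0])  # e.g. longitudinal / shear strength (MPa)
std = np.array([120.0, 5.0])
rho = 0.8
cov = np.array([[std[0]**2, rho * std[0] * std[1]],
                [rho * std[0] * std[1], std[1]**2]])

# Correlated Gaussian sampling via the Cholesky factor of the covariance
L = np.linalg.cholesky(cov)
corr_samples = mean + rng.standard_normal((n, 2)) @ L.T
indep_samples = mean + rng.standard_normal((n, 2)) * std

# Series limit state: the ply fails if either applied stress exceeds strength
stress = np.array([950.0, 40.0])
pf_corr = np.mean((corr_samples < stress).any(axis=1))
pf_indep = np.mean((indep_samples < stress).any(axis=1))
print(pf_corr, pf_indep)  # positive correlation lowers this failure probability
```

The gap between the two estimates is the effect the abstract attributes to ignoring the ply-property correlation.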

Keywords: failure probability, FRP, reliability, statistical correlation

Procedia PDF Downloads 152
277 Extraction, Recovery and Bioactivities of Chlorogenic Acid from Unripe Green Coffee Cherry Waste of Coffee Processing Industry

Authors: Akkasit Jongjareonrak, Supansa Namchaiya

Abstract:

Unripe green coffee cherry (UGCC) accounts for about 5% of the total raw material weight received by the coffee bean production process and is, in general, sorted out and dumped as waste. UGCC is known to be rich in phenolic compounds such as caffeoylquinic acids, feruloylquinic acids, chlorogenic acid (CGA), etc. CGA is one of the potent bioactive compounds used in the nutraceutical and functional food industry. Therefore, this study aimed at optimizing the extraction conditions for CGA from UGCC using an Accelerated Solvent Extractor (ASE). Ethanol/water mixtures at various ethanol concentrations (50, 60 and 70% (v/v)) were used as extraction solvents at elevated pressure (10.34 MPa) and temperatures (90, 120 and 150 °C). The recovery yield of UGCC crude extract, total phenolic content, CGA content, and some bioactivities of the UGCC extract were investigated. Using ASE at a lower temperature with a higher ethanol concentration gave a higher CGA content in the UGCC crude extract. The maximum CGA content was observed at 70% ethanol and 90 °C. Further purification of the UGCC crude extract gave a higher purity of CGA, with a purified CGA yield of 4.28% (w/w of dried UGCC sample) containing 72.52% CGA equivalent. The antioxidant and antimicrobial activities of the purified CGA extract were determined. The purified CGA exhibited 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity of 0.88 mg Trolox equivalent/mg purified CGA sample. Antibacterial activity against Escherichia coli was observed, with a minimum inhibitory concentration (MIC) of 3.12 mg/ml and a minimum bactericidal concentration (MBC) of 12.5 mg/ml. These results suggest that a high ethanol concentration and low temperature under the elevated pressure of the ASE conditions accelerate the extraction of CGA from UGCC. The purified CGA extract could be a promising alternative source of bioactive compounds for the nutraceutical and functional food industry.

Keywords: bioactive, chlorogenic acid, coffee, extraction

Procedia PDF Downloads 250
276 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied with one of two mainstream approaches: the multiplier approach or the value chain approach. Consequently, the comparability of empirical results is limited because different methodological approaches are used in the literature, and it is often unclear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by these two most frequent methodological approaches to valuing the economic effects of cultural heritage at the macroeconomic level. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis; the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of its contribution that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact, since there is a risk that direct effects are overestimated and double counted while not all indirect and induced effects are considered. Accordingly, the two approaches are not substitutes but rather complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be made, given the different scope. To illustrate the difference, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimate obtained with the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the 'narrow' heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
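A worked example using the multipliers quoted in the abstract (0.14 indirect and 0.44 induced euros per direct euro, and roughly 6 euros of linked sales per heritage euro); the direct turnover figure is hypothetical:

```python
# Worked example with the multipliers quoted above; the direct turnover
# figure (1 million EUR) is hypothetical.
direct = 1_000_000.0

# Multiplier approach: 0.14 EUR indirect and 0.44 EUR induced per direct EUR
indirect = direct * 0.14
induced = direct * 0.44
total_multiplier = direct + indirect + induced

# Value chain approach: ~6 EUR of sales in other sectors per heritage EUR
linked_sales = direct * 6

print(total_multiplier, linked_sales)
```

The arithmetic makes the scope difference concrete: the same direct activity implies about 1.58 million EUR of total impact under the multiplier approach but some 6 million EUR of linked sales under the value chain view.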

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 65
275 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach accounting for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that the DNA amplification efficiency can be improved by ∼2% in one PCR cycle by doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
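The surrogate-based loop (DoE sampling, spline response surface, evolutionary search) can be sketched as follows; the analytic "pressure drop" function stands in for the expensive COMSOL CFD runs and is purely an assumption, as are the design bounds:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Stand-in for the expensive COMSOL runs: an analytic "pressure drop" over
# channel width/height (um). Purely an assumption for this sketch.
def cfd_objective(x):
    w, h = x[..., 0], x[..., 1]
    return 1.0 / (w * h) + 1e-4 * (w - 300.0) ** 2

doe = rng.uniform([100.0, 30.0], [400.0, 80.0], size=(25, 2))  # DoE points
surrogate = RBFInterpolator(doe, cfd_objective(doe), kernel='thin_plate_spline')

# Search the cheap surrogate with an evolutionary (GA-like) optimiser
result = differential_evolution(lambda x: surrogate(x[None, :])[0],
                                bounds=[(100.0, 400.0), (30.0, 80.0)], seed=1)
print(result.x)  # candidate optimum in design space
```

The point of the pattern is that the optimiser only ever queries the cheap response surface, so the number of full CFD solves stays at the 25 DoE evaluations.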

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 147
274 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
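A minimal sketch of the Random Forest step on synthetic activity-level data; the features and the overrun-generating rule are invented, not the case-study dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic activity-level records; the features and the overrun-generating
# rule are invented, not the case-study dataset.
scope_changes = rng.poisson(2, n)        # number of scope changes
material_delay = rng.exponential(5, n)   # material delivery delay (days)
planned_cost = rng.uniform(1e4, 1e6, n)  # planned activity cost (no effect here)

overrun_pct = 3 * scope_changes + 1.2 * material_delay + rng.normal(0, 2, n)

X = np.column_stack([scope_changes, material_delay, planned_cost])
X_tr, X_te, y_tr, y_te = train_test_split(X, overrun_pct, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))     # R^2 on held-out activities
print(model.feature_importances_)  # scope changes and delays should dominate
```

The feature-importance vector is the mechanism by which the study identifies cost drivers such as scope changes and material delays.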

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 26
273 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 23
272 An Evaluation of Discontinuities in Rock Mass Using Coupled Hydromechanical Finite Element and Discrete Element Analyses

Authors: Mohammad Moridzadeh, Aaron Gallant

Abstract:

The paper presents the design and construction of the underground excavations of a pump station forebay and its related components, including connector tunnels, an access shaft, a riser shaft, and well shafts. The underground openings include an 8-m-diameter riser shaft, an 8-m-diameter access shaft, 34 2.4-m-diameter well shafts, a 107-m-long forebay with a cross section 11 m high and 10 m wide, and a 6 m by 6 m stub connector tunnel between the access shaft and a future forebay extension. The riser shaft extends down from the existing forebay connector tunnel at elevation 247 m to the crown of the forebay at elevation 770.0 feet. The access shaft will extend from the platform at the surface down to El. 223.5 m. The pump station will have the capacity to deliver 600 million gallons per day. The project is located on an uplifted horst consisting of a mass of Precambrian metamorphic rock trending in a north-south direction. The eastern slope of the area is very steep and pronounced and is likely the result of high-angle normal faulting. Toward the west, the area is bordered by a high-angle normal fault and recent alluvial, lacustrine, and colluvial deposits. An evaluation of rock mass properties, faults and discontinuities, foliation and joints, and in situ stresses was performed. The response of the rock mass was evaluated in 3DEC using the Discrete Element Method (DEM) by explicitly accounting for both major and minor discontinuities within the rock mass (i.e., joints, shear zones, faults). Moreover, the stability of the entire subsurface structure, including the forebay, access and riser shafts, future forebay, well shafts, and connecting tunnels, and their interactions with each other were evaluated using a 3D coupled hydromechanical Finite Element Analysis (FEA).

Keywords: coupled hydromechanical analysis, discontinuities, discrete element, finite element, pump station

Procedia PDF Downloads 253
271 The Correlation between Emotional Intelligence and Locus of Control: Empirical Study on Lithuanian Youth

Authors: Dalia Antiniene, Rosita Lekaviciene

Abstract:

The quantitative study is designed to reveal a connection between emotional intelligence (EI) and locus of control (LC) within the population of Lithuanian youth. In the context of emotional problems, the locus of control reflects how one estimates the causes of his/her emotions: internals (internal locus of control) associate their emotions with their manner of thinking, whereas externals (external locus of control) consider emotions to be evoked by external circumstances. There is little empirical data about this connection, and the available results are often contradictory. In the study, 1430 young people, aged 17 to 27, from various regions of Lithuania were surveyed. The subjects were selected by quota sampling, maintaining the natural proportions of the general Lithuanian youth population. To assess emotional intelligence, the EI-DARL test (a self-report questionnaire consisting of 75 items) was implemented. The emotional intelligence test, created using exploratory factor analysis, reveals four main dimensions of EI: understanding of one's own emotions, regulation of one's own emotions, understanding of others' emotions, and regulation of others' emotions (subscale reliability coefficients range between 0.84 and 0.91). An original 16-item internality/externality scale was used to examine the locus of control (internal consistency of the Externality subscale: 0.75; Internality subscale: 0.65). The study determined that the youth understand and regulate other people's emotions better than their own. Using the K-means cluster analysis method, it was established that there are three groups of subjects according to their EI level: people with low, medium, and high EI. After comparing the means of subjects' endorsement of statements on the Internality/Externality scale, a predominance of internal locus of control in the young population was established.
The multiple regression models showed that a rather strong, statistically significant correlation exists between total EI, the EI subscales, and LC. People who tend to attribute responsibility for the outcomes of their actions to their own abilities and efforts have higher EI; conversely, the tendency to attribute responsibility to external forces is related to lower EI. While pursuing their goals, young people with high internality have a predisposition to analyze perceived emotions and thereby gain emotional experience: they learn to control their natural reactions and to act adequately in the situation at hand. Thus the study reveals that a person's locus of control and emotional intelligence are related phenomena, and allows us to conclude that a person's internality/externality is a reliable predictor of total EI and its components.
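
The K-means grouping step mentioned above can be illustrated with a minimal one-dimensional sketch. This is a generic implementation, not the authors' code; the scores and the three-group (low/medium/high) split are assumptions mirroring the abstract's description.

```python
def kmeans_1d(scores, k=3, iters=100):
    """Group 1-D scores into k clusters (e.g. low/medium/high EI) by nearest centroid."""
    srt = sorted(scores)
    # deterministic start: spread initial centroids across the observed range
    centroids = [float(srt[(len(srt) - 1) * i // (k - 1)]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            groups[min(range(k), key=lambda i: abs(s - centroids[i]))].append(s)
        new = sorted(sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups))
        if new == centroids:   # converged
            break
        centroids = new
    labels = [min(range(k), key=lambda i: abs(s - centroids[i])) for s in scores]
    return centroids, labels
```

On well-separated hypothetical EI totals such as [10, 11, 12, 50, 51, 52, 90, 91, 92], this yields centroids near 11, 51, and 91, i.e. three EI-level groups.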

Keywords: emotional intelligence, externality, internality, locus of control

Procedia PDF Downloads 211
270 Cancer Survivor’s Adherence to Healthy Lifestyle Behaviours; Meeting the World Cancer Research Fund/American Institute of Cancer Research Recommendations, a Systematic Review and Meta-Analysis

Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener

Abstract:

Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity, and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors' adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science, and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors' adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random-effects model. Subgroup analysis of adherence to multiple behaviours was undertaken according to mean survival years and year of publication. Results: A total of 3322 articles were retrieved by our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adherent to multiple healthy behaviours (all used a simple summative index method), whereby the prevalence of adherence ranged from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%).
Subgroup analysis suggests that short-term survivors (< 5 years survival time) showed relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term (> 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling estimates by year of publication (since 2007) also suggests an increasing trend of adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is a greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with healthy lifestyle recommendations related to physical activity and to fruit and vegetable, fiber, red/processed meat, and sodium intake.
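
The random-effects pooling named above can be sketched with a DerSimonian-Laird estimator on raw proportions. This is a generic illustration, not the review's actual computation: real prevalence meta-analyses usually work on a transformed scale (logit or double-arcsine) to handle proportions near 0 or 1, and the event counts below are invented for the example.

```python
import math

def pooled_prevalence_dl(events, totals):
    """DerSimonian-Laird random-effects pooled proportion (raw scale, for illustration)."""
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]        # within-study variances
    w = [1.0 / vi for vi in v]                                 # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)                    # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]                   # random-effects weights
    p_re = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)          # estimate, 95% CI
```

For three hypothetical studies with 30/100, 40/100, and 20/100 adherent survivors, the pooled prevalence comes out near 30% with a 95% CI widened by the between-study variance.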

Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR

Procedia PDF Downloads 173
269 Boko Haram Insurrection and Religious Revolt in Nigeria: An Impact Assessment-{2009-2015}

Authors: Edwin Dankano

Abstract:

Incessant and sporadic attacks on Nigerians pose a serious threat to the unity of Nigeria. The single biggest security nightmare to confront Nigeria since the amalgamation of the Southern and Northern protectorates by the British colonialists in 1914 is "Boko Haram", a terrorist organization also known as "Jama'atul Ahli Sunnah Lidda'wati wal Jihad", or "people committed to the propagation of the Prophet's teachings and jihad". The sect also upholds an ideology translated as "Western education is forbidden", a rejection of Western civilization and institutions. By some estimates, more than 5,500 people were killed in Boko Haram attacks in 2014, and Boko Haram attacks claimed hundreds of lives and territories (caliphates) in early 2015. In total, the group may have killed more than 10,000 people since its emergence in the early 2000s. More than 1 million Nigerians have been displaced internally by the violence, and Nigerian refugee figures in neighboring countries continue to rise. This paper is predicated on secondary sources of data and anchored in Huntington's theory of the clash of civilizations. As such, the paper argues that the rise of Boko Haram, with its violent disposition against Western values, is a counter-response to a Western civilization that is fast eclipsing other civilizations. The paper posits that the Boko Haram insurrection, going by its teachings and its destruction of churches, validates the characterization of the sect as a religious revolt, one which has resulted in a dire humanitarian situation in Adamawa, Borno, Yobe, Bauchi, and Gombe states, all in north-eastern Nigeria, as evident in human casualties, human rights abuses, population displacement, the refugee debacle, livelihood crises, and public insecurity.
The paper submits that the Nigerian state should muster the needed political will in terms of viable anti-terrorism measures, build strong legitimate institutions that can adequately curb the menace of corruption that has engulfed the military hierarchy, respond proactively to the challenge of terrorism in Nigeria, and embrace a strategic paradigm shift from anti-terrorism to counter-terrorism as a strategy for containing the crisis that today threatens the secular status of Nigeria.

Keywords: Boko Haram, civilization, fundamentalism, Islam, religion revolt, terror

Procedia PDF Downloads 385
268 Implications of Stakeholder Theory as a Critical Theory

Authors: Louis Hickman

Abstract:

Stakeholder theory is a powerful conception of the firm based on the notion that a primary focus on shareholders is inadequate and, in fact, detrimental to the long-term health of the firm. As such, it represents a departure from prevalent business school teachings, with their focus on accounting and cost controls. Herein, it is argued that stakeholder theory is better conceptualized as a critical theory: one which represents a fundamental change in business behavior and can transform the behavior of businesses if accepted. By arguing that financial interests underdetermine the success of the firm, stakeholder theory further democratizes business by endorsing an increased awareness of the importance of non-shareholder stakeholders. Stakeholder theory requires new, non-financial measures of success that provide a new consciousness for management and businesses when conceiving their actions and place in society. Thereby, stakeholder theory can show individuals, through self-reflection, that the capitalist impulse to generate wealth cannot act as the primary driver of business behavior, but rather that we would choose to support interests outside ourselves if we made the decision in free discussion. This is due to the false consciousness embedded in our capitalism: that the firm's finances are the foremost concern of modern organizations, at the expense of other goals. A focus on non-shareholder stakeholders in addition to shareholders generates greater benefits for society by improving the state of customers, employees, suppliers, the community, and shareholders alike. These positive effects generate further gains in stakeholder well-being and translate into increased health for the future firm.
Additionally, shareholders are the only stakeholder group that does not itself provide long-term firm value: there are not always communities with qualified employees, suppliers capable of providing the required product quality, or persons with purchasing power for all conceivable products. Therefore, the firm's long-term health is best served by improving as much as possible of the society it inhabits, rather than serving shareholders alone.

Keywords: capitalism, critical theory, self-reflection, stakeholder theory

Procedia PDF Downloads 330
267 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. A mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson's ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (best roughness at cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
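
Taguchi optimization for a minimized response such as surface roughness typically ranks factor levels by a smaller-the-better signal-to-noise ratio. The sketch below is a generic illustration of that ranking step, not the authors' analysis; the run table and roughness values are invented.

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi smaller-the-better S/N ratio in dB: -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

def level_means(runs, factor_col, sn_values):
    """Mean S/N per level of one factor across the runs of an orthogonal array."""
    by_level = {}
    for run, sn in zip(runs, sn_values):
        by_level.setdefault(run[factor_col], []).append(sn)
    return {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
```

For each factor (feed rate, spindle speed, depth of cut), the level with the highest mean S/N is taken as optimal for minimizing roughness; ANOVA on the same S/N values then apportions each factor's contribution.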

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 144
266 Improving Food Security and Commercial Development through Promotion of High Value Medicinal and Industrial Plants in the Swat Valley of Pakistan

Authors: Hassan Sher

Abstract:

Agriculture has a pivotal role in Pakistan's economy, accounting for about one-fourth of GDP and employing almost half the population. However, the competitiveness, productivity, growth, employment potential, export opportunity, and GDP contribution of the sector are significantly hampered by agricultural marketing laws/regulations at the provincial level that reward rent-seeking behavior, promote monopoly power, artificially reduce farmer incomes while inflating prices to consumers, and act as disincentives to investment. Although of more recent vintage than some other provincial agricultural marketing laws, the NWFP Agricultural and Livestock Produce Markets Act, 2007 is a throwback to a colonial paradigm, where restrictions on agricultural produce marketing and government control of distribution channels are the norm. The Swat Valley (in which we include its tributary valleys) is an area of Pakistan in which poverty is both extreme and pervasive. For many, a significant portion of the family's income comes from selling plants that are used as herbs, medicines, and perfumes. Earlier studies have shown that the benefit they derive from this work is less than it might be because of: lack of knowledge concerning which plants and which plant parts are valuable; lack of knowledge concerning optimal preservation and storage of material; and illiteracy. Another concern is that much of the plant material sold from the valley is collected in the wild, without an appreciation of the negative impact continued collecting has on wild populations. We propose: creating colored cards to help inhabitants recognize the 25 most valuable plants in their area; developing and sharing protocols for growing the 25 most valuable plants in a home garden; developing and sharing efficient mechanisms for drying plants so they do not lose value; and encouraging increased literacy by incorporating numbers and a few words in the handouts.

Keywords: food security, medicinal plants, industrial plants, economic development

Procedia PDF Downloads 318
265 Other Cancers in Patients With Head and Neck Cancer

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether there was an increased risk of smoking- and alcohol-related cancers, as well as other cancers, in HNC patients, and whether there is a difference between the rates of non-HNC primaries in Aboriginal compared with non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single center in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, there were 86 patients (26.9%) with a metachronous or synchronous non-HNC primary. Non-HNC primaries were significantly more common in the non-Aboriginal population than in the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a similar rate to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal patients than in Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p=NS). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%), and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age-Standardised Incidence Rates (ASIR) in the general community.
Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), far in excess of the ASIR, which implies a lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear to be significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, in line with the ASIR-implied lifetime risk of 5% (by age 85). The rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma, against an Australian-average lifetime risk of 0.6% in the general population; as these are small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared to an ASIR of 17 cases per 100,000 persons, implying a lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers.

Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal

Procedia PDF Downloads 58
264 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real-dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz, and Generalized gamma distributions were considered. A simulation study was carried out with two different algorithms, using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy at Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival time than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after three years of antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type, and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than those from the classical model.
Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patients' survival.
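
The Kaplan-Meier estimate used above can be illustrated with a minimal pure-Python product-limit estimator. This is a generic sketch on invented toy data, not the study's analysis.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate; events[i] = 1 for death, 0 for censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []                      # (time, S(t)) at each distinct death time
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_time = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_time += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_time        # deaths and censorings both leave the risk set
    return curve

def median_survival(curve):
    """First time at which estimated survival drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None                     # median not reached during follow-up
```

For five hypothetical patients with times [2, 3, 4, 5, 6] months and event indicators [1, 1, 0, 1, 1], the survival curve steps down at months 2, 3, 5, and 6, and the median survival is 5 months.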

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 176
263 Analyzing the Impact of Board Diversity on Firm Performance: Case Study of the Nigerian Banking Sector

Authors: Data Collete Bob-Manuel

Abstract:

In light of the global financial crisis of 2007-2008, various factors including board diversity, succession planning, and board evaluation have been identified as essential ingredients in ensuring board effectiveness. The composition and structure of the board are of utmost importance in assessing a board's ability and success in achieving its objectives. Following corporate frauds and accounting scandals such as Enron, WorldCom, Parmalat, Oceanic Bank Nigeria, and AfriBank Nigeria, there has been a notable amount of research on the effectiveness of the board of directors in the corporate governance of firms. The need for an effective board cannot be overemphasized, as it results in a more stable and thriving company. There has been an overarching need in the business world for a more diverse workforce and board of directors. Big corporations like Texaco, Ford Motors, and DuPont have cited diversity at every level of the workforce, including the board of directors, as a vital element for a company to succeed. Developed countries are also pressing companies to have more diverse boards; for instance, Norway has implemented a 60:40 board gender ratio for all companies. In West Africa, particularly Nigeria, the topic of diversity has received little attention, as most studies conducted have focused on the gender aspect of diversity, which has been found to have a negative impact on firm performance. This paper seeks to examine four variables of diversity (age, ethnicity, gender, and skills) and weigh the positive or negative impact each variable has on firm performance, based on evidence from the Nigerian financial sector. Information for this study will be gathered from financial statements and annual reports, enabling the researcher to reflect on past years and identify what is being done differently today.
The findings of this study will also help the researcher develop a working definition of ethnicity in the West African context, where the issue of "tribe" is a sensitive topic.

Keywords: board of directors, board diversity, firm performance, Nigeria

Procedia PDF Downloads 369
262 Mother-Child Conversations about Emotions and Socio-Emotional Education in Children with Autism Spectrum Disorder

Authors: Beaudoin Marie-Joelle, Poirier Nathalie

Abstract:

Introduction: Children with autism spectrum disorder (ASD) tend to lack socio-emotional skills (e.g., emotional regulation and theory of mind). Eisenberg's theoretical model of emotion-related socialization behaviors suggests that mothers of children with ASD could play a central role in fostering the acquisition of socio-emotional skills by engaging in frequent educational conversations about emotions. However, mothers' perceptions of their own emotional skills and of their child's personality traits and social deficits could mitigate the benefit of their educative role. Objective: Our study aims to explore the association between mother-child conversations about emotions and the socio-emotional skills of children with ASD when accounting for the moderating role of the mothers' perceptions. Forty-nine mothers completed five questionnaires about emotion-related conversations, self-openness to emotions, and perceptions of the personality and socio-emotional skills of their children with ASD. Results: Regression analyses showed that frequent mother-child conversations about emotions predicted better emotional regulation and theory-of-mind skills in children with ASD (p < 0.01). The children's theory of mind was moderated by mothers' perceptions of their own emotional openness (p < 0.05) and their perceptions of their children's openness to experience (p < 0.01) and conscientiousness (p < 0.05). Conclusion: Mothers likely play an important role in the socio-emotional education of children with ASD. Further, mothers may be most helpful when they perceive that their interventions improve their child's behaviors. Our findings corroborate the Eisenberg model, which holds that mother-child conversations about emotions predict socio-emotional development in children with ASD. Our results also help clarify the moderating role of mothers' perceptions, which could affect their willingness to engage in educational conversations about emotions with their children.
Therefore, in the education of children with special needs, school professionals could collaborate with mothers to increase the frequency of emotion-related conversations with students with ASD who show emotion dysregulation or theory-of-mind problems.

Keywords: autism, parental socialization of emotion, emotional regulation, theory of mind

Procedia PDF Downloads 72
261 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor product quality and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, robust estimators are required that resist the contamination which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and M-estimators of location with Huber and logistic psi-functions in the estimation of the process location parameter.
The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contamination, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
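
Two of the robust estimators named above, the Hodges-Lehmann location and the average distance to the median, are simple enough to sketch. This is a generic illustration of how they could feed Xbar-type limits, not the authors' MATLAB implementation; the unbiasing constants (d2/c4 analogues) needed for properly calibrated limits are deliberately omitted, and the subgroup data are invented.

```python
import math
import statistics

def hodges_lehmann(x):
    """Robust location: median of all pairwise (Walsh) averages."""
    walsh = [(x[i] + x[j]) / 2 for i in range(len(x)) for j in range(i, len(x))]
    return statistics.median(walsh)

def avg_distance_to_median(x):
    """Robust scale: mean absolute distance to the sample median."""
    m = statistics.median(x)
    return sum(abs(v - m) for v in x) / len(x)

def xbar_limits(subgroups, location, scale, k=3.0):
    """Center line and +/- k-sigma-style limits from per-subgroup robust estimates.
    Note: consistency/unbiasing constants are omitted for brevity."""
    centers = [location(g) for g in subgroups]
    scales = [scale(g) for g in subgroups]
    cl = sum(centers) / len(centers)
    sbar = sum(scales) / len(scales)
    half = k * sbar / math.sqrt(len(subgroups[0]))
    return cl - half, cl, cl + half
```

On a subgroup such as [10, 10, 10, 10, 100], the Hodges-Lehmann estimate stays at 10 while the sample mean is pulled to 28, which is exactly the outlier resistance the abstract exploits in Phase I.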

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 178
260 Strategic Public Procurement: A Lever for Social Entrepreneurship and Innovation

Authors: B. Orser, A. Riding, Y. Li

Abstract:

To inform government about how gender gaps in SME (small and medium-sized enterprise) contracting might be redressed, the research question was: what are the key obstacles to, and response strategies for, increasing the engagement of women business owners among SME suppliers to the Government of Canada? Thirty-five interviews were conducted with senior policymakers, supplier diversity organization executives, and expert witnesses to the Canadian House of Commons Standing Committee on Government Operations and Estimates. The qualitative data were analysed using NVivo 11 software. High-order response categories included: (a) SME risk mitigation strategies, (b) SME procurement program design, and (c) performance measures. The primary obstacles cited were government red tape and long, complicated requests for proposals (RFPs). The majority of 'common' complaints occur when SMEs have questions about the federal procurement process. Witness responses included the use of outcome-based rather than prescriptive procurement practices, more agile procurement, simplified RFPs, and making payment within 30 days a procurement priority. Risk mitigation strategies included the provision of procurement officers to assess risks and opportunities for businesses and the development of more agile procurement procedures and processes. Recommendations to enhance program design included: improved definitional consistency of qualifiers and selection criteria; better co-ordination across agencies; clarification about how SME suppliers benefit from federal contracting; goal setting; specification of categories that are most suitable for women-owned businesses; and increasing primary contractors' awareness of the importance of subcontract relationships. Recommendations also included third-party certification of eligible firms and the need to enhance SMEs' financial literacy to reduce financial errors.
Finally, there remains a need for clear and consistent pre-program statistics to establish baseline performance measures (by sector and issuing department), targets based on the percentage of contracts granted, contract value, the percentage of target employees (women, indigenous), and community benefits, including the hiring of local employees. The study advances strategies to enhance federal procurement programs so as to facilitate socio-economic policy objectives.

Keywords: procurement, small business, policy, women

Procedia PDF Downloads 103
259 Developing Scaffolds for Tissue Regeneration using Low Temperature Plasma (LTP)

Authors: Komal Vig

Abstract:

Cardiovascular disease (CVD)-related deaths occur in 17.3 million people globally each year, accounting for 30% of all deaths worldwide, and annual deaths are predicted to reach 23.3 million globally by 2030. Autologous bypass grafts remain an important therapeutic option for the treatment of CVD, but the poor quality of the donor patient’s blood vessels, the invasiveness of the resection surgery, and postoperative movement restrictions create issues. The present study aims to improve the endothelialization of the intimal surface of the graft by using low temperature plasma (LTP) to increase cell attachment and proliferation. Polytetrafluoroethylene (PTFE) was treated with LTP. Air was used as the feed gas, and the pressure in the plasma chamber was kept at 800 mTorr. Scaffolds were also modified with gelatin and collagen by the dipping method. Human umbilical vein endothelial cells (HUVECs) were plated on the developed scaffolds, and cell proliferation was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay and by microscopy. mRNA expression levels of different cell markers were investigated using quantitative real-time PCR (qPCR). XPS confirmed the introduction of oxygenated functionalities by LTP. HUVECs showed 80% seeding efficiency on the scaffold. Microscopy and MTT assays indicated increased cell viability on LTP-treated scaffolds, especially those further modified with gelatin or collagen, compared to untreated scaffolds. Gene expression studies showed enhanced expression of the cell adhesion marker integrin-α5 gene after LTP treatment. LTP-treated scaffolds exhibited better cell proliferation and viability than untreated scaffolds, and protein treatment of the scaffolds further increased cell proliferation. Based on these initial results, more scaffold alternatives will be developed and investigated for cell growth and vascularization studies.
Acknowledgments: This work is supported by the NSF EPSCoR RII-Track-1 Cooperative Agreement OIA-2148653.

Keywords: LTP, HUVEC cells, vascular graft, endothelialization

Procedia PDF Downloads 59
258 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated against 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), the suggested modeling approach is an accurate technique for quantifying sediment sources in catchments. Overall, the estimated source proportions can help watershed engineers plan targeted conservation programs for soil and water resources.
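The GLUE procedure described above can be sketched as a Monte Carlo unmixing loop: candidate source proportions are drawn, "behavioural" parameter sets are retained by a likelihood threshold, and contribution ranges and likelihood-weighted means are summarized. The tracer values, threshold, and goodness-of-fit measure below are hypothetical illustrations, not the study's data or exact likelihood function.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean tracer concentrations for three sub-basin sources
# (rows: western, central, eastern; columns: tracers). Illustrative only.
source_means = np.array([
    [12.0, 3.5, 40.0],
    [ 8.0, 5.0, 55.0],
    [20.0, 2.0, 30.0],
])
# A target sediment sample's tracer concentrations (hypothetical)
target = np.array([17.2, 2.6, 34.5])

n_sims = 100_000
# Sample candidate source proportions uniformly on the simplex
props = rng.dirichlet(np.ones(3), size=n_sims)

# Predicted mixture concentrations for each candidate proportion set
predicted = props @ source_means

# Goodness of fit: 1 minus the mean relative absolute error
gof = 1.0 - np.mean(np.abs(predicted - target) / target, axis=1)

# GLUE: retain "behavioural" parameter sets above a likelihood threshold
behavioural = gof > 0.9
b_props, b_gof = props[behavioural], gof[behavioural]

# Likelihood-weighted mean contribution and range per source
weights = b_gof / b_gof.sum()
mean_contrib = weights @ b_props
lo, hi = b_props.min(axis=0), b_props.max(axis=0)
for name, m, l, h in zip(["western", "central", "eastern"], mean_contrib, lo, hi):
    print(f"{name}: mean {m:.0%}, range {l:.0%}-{h:.0%}")
```

In the study itself, the retained parameter sets would additionally be scored against the virtual mixture samples with known proportions using RMSE and MAE; the threshold and likelihood measure are modeling choices within the GLUE framework.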

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 64
257 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe that does not invoke space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling through space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also scatters some of the photons off their track toward an observer and, therefore, attenuates beam intensity. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually without altering their momenta drastically, as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along its direction of motion and a redshifted one in the opposite direction. The observed CMB dipole illustrates this effect: the earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the earth's direction of motion appears 3.35 mK hotter than the average temperature of 2.725 K, while radiation on the opposite side of the sky is 3.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth, producing a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on its two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, a photon traveling from a distant object toward an observer is assumed to have an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the standard attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance modulus (µ) vs. redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
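The photon-gas quantities quoted in the abstract (number density of about 410 cm⁻³, mean photon energy of about 6.3×10⁻⁴ eV, and the pressure Pγ = αT⁴/3) all follow from standard blackbody formulas. A quick numerical check, using only CODATA constant values:

```python
import math

sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.99792458e8         # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
eV = 1.602176634e-19     # joules per electronvolt
T = 2.725                # CMB temperature, K

a_rad = 4 * sigma / c            # radiation constant, J m^-3 K^-4
E_gamma = a_rad * T**4           # energy density of the CMB photon gas, J/m^3
P_gamma = E_gamma / 3            # photon gas pressure, Pa

# Blackbody photon number density: n = (2 zeta(3)/pi^2) (k_B T / (hbar c))^3
zeta3 = 1.2020569
n = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3   # photons per m^3

mean_E = E_gamma / n             # average photon energy, J

print(f"photon density  ≈ {n / 1e6:.0f} per cm^3")      # ~410 per cm^3
print(f"mean energy     ≈ {mean_E / eV:.1e} eV")        # ~6.3e-4 eV
print(f"photon pressure ≈ {P_gamma:.2e} Pa")
```

The computed density and mean energy reproduce the figures quoted above, and the pressure comes out around 10⁻¹⁴ Pa, consistent with the drag on a macroscopic body like the earth being tiny.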

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 120
256 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signal while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, often due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior reduces the array output signal-to-interference-plus-noise ratio (SINR). It is therefore worth developing robust techniques to deal with mismatches caused by local scattering environments. As for implementation, the required computational complexity is enormous when the beamformer is equipped with a massive antenna array. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and faster adaptive response has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created.
Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. The proposed GSC-based beamformer effectively mitigates the performance degradation caused by the considered local scattering environments. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
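The GSC decomposition the abstract builds on (a quiescent weight vector for the presumed look direction, a blocking matrix that removes the SOI, and an adaptive lower branch that minimizes output power) can be illustrated with a minimal numpy sketch for a uniform linear array. The array size, signal directions, and powers below are hypothetical, and the paper's iterative direction estimation and subspace projection are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 2000                       # sensors, snapshots

def steer(theta_deg, m=M):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

a = steer(0.0)                       # presumed SOI steering vector (broadside)
soi = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
jam = 10 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = np.outer(a, soi) + np.outer(steer(20.0), jam) + noise   # array snapshots

# Upper branch: quiescent weight, distortionless toward the presumed SOI
w_q = a / (a.conj() @ a)

# Signal blocking matrix: orthonormal basis of the null space of a^H,
# so the SOI is removed from the lower branch
_, _, Vh = np.linalg.svd(a.conj()[None, :])
B = Vh[1:].conj().T                  # M x (M-1)

# Lower branch: Wiener solution minimizing the output power
d = w_q.conj() @ x                   # upper-branch output (SOI + leakage)
z = B.conj().T @ x                   # blocked data (interference + noise only)
R_z = z @ z.conj().T / N
p = z @ d.conj() / N
w_a = np.linalg.solve(R_z, p)

y = d - w_a.conj() @ z               # GSC output: interference cancelled
print("output power before/after adaptation:",
      float(np.mean(np.abs(d)**2).round(2)),
      float(np.mean(np.abs(y)**2).round(2)))
```

With an exact steering vector the blocked branch contains no SOI, so minimizing the output power removes only the interferer; a steering mismatch would leak SOI into the blocked branch and cause the self-cancellation the paper sets out to avoid.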

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 102
255 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits

Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi

Abstract:

The New Zealand White is one of the most commonly used and well-adapted exotic rabbit breeds in India. Earlier studies were limited to analysing the environmental factors affecting growth and reproductive performance. In the present study, the genetic structure of a New Zealand White population in a closed herd was evaluated. Pedigree information (n=2508) spanning 18 years (1995-2012) was utilized. Pedigree analysis and estimates of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed mean values of generation interval, coefficient of inbreeding, and equivalent inbreeding of 1.489 years, 13.233 percent, and 17.585 percent, respectively. The proportion of the population inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06, and 3.98 when estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors contributing 50% of the genes (fₐ₅₀) to the gene pool of the reference population was 4, which might have reduced genetic variability and increased inbreeding. The extent of genetic bottleneck, assessed by the ratio of the effective number of founders (fₑ) to the effective number of ancestors (fₐ), was 1.1, indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15, with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good pedigree depth.
The realized effective population size was 14.93, which is critically low, and with the increasing trend of inbreeding, the situation is expected to worsen. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with higher GCI values can be preferentially used to maintain balanced contributions from the founders. From the study, it was evident that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious inbreeding effects and to improve genetic variability in the herd. The present study can guide similar studies aimed at meeting the demand for animal protein in developing countries.
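Two of the parameters reported above follow from simple formulas: the effective number of founders, fₑ = 1/Σqₖ² (where qₖ is the genetic contribution of founder k to the reference population), and the realized effective population size, Nₑ = 1/(2ΔF) (where ΔF is the individual rate of inbreeding per generation). A sketch with hypothetical founder contributions, and a ΔF chosen near the per-generation increases reported in the abstract:

```python
import numpy as np

# Hypothetical founder genetic contributions to the reference population
# (proportions q_k summing to 1; illustrative values, not the herd's data)
q = np.array([0.30, 0.25, 0.20, 0.10, 0.08, 0.05, 0.02])
assert abs(q.sum() - 1.0) < 1e-12

# Effective number of founders: f_e = 1 / sum(q_k^2); unequal contributions
# make f_e smaller than the actual founder count (here 7)
f_e = 1.0 / np.sum(q**2)

# Realized effective population size from the rate of inbreeding:
# N_e = 1 / (2 * dF); dF = 0.0335 is a hypothetical value, in the range of
# the per-generation increases quoted above (1.94-3.98%)
dF = 0.0335
N_e = 1.0 / (2.0 * dF)

print(f"f_e ≈ {f_e:.1f}, N_e ≈ {N_e:.1f}")
```

Unequal founder contributions shrink fₑ well below the raw founder count, which is why a small fₐ₅₀ (here, 4 ancestors explaining half the gene pool) signals lost variability; and an Nₑ near 15, as reported, implies inbreeding accumulating at over 3% per generation.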

Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics

Procedia PDF Downloads 283