Search results for: sieve extremum estimates
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 752

122 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
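
As an illustration of the kind of activity-level overrun modelling described in this abstract (not the authors' code or data), the sketch below trains a Random Forest regressor on hypothetical activity features and reports prediction error and feature importances; all column names and values are assumptions.

```python
# Minimal sketch (not the study's code): Random Forest for activity-level cost-overrun
# prediction. Feature names and the synthetic data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 5e5, n),
    "planned_duration_days": rng.integers(5, 120, n),
    "scope_changes": rng.poisson(1.5, n),
    "material_delay_days": rng.poisson(3.0, n),
    "crew_size": rng.integers(2, 30, n),
})
# Hypothetical target: overrun grows with scope changes and material delays.
df["cost_overrun"] = (0.005 * df["planned_cost"]
                      * (df["scope_changes"] + 0.5 * df["material_delay_days"])
                      + rng.normal(0, 5e3, n))

X, y = df.drop(columns="cost_overrun"), df["cost_overrun"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
# Feature importances highlight candidate cost drivers (e.g. scope changes, delays).
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>22s}: {imp:.3f}")
```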

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 30
121 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 26
120 The Correlation between Emotional Intelligence and Locus of Control: Empirical Study on Lithuanian Youth

Authors: Dalia Antiniene, Rosita Lekaviciene

Abstract:

The survey-based study is designed to reveal the connection between emotional intelligence (EI) and locus of control (LC) within the population of Lithuanian youth. In the context of emotional problems, the locus of control reflects how a person estimates the causes of his/her emotions: internals (internal locus of control) associate their emotions with their manner of thinking, whereas externals (external locus of control) consider emotions to be evoked by external circumstances. However, there is little empirical data about this connection, and the available results are often contradictory. In the study, 1430 young people aged 17 to 27 from various regions of Lithuania were surveyed. The subjects were selected by quota sampling, maintaining the natural proportions of the general Lithuanian youth population. To assess emotional intelligence, the EI-DARL test (a self-report questionnaire consisting of 75 items) was administered. The emotional intelligence test, created by applying exploratory factor analysis, reveals four main dimensions of EI: understanding of one’s own emotions, regulation of one’s own emotions, understanding of others’ emotions, and regulation of others’ emotions (subscale reliability coefficients range between 0.84 and 0.91). An original 16-item internality/externality scale was used to examine the locus of control (internal consistency of the Externality subscale: 0.75; Internality subscale: 0.65). The study determined that young people understand and regulate other people’s emotions better than their own. Using K-means cluster analysis, it was established that there are three groups of subjects according to their EI level: people with low, medium and high EI. After comparing the means of subjects’ favorability ratings of statements on the Internality/Externality scale, a predominance of internal locus of control in the young population was established. The multiple regression models showed that a rather strong, statistically significant relationship exists between total EI, the EI subscales and LC. People who tend to attribute responsibility for the outcomes of their actions to their own abilities and efforts have higher EI and, conversely, the tendency to attribute responsibility to external forces is associated with lower EI. While pursuing their goals, young people with high internality have a predisposition to analyze perceived emotions and, therefore, gain emotional experience: they learn to control their natural reactions and to act adequately in the situation at hand. Thus, the study reveals that a person’s locus of control and emotional intelligence are related phenomena and supports the conclusion that a person’s internality/externality is a reliable predictor of total EI and its components.

Keywords: emotional intelligence, externality, internality, locus of control

Procedia PDF Downloads 214
119 Cancer Survivors’ Adherence to Healthy Lifestyle Behaviours: Meeting the World Cancer Research Fund/American Institute for Cancer Research Recommendations, a Systematic Review and Meta-Analysis

Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener

Abstract:

Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors’ adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and to produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors’ adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random effects model. Subgroup analyses of adherence to multiple behaviours were undertaken according to mean survival years and year of publication. Results: A total of 3322 articles were generated through our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adhering to multiple healthy behaviours (all used a simple summative index method), with the prevalence of adherence ranging from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%). Subgroup analysis suggests that short-term survivors (< 5 years survival time) had relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term (> 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling of estimates according to the year of publication (since 2007) also suggests an increasing trend of adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is a greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with healthy lifestyle recommendations related to physical activity, fruit and vegetable, fiber, red/processed meat and sodium intake.
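
A minimal sketch of the random-effects pooling referred to above, using the DerSimonian-Laird method on hypothetical study prevalences and sample sizes (not the review's data):

```python
# Minimal DerSimonian-Laird random-effects pooling of prevalences (hypothetical inputs,
# not the review's data). Prevalences are pooled on the raw proportion scale for brevity.
import numpy as np

p = np.array([0.07, 0.15, 0.23, 0.31, 0.40])   # study-level adherence prevalences
n = np.array([150, 420, 300, 800, 260])        # study sample sizes

v = p * (1 - p) / n                            # within-study variances
w = 1.0 / v                                    # fixed-effect weights
p_fixed = np.sum(w * p) / np.sum(w)

Q = np.sum(w * (p - p_fixed) ** 2)             # Cochran's Q
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(p) - 1)) / C)        # between-study variance

w_star = 1.0 / (v + tau2)                      # random-effects weights
p_pooled = np.sum(w_star * p) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

print(f"pooled prevalence = {p_pooled:.3f} "
      f"(95% CI {p_pooled - 1.96*se:.3f} to {p_pooled + 1.96*se:.3f}), tau^2 = {tau2:.4f}")
```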

Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR

Procedia PDF Downloads 174
118 Boko Haram Insurrection and Religious Revolt in Nigeria: An Impact Assessment (2009-2015)

Authors: Edwin Dankano

Abstract:

The incessant and sporadic attacks on Nigerians pose a serious threat to the unity of Nigeria. The single biggest security nightmare to confront Nigeria since the amalgamation of the Southern and Northern protectorates by the British colonialists in 1914 is “Boko Haram”, a terrorist organization also known as “Jama’atul Ahli Sunnah Lidda’wati wal Jihad”, or “people committed to the propagation of the Prophet’s teachings and jihad”. The sect also upholds an ideology translated as “Western education is forbidden”, a rejection of Western civilization and institutions. By some estimates, more than 5,500 people were killed in Boko Haram attacks in 2014, and Boko Haram attacks have already claimed hundreds of lives and territories (caliphates) in early 2015. In total, the group may have killed more than 10,000 people since its emergence in the early 2000s. More than 1 million Nigerians have been displaced internally by the violence, and Nigerian refugee figures in neighboring countries continue to rise. This paper is predicated on secondary sources of data and anchored on Huntington’s theory of the clash of civilizations. As such, the paper argues that the rise of Boko Haram, with its violent disposition against Western values, is a counter-response to a Western civilization that is fast eclipsing other civilizations. The paper posits that the Boko Haram insurrection, going by its teachings and its destruction of churches, is a validation of the characterization of the sect as a religious revolt, which has resulted in a dire humanitarian situation in Adamawa, Borno, Yobe, Bauchi and Gombe states, all in north-eastern Nigeria, as evident in human casualties, human rights abuses, population displacement, the refugee debacle, livelihood crises and public insecurity. The paper submits that the Nigerian state should muster the needed political will in terms of viable anti-terrorism measures, build strong legitimate institutions that can adequately curb the menace of corruption that has engulfed the military hierarchy, respond proactively to the challenge of terrorism in Nigeria, and embrace a strategic paradigm shift from anti-terrorism to counter-terrorism as a strategy for containing the crisis that today threatens the secular status of Nigeria.

Keywords: Boko Haram, civilization, fundamentalism, Islam, religion revolt, terror

Procedia PDF Downloads 388
117 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow obtaining, with repeatability, parts with complex shapes, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work will measure the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young’s modulus, shear modulus and Poisson’s ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process using ANOVA, seeking the best roughness achievable with cutting forces that do not compromise the material structure or the tool life. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
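
As a worked illustration of the mechanical characterisation step (assumed values, not the authors' measurements), the sketch below computes the Vickers hardness from the indent diagonal and an indentation fracture toughness estimate using the commonly cited Anstis relation:

```python
# Worked example (assumed values, not measured data): Vickers hardness and an
# indentation fracture toughness estimate via the Anstis relation
#   K_IC = 0.016 * sqrt(E/H) * P / c**1.5
import math

P_N   = 9.81          # indentation load in newtons (~1 kgf), assumed
d_mm  = 0.045         # mean indent diagonal in mm, assumed
c_mm  = 0.090         # mean radial crack half-length in mm, assumed
E_GPa = 65.0          # Young's modulus from impulse excitation, assumed

# Vickers hardness number (kgf/mm^2) and in GPa
P_kgf = P_N / 9.80665
HV = 1.8544 * P_kgf / d_mm**2
H_GPa = HV * 9.80665e-3

# Anstis indentation toughness (SI units: Pa and m give Pa*sqrt(m))
K_IC = 0.016 * math.sqrt(E_GPa / H_GPa) * P_N / (c_mm * 1e-3) ** 1.5
print(f"HV   = {HV:.0f} kgf/mm^2  (~{H_GPa:.2f} GPa)")
print(f"K_IC ~ {K_IC * 1e-6:.2f} MPa*m^0.5")
```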

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 146
116 Other Cancers in Patients With Head and Neck Cancer

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether there was an increased risk of smoking- and alcohol-related cancers, as well as other cancers, in HNC patients, and to evaluate whether there is a difference in the rates of non-HNC primaries between Aboriginal and non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single center in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, there were 86 patients (26.9%) with a metachronous or synchronous non-HNC primary. Non-HNC primaries were actually significantly more common in the non-Aboriginal population than in the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a similar rate to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal patients than in Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p=NS). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%) and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age-Standardised Incidence Rates (ASIRs) in the general community. Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), which is far in excess of the ASIR, which estimates a lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear to be significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, in line with ASIRs, which estimate a lifetime risk of 5% (by age 85). Interestingly, the rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma, against an Australian average lifetime risk of 0.6% in the general population. As these are small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared to an ASIR of 17 cases per 100,000 persons, corresponding to a lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers.

Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal

Procedia PDF Downloads 62
115 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out under two different prior specifications, informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths were registered and 231 (72.19%) individuals were censored. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than those in the classical model. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when a subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special care on the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patients’ survival.
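
A minimal sketch of the Kaplan-Meier estimate used to compare the sex groups, applied to illustrative follow-up times rather than the Alamata cohort data:

```python
# Minimal Kaplan-Meier estimator, S(t) = prod_{t_i <= t} (1 - d_i / n_i),
# on illustrative (not the study's) follow-up times in months.
import numpy as np

def kaplan_meier(time, event):
    """Return distinct event times and the KM survival estimate at those times."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])           # distinct death times
    surv, s = [], 1.0
    for t in uniq:
        n_at_risk = np.sum(time >= t)            # still under follow-up at t
        d = np.sum((time == t) & (event == 1))   # deaths at t
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return uniq, np.array(surv)

# event = 1 means death observed, 0 means censored (illustrative values)
t_followup = [6, 12, 12, 20, 34, 40, 52, 60, 79, 90]
died       = [1,  0,  1,  1,  0,  1,  0,  1,  1,  0]
times, S = kaplan_meier(t_followup, died)
for t, s in zip(times, S):
    print(f"t = {t:5.1f} months  S(t) = {s:.3f}")
```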

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 180
114 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is necessary to use estimators that are robust against the contaminations which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn-estimator of scale and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and M-estimators of location with Huber and logistic psi-functions in the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contamination, and Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
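
A minimal sketch, not the authors' implementation, of a robust Xbar-type chart built from two of the estimators named above: the Hodges-Lehmann estimator for location and the average distance to the median (ADM) for dispersion. The Phase I data and the simple 3-sigma limit constants are assumptions.

```python
# Minimal sketch of a robust Xbar-type chart using the Hodges-Lehmann location
# estimator and the average distance to the median (ADM) for dispersion.
# Phase I data are simulated with a few outliers; the limit constants are crude
# normal-theory approximations, not the tuned constants a production chart needs.
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of Walsh (pairwise) averages, including the observations themselves."""
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return np.median(np.concatenate([x, pairs]))

def adm(x):
    """Average distance to the median, rescaled to be consistent for normal data."""
    return np.mean(np.abs(x - np.median(x))) / np.sqrt(2.0 / np.pi)

rng = np.random.default_rng(1)
m, n = 30, 5                                  # 30 Phase I subgroups of size 5
phase1 = rng.normal(10.0, 1.0, size=(m, n))
phase1[3, 0] += 8.0                           # occasional Phase I outliers
phase1[17, 2] -= 7.0

center = np.median([hodges_lehmann(g) for g in phase1])
sigma_hat = np.median([adm(g) for g in phase1])
ucl = center + 3 * sigma_hat / np.sqrt(n)
lcl = center - 3 * sigma_hat / np.sqrt(n)
print(f"center={center:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")

# Phase II monitoring: flag subgroups whose HL location falls outside the limits.
phase2 = rng.normal(10.8, 1.0, size=(10, n))  # shifted process
flags = [i for i, g in enumerate(phase2) if not lcl <= hodges_lehmann(g) <= ucl]
print("out-of-control Phase II subgroups:", flags)
```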

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 180
113 Strategic Public Procurement: A Lever for Social Entrepreneurship and Innovation

Authors: B. Orser, A. Riding, Y. Li

Abstract:

To inform government about how gender gaps in SME (small and medium-sized enterprise) contracting might be redressed, the research question was: What are the key obstacles to, and response strategies for, increasing the engagement of women business owners among SME suppliers to the Government of Canada? Thirty-five interviews were conducted with senior policymakers, supplier diversity organization executives, and expert witnesses to the Canadian House of Commons Standing Committee on Government Operations and Estimates, and the qualitative data were analysed using NVivo 11 software. High-order response categories included: (a) SME risk mitigation strategies, (b) SME procurement program design, and (c) performance measures. The primary obstacles cited were government red tape and long and complicated requests for proposals (RFPs). The majority of 'common' complaints occur when SMEs have questions about the federal procurement process. Witness responses included the use of outcome-based rather than prescriptive procurement practices, more agile procurement, simplified RFPs, and making payment within 30 days a procurement priority. Risk mitigation strategies included the provision of procurement officers to assess risks and opportunities for businesses and the development of more agile procurement procedures and processes. Recommendations to enhance program design included: improved definitional consistency of qualifiers and selection criteria; better co-ordination across agencies; clarification about how SME suppliers benefit from federal contracting; goal setting; specification of categories that are most suitable for women-owned businesses; and increasing primary contractor awareness about the importance of subcontract relationships. Recommendations also included third-party certification of eligible firms and the need to enhance SMEs’ financial literacy to reduce financial errors. Finally, there remains a need for clear and consistent pre-program statistics to establish baselines (by sector and issuing department), performance measures, targets based on the percentage of contracts granted, the value of contracts, the percentage of target employees (women, Indigenous), and community benefits including hiring local employees. The study advances strategies to enhance federal procurement programs to facilitate socio-economic policy objectives.

Keywords: procurement, small business, policy, women

Procedia PDF Downloads 105
112 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediments (VS) samples with known source contributions using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the source of sediments in the catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
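
The GLUE-style unmixing described above can be sketched as follows, with three spatial sources, assumed tracer signatures, uniform sampling of the source proportions and a relative-RMSE acceptance threshold; none of the values are the study's data.

```python
# Minimal GLUE-style sketch for a three-source geochemical mixing model.
# Tracer concentrations and the behavioural threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
# Mean tracer signatures (rows: western, central, eastern sources; columns: tracers)
sources = np.array([[12.0, 3.1, 450.0],
                    [ 9.5, 4.0, 380.0],
                    [15.2, 2.2, 520.0]])
target = np.array([13.8, 2.6, 492.0])          # target sediment sample

n_draws = 100_000
# Sample candidate source proportions uniformly on the simplex (Dirichlet(1,1,1))
props = rng.dirichlet(np.ones(3), size=n_draws)
pred = props @ sources                          # predicted mixture signatures

# Goodness-of-fit: relative RMSE between predicted and measured tracers
rel_rmse = np.sqrt(np.mean(((pred - target) / target) ** 2, axis=1))
behavioural = props[rel_rmse < 0.05]            # keep acceptable ("behavioural") solutions

lo, med, hi = np.percentile(behavioural, [5, 50, 95], axis=0)
for name, l, m, h in zip(["western", "central", "eastern"], lo, med, hi):
    print(f"{name:8s}: median {m:.2f}  (5th-95th percentile {l:.2f}-{h:.2f})")
```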

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 67
111 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blue shifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect can be the observed CMB dipole: The earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the Stefan-Boltzmann constant. The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
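
The photon-gas pressure and the dipole pressure difference invoked above can be checked with a short calculation; here the abstract's α is taken as the radiation constant a = 4σ/c (which gives the blackbody energy density aT⁴), and the Earth cross-section used for the drag estimate is the geometric one.

```python
# Worked check of the CMB photon-gas pressure P = a*T^4/3 and of the front/back
# pressure difference implied by the +/-0.35 mK dipole. 'a' is the radiation
# constant 4*sigma/c (the abstract's alpha); the Earth cross-section is geometric.
import math

sigma = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
c     = 2.99792458e8          # speed of light, m/s
a     = 4 * sigma / c         # radiation constant, J m^-3 K^-4 (~7.57e-16)

T  = 2.725                    # CMB temperature, K
dT = 0.35e-3                  # dipole amplitude, K

P = a * T**4 / 3
dP = a / 3 * ((T + dT)**4 - (T - dT)**4)   # front-minus-back pressure difference

R_earth = 6.371e6             # m
A = math.pi * R_earth**2
print(f"CMB photon-gas pressure P  ~ {P:.2e} Pa")
print(f"dipole pressure difference ~ {dP:.2e} Pa")
print(f"net 'CMB drag' on Earth    ~ {dP * A:.2e} N   (indeed tiny)")
```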

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 124
110 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed for automatically preserving the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the SOI is considered as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference plus-noise-ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problem due to local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partially adaptivity for less adaptive degrees of freedom and faster adaptive response has been proposed in the literature. Unfortunately, it has been shown that the conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems due to local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the required estimation for obtaining an appropriate steering vector. A matrix associated with the direction vector of signal sources is first created. Then projection matrices related to the matrix are generated and are utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the required signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
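
A minimal numpy sketch of the baseline GSC structure described above (quiescent weight, blocking matrix, complex-LMS lower branch); the array geometry, source angles, powers and step size are illustrative assumptions, and the paper's steering-vector re-estimation and subspace projection steps are not reproduced.

```python
# Minimal generalized sidelobe canceller (GSC) sketch for an M-element ULA:
# y = w_q^H x - w_a^H (B^H x), with w_a adapted by complex LMS. All scenario
# parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                                  # sensors, spacing in wavelengths

def steer(theta_deg):
    th = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))

a = steer(0.0)                                 # presumed look direction
w_q = a / (a.conj() @ a)                       # quiescent weight
# Blocking matrix: orthonormal basis of the null space of a^H (via SVD)
_, _, Vh = np.linalg.svd(a.conj().reshape(1, -1))
B = Vh[1:].conj().T                            # M x (M-1), satisfies a^H B = 0

# Simulated snapshots: desired signal at 0 deg, interferer at 40 deg, plus noise
N = 2000
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
i = 3.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steer(0.0), s) + np.outer(steer(40.0), i) + noise

# Complex LMS adaptation of the lower-branch weights w_a (minimize output power)
w_a = np.zeros(M - 1, dtype=complex)
mu = 1e-3
for k in range(N):
    x = X[:, k]
    u = B.conj().T @ x                         # blocked (desired-free) data
    e = w_q.conj() @ x - w_a.conj() @ u        # GSC output sample
    w_a += mu * u * np.conj(e)

y = w_q.conj() @ X - w_a.conj() @ (B.conj().T @ X)
print("output power before/after adaptation:",
      np.mean(np.abs(w_q.conj() @ X) ** 2).round(3),
      np.mean(np.abs(y) ** 2).round(3))
```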

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 105
109 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits

Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi

Abstract:

The New Zealand White breed of rabbit is one of the most commonly used, well-adapted exotic breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, the population of New Zealand White rabbits in a closed herd was evaluated for its genetic structure. Data on pedigree information (n=2508) for 18 years (1995-2012) were utilized for the study. Pedigree analysis and the estimation of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of the generation interval, coefficient of inbreeding and equivalent inbreeding were 1.489 years, 13.233 percent and 17.585 percent, respectively. The proportion of the population inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06 and 3.98, estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors contributing 50% of the genes (fₐ₅₀) to the gene pool of the reference population was 4, which might have led to the reduction in genetic variability and the increased amount of inbreeding. The extent of the genetic bottleneck, assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ), as expressed by the fₑ/fₐ ratio, was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15 with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good depth of pedigree. The realized effective population size was 14.93, which is critically low, and with the increasing trend of inbreeding, the situation is expected to worsen in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent, and animals with a higher GCI can be used to maintain balanced contributions from the founders. From the study, it was evident that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can help in carrying out similar studies to meet the demand for animal protein in developing countries.
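
A minimal sketch of how inbreeding coefficients of this kind are obtained from pedigree records with the tabular (numerator relationship matrix) method; the tiny example pedigree is hypothetical, and the study itself used ENDOG.

```python
# Tabular-method sketch for inbreeding coefficients: build the numerator
# relationship matrix A from a pedigree (parents listed before offspring);
# F_i = A[i, i] - 1. The small example pedigree is hypothetical.
import numpy as np

# pedigree: animal id -> (sire id, dam id); None for unknown founders
pedigree = {1: (None, None), 2: (None, None), 3: (1, 2),
            4: (1, 2), 5: (3, 4), 6: (3, 4), 7: (5, 6)}

ids = sorted(pedigree)                      # assumes parents precede offspring
idx = {a: k for k, a in enumerate(ids)}
n = len(ids)
A = np.zeros((n, n))

for i_id in ids:
    i = idx[i_id]
    s, d = pedigree[i_id]
    s_i = idx[s] if s is not None else None
    d_i = idx[d] if d is not None else None
    # diagonal: 1 + half the relationship between the parents
    A[i, i] = 1.0 + (0.5 * A[s_i, d_i] if s_i is not None and d_i is not None else 0.0)
    # off-diagonals with all previously processed animals
    for j_id in ids:
        j = idx[j_id]
        if j >= i:
            break
        a_js = A[j, s_i] if s_i is not None else 0.0
        a_jd = A[j, d_i] if d_i is not None else 0.0
        A[i, j] = A[j, i] = 0.5 * (a_js + a_jd)

for a_id in ids:
    print(f"animal {a_id}: F = {A[idx[a_id], idx[a_id]] - 1.0:.4f}")
```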

Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics

Procedia PDF Downloads 285
108 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) a stochastic event set that represents the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module, addressing policy conditions, that estimates the losses incurred as a result. The events module comprises events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, then, a set of stochastic events is developed that results in events with intensities corresponding to annual occurrence probabilities that are of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called Characteristic Events, CEs) are selected through a super-stratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel (SRC, Steel-Reinforced Concrete) and high-rise buildings.
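
The four-module structure described above can be sketched, in highly simplified form, as follows; the event set, attenuation law, damage curve and policy terms are invented placeholders, not components of the KCC models.

```python
# Highly simplified sketch of the four catastrophe-model modules described above:
# stochastic events -> hazard attenuation -> vulnerability -> financial terms.
# All numbers (event set, attenuation law, damage curve, policy terms) are invented.
import numpy as np

# 1) Stochastic event set: (magnitude, distance to site in km, annual rate)
events = np.array([[6.0,  40.0, 0.020],
                   [6.5,  25.0, 0.008],
                   [7.0,  60.0, 0.004],
                   [7.5,  30.0, 0.001]])

def hazard_intensity(mag, dist_km):
    # 2) Toy attenuation: intensity grows with magnitude, decays with distance
    return 10 ** (0.5 * mag - 1.2 * np.log10(dist_km + 10.0) - 0.5)

def damage_ratio(intensity):
    # 3) Toy vulnerability curve: repair cost as a fraction of replacement value
    return np.clip(intensity / (intensity + 10.0), 0.0, 1.0)

def financial_loss(ground_up, deductible=1e5, limit=5e6):
    # 4) Toy financial module: apply deductible and limit
    return np.clip(ground_up - deductible, 0.0, limit)

replacement_value = 1e7
losses, rates = [], []
for mag, dist, rate in events:
    gu = damage_ratio(hazard_intensity(mag, dist)) * replacement_value
    losses.append(financial_loss(gu))
    rates.append(rate)

losses, rates = np.array(losses), np.array(rates)
print("average annual loss:", round((losses * rates).sum()))
# Occurrence-basis exceedance rates: cumulative event rates sorted by loss size
order = np.argsort(losses)[::-1]
print("loss / annual exceedance rate:")
for L, r in zip(losses[order], np.cumsum(rates[order])):
    print(f"  {L:12,.0f}  {r:.4f}")
```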

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 258
107 The Relationship of Depression Risk and Gestational Diabetes Mellitus: A Systematic Review and Meta-Analysis

Authors: Yu Chen Su

Abstract:

Introduction: Gestational diabetes mellitus (GDM) refers to impaired glucose tolerance in pregnant women, impacting both the mother and newborn with short and long-term effects. It increases risks of preeclampsia, hypertension, type 2 diabetes, cesarean section, and preterm birth. GDM is associated with fetal macrosomia, shoulder dystocia, neonatal hypoglycemia, and future type 2 diabetes risk. A study on 6,421 pregnant women found 12% experienced high stress, linked to maladaptive coping and depressive emotions. Women with high-risk pregnancies may experience greater stress and depression. Research suggests GDM increases depression prevalence. A study on 632 Hispanic women with GDM showed severe stress and depression tendencies. Involving 95 women with GDM, 33.4% exhibited depression symptoms. Another study compared 180 GDM women to 186 with normal glucose levels, revealing higher depression levels in GDM women. They found GDM women were 1.85 times more likely to receive antidepressants during pregnancy and 1.69 times more likely to experience postpartum depression. Maternal stress and depressive symptoms during pregnancy are significant factors. Early identification by healthcare professionals can greatly benefit GDM women, their infants, and their families. Objectives: The purpose of this study was to investigate the association between gestational diabetes mellitus (GDM) and the risk of depression. Methods: This study reviewed and analyzed relevant literature on gestational diabetes mellitus (GDM) and depression in 6,876 patients. The literature search followed PRISMA guidelines and included databases like Embase, PubMed, MEDLINE, CINAHL, and Cochrane Library. Prospective or retrospective studies with relevant risk ratios and estimates were included, using a random-effects model for the analysis of depression risk correlation. Studies without depression data or relevant risks were excluded. The search period extended until October 2022. Results: Systematic review of 7 studies (6,876 participants) found a significant association (OR = 8.77, CI: 7.98-9.64, p < 0.05) between gestational diabetes mellitus (GDM) and higher depression risk compared to healthy pregnant women. Conclusions: Pregnancy is a significant life transition involving physiological, psychological, and social changes. Gestational diabetes poses challenges to women's physical and mental well-being. Sensitive healthcare professionals identifying issues early can greatly benefit women, babies, and the family.

Keywords: gestational diabetes, depression, systematic review, meta-analysis

Procedia PDF Downloads 65
106 Sustainable Technology and the Production of Housing

Authors: S. Arias

Abstract:

New housing developments, and the technological changes they imply, adapt the lifestyles of their residents, as well as new family structures and forms of work, to the particular needs of a specific group of people, which involves different techniques for dealing with, organizing, equipping and using a particular territory. Currently, owning one's own space is increasingly important, and cities are faced with the challenge of providing the opportunity to meet such demands, as well as the energy, water and waste removal necessary in the process of construction and occupation of new human settlements. To date, it has not been possible to respond fully to these demands and needs, resulting in cities that grow without control, badly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality is therefore associated with human comfort and with the sustainable development of natural resources. Applied to architecture, this concept involves the incorporation of new technologies into the entire construction process of a dwelling, changing the customs of developers and users; a greater effort must therefore be made in planning energy savings and thus reducing greenhouse gas (GHG) emissions, depending on the geographical location where development is planned. Since the techniques of occupation of the territory are not the same everywhere, it must be taken into account that these depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. In the analysis that must be undertaken to check the degree of sustainability of the place, it is necessary to make estimates of the energy used in artificial air conditioning and lighting. In the same way, it is necessary to diagnose the availability and distribution of the water resources used for hygiene and for the cooling of artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit in the process of proposing sustainability recommendations for architectural spaces in search of energy savings, rational use of water and optimization of natural resources. The above can be carried out through the development of a sustainable building code that provides technical recommendations suited to the regional characteristics of each study site. These codes would seek to build a basis for promoting building regulations applicable to new human settlements, seeking to generate quality, protection and safety in them at the same time. These building regulations must be consistent with other national, state and municipal regulations, such as the laws on human settlements, urban development and zoning regulations.

Keywords: building regulations, housing, sustainability, technology

Procedia PDF Downloads 339
105 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for sustenance purposes or to generate farm income. Many of the concentrated cassava farmers reside in Nimba, a north-eastern county that borders two other economies: the Republics of Cote D’Ivoire and Guinea. With a high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberia’s agriculture policies to increase the competitiveness of valued agriculture crops, there is a need to examine the level of resource-use efficiency for many agriculture crops. However, there is a scarcity of information on the efficiency of many agriculture crops, including cassava. Hence, applying an output-oriented method, the study seeks to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate a sample for the study. From 216 cassava farmers, data related to on-farm attributes and socio-economic and institutional factors were collected. Stochastic frontier models of production and revenue, using the translog functional form, were used to determine the level of revenue efficiency and its determinants. The results showed that most of the cassava farmers are male (60%). Many of the farmers are either married, engaged or living together with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, which corresponds to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean revenue efficiency of 12.9%. This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiency. Among the significant determinants of revenue efficiency, age and group membership had negative effects on the revenue efficiency of cassava production, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends the setting up and incentivizing of farmer field schools for cassava farmers, primarily to share their farming experiences with others and to learn robust cultivation techniques of sustainable agriculture. Also, farm managers and farmers should consider a fixed wage rate in labor contracts for all stages of cassava farming.
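
For reference, a generic translog stochastic revenue frontier of the kind referred to above can be written as follows (a textbook specification, not necessarily the authors' exact model), with revenue efficiency obtained from the one-sided error term:

```latex
\ln R_i = \beta_0 + \sum_{j} \beta_j \ln x_{ij}
        + \tfrac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln x_{ij} \ln x_{ik}
        + v_i - u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \sim N^{+}(0, \sigma_u^2),
\qquad \mathrm{RE}_i = \exp(-u_i)
```

Here R_i is farm revenue, the x_ij are inputs (and prices, in the revenue formulation), v_i is random noise, u_i is revenue inefficiency, and RE_i ∈ (0, 1] is the revenue efficiency score reported as a percentage in the abstract.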

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 119
104 Development of a Multi-Variate Model for Matching Plant Nitrogen Requirements with Supply for Reducing Losses in Dairy Systems

Authors: Iris Vogeler, Rogerio Cichota, Armin Werner

Abstract:

Dairy farms are under pressure to increase productivity while reducing environmental impacts. Effective fertiliser management practices are critical to achieving this. Determination of optimum nitrogen (N) fertilisation rates, which maximise pasture growth and minimise N losses, is challenging due to variability in plant requirements and in the likely near-future supply of N by the soil. Remote sensing can be used for mapping the N nutrition status of plants and for rapidly assessing the spatial variability within a field. However, an algorithm is lacking which relates the N status of the plants to the expected yield response to additions of N. The aims of this simulation study were (i) to develop a multi-variate model for determining the N fertilisation rate for a target percentage of the maximum achievable yield based on the pasture N concentration, (ii) to use the algorithm for guiding fertilisation rates, and (iii) to evaluate the model regarding pasture yield and N losses, including N leaching, denitrification and volatilisation. A simulation study was carried out using the Agricultural Production Systems Simulator (APSIM). The simulations were done for an irrigated ryegrass pasture in the Canterbury region of New Zealand. A multi-variate model was developed and used to determine the monthly required N fertilisation rates based on the pasture N content prior to fertilisation and targets of 50, 75, 90 and 100% of the potential monthly yield. These monthly optimised fertilisation rules were evaluated by running APSIM for a ten-year period to provide yield and N loss estimates from both non-urine and urine-affected areas. Comparison with typical fertilisation rates of 150 and 400 kg N/ha/year was also done. Assessment of pasture yield and leaching from fertiliser and urine patches indicated a large reduction in N losses when N fertilisation rates were controlled by the multi-variate model. However, the reduction in leaching losses was much smaller when taking into account the effects of urine patches. The proposed approach, based on biophysical modelling, to develop a multi-variate model for determining optimum N fertilisation rates dependent on pasture N content is very promising. Further analysis under different environmental conditions, as well as validation, is required before the approach can be used to help adjust fertiliser management practices to temporal and spatial N demand based on the nitrogen status of the pasture.
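
As an illustration of the kind of multi-variate rule described above (and of the "surface response function" named in the keywords), the sketch below fits a quadratic surface mapping pasture N concentration and target yield fraction to an N fertilisation rate; the training points are entirely hypothetical and do not come from the APSIM runs.

```python
# Illustrative fit of a quadratic response surface
#   N_rate = f(pasture N %, target yield fraction)
# by ordinary least squares. The training points are hypothetical, not APSIM output.
import numpy as np

# (pasture N %, target fraction of potential yield, required N rate in kg N/ha for the month)
data = np.array([
    [2.0, 0.50, 20.0], [2.0, 0.75, 35.0], [2.0, 1.00, 60.0],
    [3.0, 0.50, 12.0], [3.0, 0.75, 25.0], [3.0, 1.00, 45.0],
    [4.0, 0.50,  5.0], [4.0, 0.75, 15.0], [4.0, 1.00, 30.0],
])
Nc, frac, rate = data[:, 0], data[:, 1], data[:, 2]

# Quadratic design matrix: 1, Nc, frac, Nc^2, frac^2, Nc*frac
X = np.column_stack([np.ones_like(Nc), Nc, frac, Nc**2, frac**2, Nc * frac])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

def n_rate(pasture_n, target_frac):
    x = np.array([1.0, pasture_n, target_frac,
                  pasture_n**2, target_frac**2, pasture_n * target_frac])
    return float(x @ coef)

print("suggested N rate at 2.5% pasture N, 90% target:",
      round(n_rate(2.5, 0.90), 1), "kg N/ha for the month")
```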

Keywords: APSIM modelling, optimum N fertilization rate, pasture N content, ryegrass pasture, three-dimensional surface response function

Procedia PDF Downloads 124
103 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya

Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban

Abstract:

The Himalayan glaciers are a potential source of perennial water supply to Asia’s major river systems, such as the Ganga, Brahmaputra and the Indus. In order to improve our understanding of the sources of precipitation and the hydrograph components of the interior Himalayan glaciers, it is important to decipher the sources of moisture and their contribution to the glaciers in this river system. To this end, we conducted an extensive pilot study on the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contributions of rain, groundwater and spring water are negligible, based on field studies and the available literature. To validate the results obtained from hydrograph separation using the above method, snow and ice melt ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD in rain samples range from -5.3‰ to -20.8‰ and from -31.7‰ to -148.4‰, respectively. It is noteworthy that the rain samples showed enriched values in the early season (July-August) and became progressively depleted toward the end of the season (September). This could be due to the ‘amount effect’. Similarly, old snow samples showed enriched isotopic values compared to fresh snow. This could be because of sublimation processes operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and from -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream they range from -12.7‰ to -16.2‰ and from -82.9‰ to -112.7‰, respectively. The mean deuterium excess (d-excess) value in all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances. Our detailed estimates of the hydrograph separation of Sutri Dhaka meltwater using isotopic hydrograph separation and glaciological field methods agree within their uncertainties; the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the sources of moisture and the controlling mechanisms of the isotopic characteristics of Sutri Dhaka glacier water, and helps in understanding the snow and ice melt components in the Chandra basin, western Himalaya.
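
A minimal sketch of the two-component isotopic hydrograph separation and the d-excess definition used above; the δ values below are placeholders, not the measured Sutri Dhaka data.

```python
# Two-component isotope hydrograph separation:
#   f_ice = (delta_stream - delta_snow) / (delta_ice - delta_snow),  f_snow = 1 - f_ice
# and the deuterium excess, d = dD - 8 * d18O. The delta values below (in per mil)
# are placeholders, not the measured Sutri Dhaka values.
d18O_stream = -14.5
d18O_snow   = -18.0
d18O_ice    = -13.0

f_ice = (d18O_stream - d18O_snow) / (d18O_ice - d18O_snow)
f_snow = 1.0 - f_ice
print(f"ice-melt fraction  ~ {f_ice:.2f}")
print(f"snow-melt fraction ~ {f_snow:.2f}")

def d_excess(dD, d18O):
    """Deuterium excess in per mil; values well above ~10 per mil are commonly
    read as pointing to non-oceanic (e.g. Western Disturbance) moisture sources."""
    return dD - 8.0 * d18O

print("d-excess of a sample with dD = -100, d18O = -14.6:", d_excess(-100.0, -14.6))
```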

Keywords: D-excess, hydrograph separation, Sutri Dhaka, stable water isotope, western Himalaya

Procedia PDF Downloads 145
102 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, services for children with autism have become a high priority in Bangladesh. Sensory deficits hamper not only a child’s normal development but also the learning process and functional independence. The purpose of this study was to find out the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research. This study enrolled eighty children with autism and their parents by using a systematic sampling method. Data were collected with the Short Sensory Profile (SSP) assessment tool, which consists of 38 questionnaire items; qualified graduate Occupational Therapists interviewed parents and observed the children’s responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 to identify the items yielding the highest reported sensory processing dysfunction among these children. This study revealed that almost 78.25% of the children with autism had significant sensory processing dysfunction based on their sensory responses to the relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them. On the other hand, most of them (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. In addition, 64 children showed a definite difference in sensory processing, meaning that these children suffered from sensory difficulties, which had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when ascertaining the treatment. Early identification of sensory problems is therefore very important and will help to provide appropriate sensory input to minimize maladaptive behavior and help children reach the normal range of adaptive behavior.
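
Scoring with the Short Sensory Profile classifies a child's total score into "typical performance", "probable difference" or "definite difference" bands. The sketch below illustrates the general form of such a classification; the cut-off values shown are placeholders, and the published SSP norms should be used in any real analysis.

```python
def classify_ssp_total(total_score, typical_min=155, probable_min=142):
    """Classify a Short Sensory Profile total score (38 items, each scored 1-5).

    The thresholds here are placeholders illustrating the scheme; the
    cut-offs from the published SSP manual should be used in practice.
    """
    if total_score >= typical_min:
        return "typical performance"
    if total_score >= probable_min:
        return "probable difference"
    return "definite difference"

scores = [180, 150, 120]            # illustrative totals (possible range 38-190)
print([classify_ssp_total(s) for s in scores])
```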

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 54
101 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the evidence for causality has not been consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW) 2.64 [95% CI, 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW 7.33 [95% CI, 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
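
In its simplest fixed-effect form, the inverse-variance weighted estimate used here is a weighted average of per-SNP Wald ratios. The sketch below illustrates that calculation on made-up summary statistics, not the GLGC or VOYAGER data.

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance weighted MR estimate.

    beta_exposure : SNP-exposure associations (e.g. SNP effect on LDL-C)
    beta_outcome  : SNP-outcome associations (e.g. log hazard ratio)
    se_outcome    : standard errors of the SNP-outcome associations
    """
    wald = beta_outcome / beta_exposure            # per-SNP ratio estimates
    weights = (beta_exposure / se_outcome) ** 2    # inverse-variance weights
    beta_ivw = np.sum(weights * wald) / np.sum(weights)
    se_ivw = np.sqrt(1.0 / np.sum(weights))
    return beta_ivw, se_ivw

# Illustrative summary statistics for three SNPs.
b_x = np.array([0.10, 0.08, 0.12])
b_y = np.array([0.05, 0.03, 0.07])
se_y = np.array([0.02, 0.02, 0.03])
beta, se = ivw_estimate(b_x, b_y, se_y)
print(f"IVW log-HR = {beta:.3f} (SE {se:.3f}), HR = {np.exp(beta):.2f}")
```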

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 91
100 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on estimates of rock strength at several stages of a mine’s life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material’s rock strength. Common laboratory tests, such as the rod mill, ball mill, and uniaxial compressive strength tests, share common shortcomings: time, sample preparation, bias in plug selection, cost, repeatability, and the amount of sample needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work presented builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e. the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller-cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology used to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study based on drilling data recorded while drilling an exploration well in Australia.
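
The intrinsic specific energy discussed above follows the usual drilling specific-energy decomposition into a thrust term and a rotary (torque) term. The sketch below shows that generic calculation under the idealisation of a sharp bit with no matrix-rock contact; the input values are invented for illustration and are not taken from the experiments.

```python
import math

def drilling_specific_energy(thrust_n, torque_nm, rpm, rop_m_per_h, bit_area_m2):
    """Generic drilling specific energy (MPa): thrust term + rotary term.

    E = W/A + 2*pi*N*T / (A * ROP), with N in rev/s and ROP in m/s.
    For an ideally sharp diamond core head (no wear-flat contact),
    this total approaches the intrinsic specific energy.
    """
    rop_m_per_s = rop_m_per_h / 3600.0
    rev_per_s = rpm / 60.0
    thrust_term = thrust_n / bit_area_m2
    rotary_term = 2.0 * math.pi * rev_per_s * torque_nm / (bit_area_m2 * rop_m_per_s)
    return (thrust_term + rotary_term) / 1.0e6  # Pa -> MPa

# Illustrative values only (not from the experiments described above).
print(drilling_specific_energy(thrust_n=5000, torque_nm=10,
                               rpm=600, rop_m_per_h=6.0,
                               bit_area_m2=2.7e-3))
```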

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 128
99 Capital Market Reaction to Governance and Disclosure Violations: Evidence from the Saudi Arabian Capital Market

Authors: Nasser Alsadoun

Abstract:

Today's companies in the Saudi Arabian capital market must comply with strict criteria and adhere to rigid corporate governance rules and continuous disclosure requirements. Unlike other regulators in the region, the decision makers of the Capital Market Authority (hereafter CMA) of Saudi Arabia believe that announcing economic sanctions and penalties for non-compliant firms will foster more effective regulatory compliance and hence improve the quality of financial reporting. An implied argument put forward by opponents, however, is that such penalties are unnecessary and onerous for non-compliant firms. Over the last years, the CMA has publicly announced several economic fines levied on listed companies for failing to comply with corporate governance and continuous disclosure regulation clauses, with the fines ranging from SR 50,000 to SR 100,000 for each violation. Economic theory suggests that rational investors make decisions based on a cost-benefit principle. The regulatory intervention made by the CMA through the announcement of economic sanctions has been costly to society (the economy), in the hope that it improves the transparency of financial statements. It is argued, therefore, that the threat of regulators and economic sanctions will provide incentives for firms’ managers to report more relevant and reliable accounting information, and the benefit of such announcements is likely to be reflected in the quality of the financial reports. Yet, the economic consequences of these fine announcements for non-compliant firms in the Saudi Arabian market have not been examined. Thus, this study attempts to empirically examine whether market participants price the supposed benefits of rigid governance and disclosure rules in the Saudi market. The study employs an event study methodology to assess the impact of CMA economic sanction announcements on the market price of non-compliant firms. The study also estimates and examines the bid-ask spread behavior of the violating firms around the CMA announcements. The findings indicate that the CMA fine announcements for failing to comply with governance and disclosure rules do not appear to play any significant role in securities pricing. In addition, tests of bid-ask behavior do not indicate any significant increases in information asymmetry surrounding these announcements. While the CMA has set many goals to increase the awareness of listed companies of best governance and disclosure practices, it seems it needs to develop further goals to improve market efficiency and increase investor and public awareness.
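
An event study of this type typically fits a market model over a pre-event estimation window and then cumulates abnormal returns around the announcement date. The sketch below reproduces that standard procedure on synthetic return series, not the Saudi market data analysed in the study.

```python
import numpy as np

def car_around_event(stock_ret, market_ret, event_idx,
                     est_window=100, event_window=(-2, 2)):
    """Market-model event study: estimate alpha/beta before the event,
    then cumulate abnormal returns over the event window."""
    est_slice = slice(event_idx - est_window + event_window[0],
                      event_idx + event_window[0])
    beta, alpha = np.polyfit(market_ret[est_slice], stock_ret[est_slice], 1)
    ev = slice(event_idx + event_window[0], event_idx + event_window[1] + 1)
    abnormal = stock_ret[ev] - (alpha + beta * market_ret[ev])
    return abnormal.sum()                      # cumulative abnormal return

# Synthetic daily returns for illustration only.
rng = np.random.default_rng(0)
mkt = rng.normal(0.0003, 0.01, 250)
stk = 0.0001 + 1.1 * mkt + rng.normal(0, 0.012, 250)
print(f"CAR(-2,+2): {car_around_event(stk, mkt, event_idx=200):.4f}")
```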

Keywords: governance and disclosure violations, financial reporting quality, regulatory intervention, market efficiency

Procedia PDF Downloads 297
98 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Authors: Alexander J. Severinsky

Abstract:

Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. The air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author’s skepticism that current changes can be explained by a "~1 °C" global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore is that of an atmospheric temperature rise. The study utilizes an analysis of the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative force of GHGs with a high level of scientific understanding is near 4.7 W/m² on average over the Earth’s entire surface in 2018, compared to pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water adds approximately ~15 W/m². In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increase in the concentration of the GHGs, primarily carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimate the effects on major Earth ecosystems. This additional force of nearly 20 W/m² causes an increase in ice melting at an additional rate of over 90 cm/year, a green-leaf temperature increase of nearly 5 °C, and a work energy increase of air of approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in the average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.
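
For context, the textbook way of mapping a radiative forcing to an equilibrium temperature change is through a climate sensitivity parameter, ΔT = λ·ΔF. The sketch below applies that linearised relation with an assumed, purely illustrative value of λ; it is not the multi-disciplinary estimate derived in this abstract.

```python
def equilibrium_warming(forcing_w_per_m2, sensitivity_k_per_w_m2=0.8):
    """Equilibrium temperature response to a radiative forcing.

    Uses the linearised relation dT = lambda * dF, where lambda is the
    climate sensitivity parameter (K per W/m^2). The default value is an
    illustrative assumption, not a figure taken from the study above.
    """
    return sensitivity_k_per_w_m2 * forcing_w_per_m2

# Example: the direct GHG forcing quoted above, ~4.7 W/m^2 since pre-industrial
# times; note this gives the equilibrium response, not the realised warming.
print(f"dT = {equilibrium_warming(4.7):.1f} K")
```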

Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air

Procedia PDF Downloads 94
97 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes

Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi

Abstract:

Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the estimation of these variables' effects on the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and what their effects on kidney disease progression are. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
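
The two-stage analysis described above, PCA to collapse correlated biochemical markers into uncorrelated components followed by a cumulative-odds (proportional-odds) logit on the ordinal kidney-disease stage, can be sketched as follows. The dataset, variable names and sample size are placeholders, not the Messina cohort.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Placeholder dataset: biochemical markers plus an ordinal kidney-disease stage (0-3).
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "hba1c": rng.normal(7.5, 1.2, n),
    "glycemia": rng.normal(140, 30, n),
    "creatinine": rng.normal(1.1, 0.3, n),
    "total_chol": rng.normal(190, 35, n),
    "ldl": rng.normal(110, 30, n),
    "hdl": rng.normal(50, 12, n),
    "triglycerides": rng.normal(150, 60, n),
})
df["ckd_stage"] = rng.integers(0, 4, n)

# Stage 1: PCA on the standardised biochemical variables.
z = StandardScaler().fit_transform(df.drop(columns="ckd_stage"))
pcs = PCA(n_components=3).fit_transform(z)
X = pd.DataFrame(pcs, columns=["pc1", "pc2", "pc3"])

# Stage 2: proportional-odds (cumulative) logit on the ordinal outcome.
model = OrderedModel(df["ckd_stage"], X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[:3]))   # odds ratios for the three components
```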

Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes

Procedia PDF Downloads 26
96 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously

Authors: S. Mehrab Amiri, Nasser Talebbeydokhti

Abstract:

Modeling sediment transport processes by means of numerical approaches often poses severe challenges. A number of techniques have therefore been suggested to solve the flow and sediment equations in decoupled, semi-coupled or fully coupled forms. Furthermore, in order to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed for solving these equations, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centered scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is proposed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed-Forward Back-Propagation (FFBP) type estimates the sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations at an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable-bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating the sediment flow discharge leads to more accurate results.
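
As a minimal illustration of a fully coupled shallow-water/Exner update (a first-order 1-D sketch, not the 2-D MUSCL/RK4 scheme of the paper), the code below advances depth, momentum and bed elevation together using a Lax-Friedrichs-type flux and a simple Grass-type power law standing in for the neural-network sediment discharge estimate.

```python
import numpy as np

g, porosity, a_g = 9.81, 0.4, 0.001   # gravity, bed porosity, Grass constant

def fluxes(h, hu):
    """Fluxes of the coupled 1-D shallow-water / Exner system."""
    u = hu / np.maximum(h, 1e-8)
    qs = a_g * u**3                       # Grass-type sediment transport law
    return np.vstack([hu,                               # mass
                      hu * u + 0.5 * g * h**2,          # momentum
                      qs / (1.0 - porosity)])           # bed update (Exner)

def step(U, dx, dt):
    """First-order Lax-Friedrichs update with a bed-slope source term."""
    h, hu, zb = U
    F = fluxes(h, hu)
    Un = U.copy()
    Un[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
                  - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    # bed-slope source term on the momentum equation: -g * h * dzb/dx
    Un[1, 1:-1] -= dt * g * h[1:-1] * (zb[2:] - zb[:-2]) / (2.0 * dx)
    return Un

# Dam break over an initially flat movable bed (illustrative parameters).
nx, dx, dt = 200, 0.5, 0.005
h0 = np.where(np.arange(nx) < nx // 2, 2.0, 0.5)
U = np.vstack([h0, np.zeros(nx), np.zeros(nx)])   # rows: h, hu, z_bed
for _ in range(400):
    U = step(U, dx, dt)
print("max depth:", round(U[0].max(), 3), "max bed change:", round(abs(U[2]).max(), 5))
```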

Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations

Procedia PDF Downloads 182
95 The Relations between Coping Strategies, Caregiver Bonding, and Dating Violence of Emerging Adults: Cross-Cultural Comparison between China and Turkiye

Authors: Zubaidan Yushan, Hudayar Cıhan

Abstract:

Turkiye and China both have collective cultures but different cultural backgrounds: different religions and different levels of economic development. The aim of this study is to test the moderating effect of caregiver bonding on the relationship between coping strategies and dating violence among unmarried emerging adults in China and Turkiye. Participants' ages ranged from 19 to 26 years (M = 23.66, SD = 3.66); the sample of unmarried emerging adults comprised 171 Turkish participants (72.5% women, 24% men, 3.5% preferred not to say) and 170 Chinese participants (71.8% women, 21.8% men, 6.5% preferred not to say). All participants had been in a relationship for more than six months. Participants completed the Conflict Tactics Scales (CTS2), the COPE Inventory, and the Parental Bonding Instrument (PBI). Moderation analyses examining the relationship between coping strategies and dating violence, with caregiver bonding as the moderator, were performed using jamovi. Significance was tested using the bootstrapping method with bias-corrected confidence estimates. The outcome variable for the analysis was dating violence, and the predictor variable was coping strategies. The moderator variable evaluated for the analysis was parental attachment. Before the analysis, mean-centered scores were calculated for each predictor and moderator, and the moderation analysis was conducted separately for each outcome. The results show that, in China, the over-protection sub-dimension moderates the relationship between avoidance coping and psychological aggression perpetration. In Turkiye, the care sub-dimension moderates the relationship between avoidance coping and injury victimization, over-protection moderates the relationship between social support coping and injury victimization, and the care sub-dimension also moderates the relationship between avoidance coping and sexual coercion perpetration. Overall, caregiver bonding moderates the relationship between coping strategies and dating violence, which may be explained by the fact that ways of coping with problems are learned and that people are influenced by their parents when facing problems. Coping habits therefore tend to become fixed, with each person relying on his or her habitual solutions; sometimes these solutions become a justification for the injured or abusive partner. The quality of attachment to caregivers can regulate this pattern. The results are partly similar to and partly different from those in the previous literature, and these mixed results indicate the need for further exploration. Many other factors, such as alcohol and drug use, violence history, and pathological problems, may explain these differences. In addition, factors such as the study setting and the measurement scales applied may also affect the results.
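
Moderation of this kind is typically tested as an interaction between the mean-centred predictor and moderator, with bootstrapped confidence intervals for the interaction coefficient. The sketch below reproduces that generic setup on simulated scores (a percentile bootstrap is used for simplicity), not the CTS2/PBI data collected for this study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 341                                  # combined sample size reported above
coping = rng.normal(0, 1, n)             # e.g. avoidance coping (mean-centred)
overprotection = rng.normal(0, 1, n)     # PBI over-protection (mean-centred)
violence = 0.3 * coping + 0.2 * overprotection \
           + 0.25 * coping * overprotection + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([coping, overprotection,
                                     coping * overprotection]))
fit = sm.OLS(violence, X).fit()
print("interaction b =", round(fit.params[3], 3))

# Percentile bootstrap CI for the interaction (moderation) coefficient.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(sm.OLS(violence[idx], X[idx]).fit().params[3])
print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```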

Keywords: caregiver bonding, coping strategies, dating violence, emerging adulthood, cross-cultural comparison

Procedia PDF Downloads 44
94 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs

Authors: Szymon Kuczynski

Abstract:

Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has gradually become more focused on new unconventional targets, particularly the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference between the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP - Gas Initially in Place). This value depends on the sorption capacity, gas saturation, and mutual interactions between gas, water, and rock. Determining the reservoir type in the initial exploration phase brings essential knowledge, which has an impact on decisions related to production. Studying the impact of porosity on the phase envelope shift eliminates errors and improves production profitability. The confinement phenomenon affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is a basic consideration for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the confined fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), which occur in shale formations, the effect of the wall interaction with the fluid particles becomes significant. The nanopore size is similar to the diameter of the fluid particles, and the region in which particles flow without interacting with the pore wall is almost equal in size to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties. Therefore, at the nanoscale, the critical pressure and temperature and the flow characteristics of hydrocarbons are strongly influenced by the interaction of the fluid particles with the pore wall. It can be concluded that the size of individual pores is crucial at the nanoscale because the above-described effect becomes possible there. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is being conducted to explain the mechanisms of fluid flow in the nanopores and of gas extraction from porous media by desorption.
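
One common way to quantify the confinement effect described above is to shift the pseudo-critical temperature and pressure as a function of the ratio of molecular diameter to pore radius. The sketch below uses a generic quadratic correlation with placeholder coefficients, purely to illustrate the idea; it is not a specific published correlation.

```python
def confined_critical_properties(tc_bulk_k, pc_bulk_mpa,
                                 sigma_nm, pore_radius_nm,
                                 a=0.94, b=0.24):
    """Shift bulk critical properties for nanopore confinement.

    Uses a generic quadratic form dTc/Tc = a*(sigma/r) - b*(sigma/r)^2;
    the coefficients a and b are illustrative placeholders, not fitted values.
    """
    x = sigma_nm / pore_radius_nm
    shift = a * x - b * x**2
    return tc_bulk_k * (1.0 - shift), pc_bulk_mpa * (1.0 - shift)

# Methane (Tc ~ 190.6 K, Pc ~ 4.6 MPa, molecular diameter ~ 0.38 nm) in a 2 nm pore.
tc_c, pc_c = confined_critical_properties(190.6, 4.6, 0.38, 2.0)
print(f"confined Tc ~ {tc_c:.1f} K, confined Pc ~ {pc_c:.2f} MPa")
```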

Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas

Procedia PDF Downloads 327
93 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogeneous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The trade-off between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, support their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellite due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellite because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area due to the heterogeneity of the landscape in the region. LST data will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data from the satellites will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organizations version 3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated for each land-use class and over the different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
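
For reference, a split-window retrieval of the kind mentioned above combines the brightness temperatures of two adjacent thermal bands with emissivity and column water vapour terms. The sketch below shows one common formulation with placeholder coefficients; operational retrievals use coefficients calibrated for the specific sensor (e.g. Landsat-8 TIRS bands 10/11 or Himawari-8 AHI) and atmospheric conditions.

```python
def split_window_lst(t_i, t_j, emissivity, emis_diff, water_vapour,
                     c=(-0.3, 1.8, 0.4, 55.0, -3.0, -120.0, 20.0)):
    """Split-window LST (K) in a common two-band formulation:

    LST = Ti + c1*(Ti-Tj) + c2*(Ti-Tj)^2 + c0
          + (c3 + c4*W)*(1 - eps) + (c5 + c6*W)*d_eps

    t_i, t_j     : brightness temperatures of the two TIR bands (K)
    emissivity   : mean surface emissivity of the two bands
    emis_diff    : emissivity difference between the bands
    water_vapour : column water vapour W (g/cm^2)
    c            : placeholder coefficients, not calibrated values
    """
    dt = t_i - t_j
    return (t_i + c[1] * dt + c[2] * dt**2 + c[0]
            + (c[3] + c[4] * water_vapour) * (1.0 - emissivity)
            + (c[5] + c[6] * water_vapour) * emis_diff)

# Illustrative brightness temperatures and surface/atmosphere inputs.
print(split_window_lst(t_i=300.2, t_j=298.9, emissivity=0.975,
                       emis_diff=0.004, water_vapour=2.5))
```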

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 63