Search results for: accuracy ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8018

878 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the applying industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a base for the estimation of the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data like material type and colour. The third group collects surface quality and tolerance information, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partners’ estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time, the deviation of the prognosis was also improved by 50%. Furthermore, the influence of the mentioned parameter groups, considering the manufacturing order, was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur, and the related data are collected.
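As an illustration only (not the authors' implementation), the sketch below shows how such a setup-period estimator could be trained on the four parameter groups described above; the CSV file name and all column names are hypothetical placeholders.

```python
# Minimal sketch: a small feed-forward neural network regressor on grouped
# categorical/numeric setup parameters. File and column names are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("setup_events.csv")          # one row per real setup event
categorical = ["product_name", "material_type", "colour", "machine_type", "work_shift"]
numeric = ["item_count", "finest_surface", "tightest_tolerance", "tool_count"]

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])
model = Pipeline([
    ("pre", pre),
    ("ann", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    df[categorical + numeric], df["setup_minutes"], test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print("MAE [min]:", mean_absolute_error(y_test, model.predict(X_test)))
```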

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 241
877 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), finding 35 benign and 12 malignant. All MR images were acquired at 1.5 T, with a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After a manual segmentation of the lesions, done by a radiologist, and the extraction of 150 radiomic features (30 features per 5 subsequent times), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A NaiveBayes algorithm working on 79 features selected by the TWIST system proved to be the best performing ML system, with a sensitivity of 96%, a specificity of 78% and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules which an expert radiologist could not classify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of a non-specific nodule at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the necessity for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
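A minimal sketch of the kind of classifier and "ab-ba" crossover evaluation described above is given below; the feature matrix and labels are random placeholders standing in for the 79 selected radiomic features, not study data.

```python
# Illustrative only: Gaussian Naive Bayes with sensitivity/specificity/accuracy
# computed over a two-way (ab-ba) training-testing crossover.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(75, 79))     # placeholder for 79 selected radiomic features
y = rng.integers(0, 2, size=75)   # placeholder outcome labels (1 = malignant)

X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)

def evaluate(train_X, train_y, test_X, test_y):
    clf = GaussianNB().fit(train_X, train_y)
    tn, fp, fn, tp = confusion_matrix(test_y, clf.predict(test_X)).ravel()
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / (tp + tn + fp + fn)

# train on A / test on B, then train on B / test on A, and average the metrics
metrics = np.mean([evaluate(X_a, y_a, X_b, y_b), evaluate(X_b, y_b, X_a, y_a)], axis=0)
print("sensitivity, specificity, accuracy:", metrics.round(2))
```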

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 265
876 The Examination of Cement Effect on Isotropic Sands during Static, Dynamic, Melting and Freezing Cycles

Authors: Mehdi Shekarbeigi

Abstract:

The consolidation of loose substrates as well as substrate layers through the addition of stabilizing materials is one of the most commonly used road construction techniques. Cement, lime, and flax, as well as asphalt emulsion, are common materials used for soil stabilization to enhance the soil’s strength and durability properties. Cement can readily be used to stabilize permeable materials such as sand within a relatively short time. In this research, typical Portland cement is selected for the stabilization of isotropic sand; the effect of static and cyclic loading on the behavior of these soils has been examined with various percentages of Portland cement. Thus, firstly, the soil's general features are investigated, and then static tests, including direct shear, density and single-axis tests, and California Bearing Ratio, are performed on the samples. After that, the dynamic behavior of cement on silica sand with the same grain size is analyzed. These experiments are conducted on samples with cement contents of 3, 6, and 9 percent and effective confining pressures of 0 to 1200 kPa in 200 kPa steps, according to American Society for Testing and Materials D 3999 standards. Also, to test the effect of temperature, the samples are subjected to 0, 5, 10, and 20 melting and freezing cycles. Results of the static tests showed that increasing the cement percentage increases the soil density and shear strength. The increase in single-axis compressive strength is higher for samples with higher cement content and lower densities. The results also illustrate the relationship between single-axis compressive strength and cement content by weight. Results of the dynamic experiments indicate that increasing the number of loading cycles and of melting and freezing cycles enhances permeability and decreases the applied pressure. According to the results of this research, it can be stated that samples containing 9% cement have the highest shear modulus and, therefore, decrease the permeability of the soil; this content can be considered the optimal amount. Also, increasing the effective confining pressure from 400 to 800 kPa increased the shear modulus of the samples by an average of 20 to 30 percent at small strains.

Keywords: cement, isotropic sands, static load, three-axis cycle, melting and freezing cycles

Procedia PDF Downloads 72
875 ScRNA-Seq RNA Sequencing-Based Program-Polygenic Risk Scores Associated with Pancreatic Cancer Risks in the UK Biobank Cohort

Authors: Yelin Zhao, Xinxiu Li, Martin Smelik, Oleg Sysoev, Firoj Mahmud, Dina Mansour Aly, Mikael Benson

Abstract:

Background: Early diagnosis of pancreatic cancer is clinically challenging due to vague or no symptoms and a lack of biomarkers. Polygenic risk scores (PRSs) may provide a valuable tool to assess increased or decreased risk of PC. This study aimed to develop such a PRS by filtering genetic variants identified by GWAS using transcriptional programs identified by single-cell RNA sequencing (scRNA-seq). Methods: ScRNA-seq data from 24 pancreatic ductal adenocarcinoma (PDAC) tumor samples and 11 normal pancreases were analyzed to identify differentially expressed genes (DEGs) in tumor and microenvironment cell types compared to healthy tissues. Pathway analysis showed that the DEGs were enriched for hundreds of significant pathways. These were clustered into 40 “programs” based on gene similarity, using the Jaccard index. Published genetic variants associated with PDAC were mapped to each program to generate program PRSs (pPRSs). These pPRSs, along with five previously published PRSs (PGS000083, PGS000725, PGS000663, PGS000159, and PGS002264), were evaluated in a European-origin population from the UK Biobank, consisting of 1,310 PDAC participants and 407,473 non-pancreatic cancer participants. Stepwise Cox regression analysis was performed to determine associations between pPRSs and the development of PC, with adjustment for sex and principal components of genetic ancestry. Results: The PDAC genetic variants were mapped to 23 programs and were used to generate pPRSs for these programs. Four distinct pPRSs (P1, P6, P11, and P16) and two published PRSs (PGS000663 and PGS002264) were significantly associated with an increased risk of developing PC. Among these, P6 exhibited the greatest hazard ratio (adjusted HR[95% CI] = 1.67[1.14-2.45], p = 0.008). In contrast, P10 and P4 were associated with a lower risk of developing PC (adjusted HR[95% CI] = 0.58[0.42-0.81], p = 0.001, and adjusted HR[95% CI] = 0.75[0.59-0.96], p = 0.019). By comparison, two of the five published PRSs exhibited an association with PDAC onset, with HRs (PGS000663: adjusted HR[95% CI] = 1.24[1.14-1.35], p < 0.001 and PGS002264: adjusted HR[95% CI] = 1.14[1.07-1.22], p < 0.001). Conclusion: Compared to published PRSs, scRNA-seq-based pPRSs may be used not only to assess increased but also decreased risk of PDAC.
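The sketch below illustrates the kind of Cox proportional hazards fit behind the adjusted hazard ratios reported above; the cohort file, column names, and covariate set are assumptions for illustration, not the study's data.

```python
# Sketch: Cox regression of pancreatic cancer onset on a program PRS, adjusted
# for sex and genetic principal components. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ukb_prs_cohort.csv")   # one row per participant
cols = ["follow_up_years", "pdac_event", "pPRS_P6", "sex", "PC1", "PC2", "PC3", "PC4"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="follow_up_years", event_col="pdac_event")
cph.print_summary()                       # hazard ratios are exp(coef) with 95% CI
print("adjusted HR for pPRS P6:", round(cph.hazard_ratios_["pPRS_P6"], 2))
```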

Keywords: cox regression, pancreatic cancer, polygenic risk score, scRNA-seq, UK biobank

Procedia PDF Downloads 96
874 21st Century Business Dynamics: Acting Local and Thinking Global through Extensive Business Reporting Language (XBRL)

Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro

Abstract:

In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements such as Annual Reports, Quarterly Reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, late filing of reports, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information – internally and externally – in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where many of its promised benefits are available. This paper looks at the emergence of this revolutionary twenty-first century language of digital reporting. It posits that today, the world is on the brink of an Internet revolution that will redefine the ‘business reporting’ paradigm. The new Internet technology, eXtensible Business Reporting Language (XBRL), is already being deployed and used across the world. It finds that XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent; it will work on any current or recent operating system, on any computer, and interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now via the adoption of XBRL, which is changing the face of worldwide business reporting.
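A highly simplified illustration of the "self-describing tag" idea is given below: once facts are tagged, a single item can be pulled out without reading the whole report. The element names and namespace are invented for illustration and do not follow any real XBRL taxonomy.

```python
# Simplified sketch of extracting one tagged fact from an XBRL-like XML document.
import xml.etree.ElementTree as ET

report = """
<xbrl xmlns:acme="http://example.com/taxonomy">
  <acme:Revenue contextRef="FY2023" unitRef="USD" decimals="0">125000000</acme:Revenue>
  <acme:NetIncome contextRef="FY2023" unitRef="USD" decimals="0">18000000</acme:NetIncome>
</xbrl>
"""

root = ET.fromstring(report)
ns = {"acme": "http://example.com/taxonomy"}
for fact in root.findall("acme:NetIncome", ns):   # pull only the desired fact
    print(fact.get("contextRef"), fact.get("unitRef"), fact.text)
```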

Keywords: XBRL, financial reporting, internet, internal and external reports

Procedia PDF Downloads 283
873 3D GIS Participatory Mapping and Conflict LADM: Comparative Analysis of Land Policies and Survey Procedures Applied by the Igorots, NCIP, and DENR to Itogon Ancestral Domain Boundaries

Authors: Deniz A. Apostol, Denyl A. Apostol, Oliver T. Macapinlac, George S. Katigbak

Abstract:

Ang lupa ay buhay at ang buhay ay lupa (land is life and life is land). Based on the 2015 census, the Indigenous Peoples (IPs) population in the Philippines is estimated to be 11.3-20.2 million. They hail from various regions and possess distinct cultures, but encounter shared struggles in territorial disputes. Itogon, the largest Benguet municipality, is home to the Ibaloi, Kankanaey, and other Igorot tribes. Despite having three (3) Ancestral Domains (ADs), Itogon is predominantly labeled as timberland or forest. These overlapping land classifications highlight the presence of inconsistencies in national laws and jurisdictions. This study aims to analyze the surveying procedures used by the Igorots, NCIP, and DENR in mapping the Itogon AD boundaries, show land boundary delineation conflicts, propose surveying guidelines, and recommend 3D Participatory Mapping as a geomatics solution for updated AD reference maps. Interpretative Phenomenological Analysis (IPA), Comparative Legal Analysis (CLA), and Map Overlay Analysis (MOA) were utilized to examine the interviews, compare land policies and surveying procedures, and identify differences and overlaps in conflicting land boundaries. In the IPA, the master themes identified were AD Definition (rights, responsibilities, restrictions), AD Overlaps (land classifications, political boundaries, ancestral domains, land laws/policies), and Other Conflicts (with other agencies, misinterpretations, suggestions), as considerations for mapping ADs. CLA focused on conflicting surveying procedures: AD Definitions, Surveying Equipment, Surveying Methods, Map Projections, Order of Accuracy, Monuments, Survey Parties, Pre-survey, Survey Proper, and Post-survey procedures. MOA emphasized the land area percentage of conflicting areas, showcasing the impact of misaligned surveying procedures. The findings are summarized through a Land Administration Domain Model (LADM) Conflict for AD versus AD and political boundaries. The products of this study are the identification of land conflict factors, survey guideline recommendations, and contested land area computations. These can serve as references for revising survey manuals, updating AD Sustainable Development and Protection Plans, and making amendments to laws.
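A small sketch of the map overlay analysis (MOA) step is shown below under assumed file and column names: ancestral domain boundaries are intersected with political boundaries and the conflicting area is summarized per domain.

```python
# Illustrative sketch only: overlay of ancestral domain and political boundary
# layers to compute overlapping areas. File names, column names, and the chosen
# projected CRS (UTM zone 51N) are assumptions.
import geopandas as gpd

ads = gpd.read_file("itogon_ancestral_domains.shp").to_crs(epsg=32651)
pol = gpd.read_file("political_boundaries.shp").to_crs(epsg=32651)

conflict = gpd.overlay(ads, pol, how="intersection")
conflict["overlap_ha"] = conflict.geometry.area / 10_000   # m^2 to hectares
print(conflict.groupby("ad_name")["overlap_ha"].sum())     # 'ad_name' is assumed
```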

Keywords: ancestral domain, gis, indigenous people, land policies, participatory mapping, surveying, survey procedures

Procedia PDF Downloads 87
872 Effects of Macroprudential Policies on Bank Lending and Risks

Authors: Stefanie Behncke

Abstract:

This paper analyses the effects of different macroprudential policy measures that have recently been implemented in Switzerland. Among them are the activation and the increase of the countercyclical capital buffer (CCB) and a tightening of loan-to-value (LTV) requirements. These measures were introduced to limit systemic risks in the Swiss mortgage and real estate markets. They were meant to affect mortgage growth, mortgage risks, and banks’ capital buffers. Evaluation of their quantitative effects provides insights for Swiss policymakers when reassessing their policy. It is also informative for policymakers in other countries who plan to introduce macroprudential instruments. We estimate the effects of the different macroprudential measures with a Differences-in-Differences estimator. Banks differ with respect to the relative importance of mortgages in their portfolio, their riskiness, and their capital buffers. Thus, some of the banks were more affected than others by the CCB, while others were more affected by the LTV requirements. Our analysis is made possible by an unusually informative bank panel data set. It combines data on newly issued mortgage loans and quantitative risk indicators such as LTV and loan-to-income (LTI) ratios with supervisory information on banks’ capital and liquidity situation and balance sheets. Our results suggest that the LTV cap of 90% was most effective. The proportion of new mortgages with a high LTV ratio was significantly reduced. This result applies not only to the 90% LTV cap but also to other threshold values (e.g. 80%, 75%), suggesting that the entire upper part of the LTV distribution was affected. Other outcomes, such as the LTI distribution and the growth rates of mortgages and other credits, however, were not significantly affected. Regarding the activation and the increase of the CCB, we do not find any significant effects: neither LTV/LTI risk parameters nor mortgage and other credit growth rates were significantly reduced. This result may reflect that the size of the CCB (1% of relevant residential real estate risk-weighted assets at activation, and 2% at the increase) was not high enough to trigger a distinct reaction between the banks most likely to be affected by the CCB and those serving as controls. Still, it might have been effective in increasing the resilience of the overall banking system. From a policy perspective, these results suggest that targeted macroprudential policy measures can contribute to financial stability. In line with findings by others, caps on LTV reduced risk-taking in Switzerland. To fully assess the effectiveness of the CCB, further experience is needed.
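An illustrative difference-in-differences regression, not the authors' exact specification, is sketched below: the coefficient on the interaction of an "affected" indicator with a "post" indicator estimates the effect of the LTV tightening. The panel file and variable names are assumptions.

```python
# Sketch: DiD with bank and quarter fixed effects and standard errors clustered by bank.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("bank_quarter_panel.csv")   # bank x quarter observations
model = smf.ols("high_ltv_share ~ affected * post + C(bank_id) + C(quarter)",
                data=panel).fit(cov_type="cluster",
                                cov_kwds={"groups": panel["bank_id"]})
print(model.params["affected:post"], model.pvalues["affected:post"])
```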

Keywords: banks, financial stability, macroprudential policy, mortgages

Procedia PDF Downloads 359
871 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediments (VS) samples with known source contributions using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the source of sediments in the catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
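A conceptual GLUE-style sketch of the approach described above is given below: candidate source contributions are sampled by Monte Carlo, "behavioural" samples whose mixing-model fit exceeds a threshold are retained, and the resulting estimate is scored against a virtual mixture with RMSE and MAE. The tracer means and the virtual-mixture contributions are placeholders, not the study's data.

```python
# Illustrative GLUE-style unmixing for three sub-basin sources and one virtual mixture.
import numpy as np

rng = np.random.default_rng(1)
source_means = np.array([[12.0, 3.1, 0.8],     # western sub-basin tracer means (placeholders)
                         [10.5, 4.0, 1.2],     # central
                         [ 8.9, 5.2, 2.0]])    # eastern
true_p = np.array([0.20, 0.12, 0.68])          # known contributions of one virtual mixture
mixture = true_p @ source_means                # tracer signature of that mixture

candidates = rng.dirichlet(np.ones(3), size=100_000)               # Monte Carlo proportions
preds = candidates @ source_means
fit = 1.0 - np.abs(preds - mixture).sum(axis=1) / mixture.sum()    # simple goodness-of-fit
behavioural = candidates[fit > 0.95]           # retain acceptable ("behavioural") samples
estimate = behavioural.mean(axis=0)            # GLUE estimate of source contributions

rmse = np.sqrt(np.mean((estimate - true_p) ** 2))
mae = np.mean(np.abs(estimate - true_p))
print(estimate.round(3), "RMSE:", rmse.round(3), "MAE:", mae.round(3))
```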

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 73
870 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear, even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology to obtain the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and understanding what the implications of the nonlinear one are. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
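A minimal numerical sketch of the idea (not the authors' MATLAB implementation) is shown below for a two-DOF spring-mass-damper system: the linear FRF is computed once, and at each frequency the response is corrected for a localized cubic spring at DOF 2 by a describing-function fixed-point update. All parameter values are made-up illustrative numbers.

```python
# Sketch: linear FRF of a 2-DOF system, then an "updating" step for a local cubic spring.
import numpy as np

m1 = m2 = 1.0                                  # masses [kg]
k1 = k2 = 1.0e4                                # linear stiffnesses [N/m]
c1 = c2 = 2.0                                  # viscous damping [Ns/m]
knl = 5.0e8                                    # cubic coefficient of the local spring [N/m^3]

M = np.array([[m1, 0.0], [0.0, m2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
F = np.array([1.0, 0.0])                       # unit harmonic force on DOF 1

peak_lin, peak_nl = 0.0, 0.0
for f in np.linspace(5.0, 40.0, 400):          # Hz
    w = 2.0 * np.pi * f
    Z = K + 1j * w * C - w**2 * M              # linear dynamic stiffness matrix
    x = np.linalg.solve(Z, F)                  # linear response at this frequency
    peak_lin = max(peak_lin, abs(x[1]))
    for _ in range(100):                       # fixed-point update for the local nonlinearity
        keq = 0.75 * knl * abs(x[1])**2        # describing-function stiffness of a cubic spring
        Znl = Z.copy()
        Znl[1, 1] += keq                       # modification acts only on the nonlinear DOF
        x_new = np.linalg.solve(Znl, F)
        if abs(x_new[1] - x[1]) < 1e-12:
            x = x_new
            break
        x = 0.5 * (x + x_new)                  # relaxation to help convergence
    peak_nl = max(peak_nl, abs(x[1]))

print("peak |X2| linear:", peak_lin, " with local nonlinearity:", peak_nl)
```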

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 264
869 Therapeutic Role of T Cell Subpopulations (CD4, CD8, and Treg (CD25 and FOXP3+) Cells) of UC MSC Isolated by Three Different Methods in Various Diseases

Authors: Kumari Rekha, Mathur K Dhananjay, Maheshwari Deepanshu, Nautiyal Nidhi, Shubham Smriti, Laal Deepika, Sinha Swati, Kumar Anupam, Biswas Subhrajit, Shiv Kumar Sarin

Abstract:

Background: Mesenchymal stem cells (MSCs) are multipotent stem cells derived from mesoderm and are used for therapeutic purposes because of their self-renewal, homing capacity, immunomodulatory capability, low immunogenicity and mitochondrial transfer signaling. MSCs have the ability to regulate the mechanisms of both innate and adaptive immune responses through the modulation of cellular responses and the secretion of inflammatory mediators. Different sources of MSCs are UC MSC, BM MSC, dental pulp, and adipose MSC. The most frequently used source is umbilical cord tissue, due to it being easily available and free of the limitations of collection procedures in the respective hospitals. The immunosuppressive role of MSCs is particularly interesting for clinical use since it confers resistance to rejection by the host immune response. Methodology: In this study, T helper cells (CD4), cytotoxic T cells (CD8), and immunoregulatory cells (CD25+ FOXP3+) are compared for MSCs isolated by three different methods: the UC Dissociation Kit (Miltenyi), explant culture, and collagenase type IV. To check the immunomodulatory property, these MSCs were seeded with PBMCs (coculture) in CD3-coated 24-well plates. CD28 antibody was added to the coculture for six days. The coculture was analyzed in a FACS Verse flow cytometer. Results: Flow cytometry analysis of the coculture of activated T cells and mesenchymal stem cells showed that the overall T helper cell (CD4+) number increased (p < 0.0264) with the dissociation kit (all enzymes) MSCs rather than with explant MSCs (p > 0.0895) or collagenase MSCs (p > 0.7889). Similarly, T reg cell (CD25+, FOXP3+) expression increased (p < 0.0234) with the all-enzymes MSCs and decreased with the explant and collagenase MSCs. Experiments have shown that MSCs can also directly prevent the cytotoxic activity of CD8 lymphocytes, mainly by blocking their proliferation rather than by inhibiting the cytotoxic effect, and by promoting the T reg cells, which helps in the mediation of the immune response in various diseases. Conclusion: MSCs suppress cytotoxic CD8 T cells and enhance immunoregulatory T reg (CD4+, CD25+, FOXP3+) cell expression. Thus, MSCs maintain a proper balance (ratio) between CD4 T cells and cytotoxic CD8 T cells.

Keywords: MSC, disease, T cell, T regulatory

Procedia PDF Downloads 110
868 An Assessment of the Performance of Local Government in Ondo State Nigeria: A Capital Budgeting Approach

Authors: Olurankinse Felix

Abstract:

Local governments in Ondo State, Nigeria are the third tier of government, saddled with the responsibility of providing governance and economic services at the grassroots. To be able to do this, the Constitution of the Federal Republic of Nigeria provides that a proportion of the Federation Account be allocated to them in addition to their internally generated revenue. From the allocation and other incidental sources of revenue, the local governments are expected to provide basic infrastructure and other social amenities to better the lot of the rural dwellers. Nevertheless, local governments' performance in terms of provision of social amenities is, without question, not encouraging. Assessing the performance of local governments in this period of dearth and scarcity of resources is highly indispensable, more so as the activities of local government staff are bedeviled and characterized by fraud, corruption and mismanagement. Considering the direct impact of the consequences of their actions on the living standard of the rural dwellers, there is therefore a need to evaluate their level of performance using a capital budgeting approach. The paper, being a time series study, adopts the survey design. Data were obtained through secondary sources, mainly from the annual financial statements and publications of approved budget estimates covering the period of study (2008-2012). Ratio analysis was employed in analyzing the comparative level of performance of the local governments under study. The result of the study shows that less than 30% of the local governments were able to harness the budgetary allocation to provide amenities to the beneficiaries, while the majority of the local governments were involved in unethical conduct ranging from theft of funds, corruption and diversion of funds to extra-budgetary activities. Also, there is poor internally generated revenue to complement the statutory allocation, and besides, the monthly withholding of larger portions of the local government share by the state in the name of a joint account was also seen as a contributory factor. The study recommends the need for transparency and accountability in public fund management through the oversight function of the state house of assembly. Also, local governments should be made autonomous and independent of the state by jettisoning the idea of the joint account.

Keywords: performance, transparency and accountability, capital budgeting, joint account, local government autonomy

Procedia PDF Downloads 328
867 Production and Evaluation of Physicochemical, Nutritional, Sensorial and Microbiological Properties of Mixed Fruit Juice Blend Prepared from Apple, Orange and Mosambi

Authors: Himalaya Patir, Bitupon Baruah, Sanjay Gayary, Subhajit Ray

Abstract:

In recent times, significant importance has been given to the development of nutritious and health-beneficial foods. Blending fruit juices collected from different fruits improves not only the physicochemical and nutritional properties but also the sensorial or organoleptic properties. The study was carried out to determine the physico-chemical, nutritional and microbiological properties and the sensory evaluation of mixed fruit juice blends. Juices of orange (Citrus sinensis), apple (Malus domestica) and mosambi (Citrus limetta) were blended in the ratios sample-I (30% apple: 30% orange: 40% mosambi), sample-II (40% apple: 30% orange: 30% mosambi), sample-III (30% apple: 40% orange: 30% mosambi), sample-IV (50% apple: 30% orange: 20% mosambi), sample-V (30% apple: 20% orange: 50% mosambi) and sample-VI (20% apple: 50% orange: 30% mosambi) to evaluate all quality characteristics. Their colour characteristics in terms of hue angle, chroma and colour difference (∆E) were evaluated. The physico-chemical parameters analyzed were total soluble solids (TSS), total titratable acidity (TTA), acidity (FA), volatile acidity (VA), pH, and vitamin C. There were significant differences (p < 0.05) in the TSS of the samples. Sample-V (30% apple: 20% orange: 50% mosambi) had the highest TSS of 9.02 gm and differed significantly from the other samples (p < 0.05). Sample-IV (50% apple: 30% orange: 20% mosambi) showed the highest titratable acidity (0.59%) in comparison to the other samples. The highest pH value, 5.01, was found for sample-IV (50% apple: 30% orange: 20% mosambi). The sample-VI (20% apple: 50% orange: 30% mosambi) blend had the highest hue angle, chroma and colour change of 72.14, 25.29 and 54.48, respectively, and the highest vitamin C, i.e. ascorbic acid (0.33 g/l), content compared to the other samples. The study of the nutritional composition showed that sample-VI (20% apple: 50% orange: 30% mosambi) had significantly higher carbohydrate (51.67%), protein (0.78%) and ash (1.24%) contents than the other samples, while sample-V (30% apple: 20% orange: 50% mosambi) had higher dietary fibre (12.84%) and fat (2.82%) contents. Microbiological analysis of all samples in terms of total plate count (TPC), ranging from 44-60 at the 10¹ dilution and 4-5 at the 10⁷ dilution, was found satisfactory. Moreover, the count of other pathogenic bacteria was found to be nil. The mixed fruit juice blend samples were moderately liked by the panellists in terms of general acceptability, and the sensorial quality studies showed that sample-V (30% apple: 20% orange: 50% mosambi) had the highest overall acceptability of 8.37 over the other samples and can be considered good for consumption.
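A worked example of the colour descriptors reported above (hue angle, chroma, and colour difference ∆E) computed from CIELAB coordinates is given below; the L*, a*, b* readings are made-up illustrative values, not measurements from this study.

```python
# Sketch: hue angle, chroma (C*), and CIE76 colour difference from CIELAB values.
import math

L1, a1, b1 = 45.0, 8.0, 24.0        # juice blend reading (hypothetical)
L0, a0, b0 = 92.0, -1.0, 3.0        # reference reading, e.g. a white tile (hypothetical)

hue_angle = math.degrees(math.atan2(b1, a1))                       # h = arctan(b*/a*)
chroma = math.hypot(a1, b1)                                        # C* = sqrt(a*^2 + b*^2)
delta_e = math.sqrt((L1 - L0)**2 + (a1 - a0)**2 + (b1 - b0)**2)    # CIE76 colour difference
print(round(hue_angle, 2), round(chroma, 2), round(delta_e, 2))
```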

Keywords: microbiological, nutritional, physico-chemical, sensory properties

Procedia PDF Downloads 173
866 An Evaluation Study of Sleep and Sleep-Related Factors in Clinic Clients with Sleep Difficulties

Authors: Chi-Feng Lai, Wen-Chun Liao

Abstract:

Many people are bothered by sleep difficulties in Taiwan’s society. However, the majority of patients receive medical treatment without a comprehensive sleep assessment. It is still a big challenge to formulate a comprehensive assessment of sleep difficulties in clinical settings, even though many assessment tools exist in the literature. This study tries to implement reliable and effective ‘comprehensive sleep assessment scales’ in a medical center and to explore differences in sleep-related factors between clinic clients with and without sleep difficulty complaints. The comprehensive sleep assessment (CSA) scales were composed of 5 dimensions: ‘personal factors’, ‘physiological factors’, ‘psychological factors’, ‘social factors’ and ‘environmental factors’, and were first evaluated by expert validity and by test-retest reliability with 20 participants. The Content Validity Index (CVI) of the CSA was 0.94, and the alpha of the consistency reliability ranged from 0.996 to 1.000. Clients who visited the sleep clinic due to sleep difficulties (n=32, 16 males and 16 females, ages 43.66 ± 14.214) and gender- and age-matched healthy subjects without sleep difficulties (n=96, 47 males and 49 females, ages 41.99 ± 13.69) were randomly recruited at a ratio of 1:3 (with sleep difficulties vs. without sleep difficulties) to compare their sleep and the CSA factors. Results show that all clinic clients with sleep difficulties did have poor sleep quality (PSQI > 5) and mild to moderate daytime sleepiness (ESS > 11). Personal factors of long working hours (χ2 = 10.315, p = 0.001), shift work (χ2 = 8.964, p = 0.003), night shifts (χ2 = 9.395, p = 0.004) and perceived stress (χ2 = 9.503, p = 0.002) were associated with sleep difficulties. Physiological factors from physical examination, including mouth breathing, low soft palate, high narrow palate, Edward Angle, tongue hypertrophy, and occlusion of the worn surface, were observed in clinic clients. Psychological factors including higher perceived stress (χ2 = 32.542, p = 0.000) and anxiety and depression (χ2 = 32.868, p = 0.000); social factors including lack of leisure activities (χ2 = 39.857, p = 0.000), more drinking habits (χ2 = 1.798, p = 0.018), irregular amount and frequency of meals (χ2 = 5.086, p = 0.024), excessive dinner (χ2 = 21.511, p = 0.000), and being incapable of getting up on time due to poor sleep the previous night (χ2 = 4.444, p = 0.035); and environmental factors including light (χ2 = 7.683, p = 0.006), noise (χ2 = 5.086, p = 0.024), and low or high bedroom temperature (χ2 = 4.595, p = 0.032) were present in clients. In conclusion, the CSA scales can work as valid and reliable instruments for evaluating sleep-related factors. The findings of this study provide an important reference for assessing clinic clients with sleep difficulties.
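A sketch of the association test behind the chi-square statistics reported above is given below; the 2x2 contingency counts are invented for illustration, not the study's data.

```python
# Sketch: chi-square test of independence between sleep-difficulty status and one factor.
from scipy.stats import chi2_contingency

#                factor present, factor absent
table = [[24,  8],    # clients with sleep difficulties
         [35, 61]]    # subjects without sleep difficulties
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```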

Keywords: comprehensive sleep assessment, sleep-related factors, sleep difficulties

Procedia PDF Downloads 270
865 Placement of Inflow Control Valve for Horizontal Oil Well

Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj

Abstract:

Drilling a horizontal well is one of the most cost-effective methods to exploit a reservoir by increasing the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often co-utilized to increase petroleum production by monitoring/controlling downhole production. The combination of both technologies results in an opportunity to lower the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel location enters the wellbore at a higher rate compared to the toe location. As a consequence, the Oil-Water Contact (OWC) at the heel side moves upward relatively faster compared to the toe side. This causes the well to encounter an early water encroachment problem. Installation of Inflow Control Valves (ICVs) in particular sections of a horizontal well can involve several parameters such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study is mainly focused on optimization of the ICV configuration to minimize water production and, at the same time, to enhance oil production. A reservoir model consisting of a high aspect ratio of oil-bearing zone to underlying aquifer is drilled with a horizontal well and completed with variations of ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the Original Oil In Place (OOIP) and can reduce produced water, depending on ICV segment length as well as ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery. However, a number of segments exceeding 10 may not give a significant additional recovery. In the first production period, deformation of the OWC strongly depends on the number of segments along the well. A higher number of segments results in smoother deformation of the OWC. After water breakthrough at the heel location segment, the second production period begins. Deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, and a strong aquifer, the second production period may give a wide enough window for the ICV parameters to take effect.

Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation

Procedia PDF Downloads 412
864 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle due to their chemically cross-linked polymer; therefore, they are neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from pyrolysis of waste tires is that it has a high sulfur content (> 1.0 wt.%) and therefore emits harmful sulfur oxide (SOx) gases into the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to the increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species in liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species in TPO. This study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure and time will be determined for vacuum distillation of both the raw TPO and the acid-treated oil for comparison purposes. Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent, and finally acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will be used to adsorb traces of sulfurous compounds which remain after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 270
863 Groundwater Flow Dynamics in Shallow Coastal Plain Sands Aquifer, Abesan Area, Eastern Dahomey Basin, Southwestern Nigeria

Authors: Anne Joseph, Yinusa Asiwaju-Bello, Oluwaseun Olabode

Abstract:

Sustainable administration of the groundwater resources tapped in the Coastal Plain Sands aquifer in the Abesan area, Eastern Dahomey Basin, Southwestern Nigeria necessitates knowledge of the pattern of groundwater flow in meeting a suitable environmental need for habitation. Thirty hand-dug wells were identified and evaluated to study the groundwater flow dynamics and anionic species distribution in the study area. The topography and water table levels method, with the aid of Surfer, was adopted in the identification of recharge and discharge zones, and six recharge and discharge zones were delineated correspondingly. Dissolved anionic species of HCO3-, Cl-, SO42- and NO3- were determined using titrimetric and spectrophotometric methods. The trend of significant anionic concentrations of the groundwater samples is in the order Cl- > HCO3- > SO42- > NO3-. The prominent anions in the discharge and recharge areas are Cl- and HCO3-, ranging from 0.22 ppm to 3.67 ppm and 2.59 ppm to 0.72 ppm, respectively. Analysis of the groundwater head distribution and the groundwater flow vectors in the Abesan area confirmed that the Cl- concentration is higher than the HCO3- concentration in the recharge zones. Conversely, there is a higher concentration of HCO3- than Cl- inland towards the continent; therefore, the HCO3- concentration in the discharge zones is higher than the Cl- concentration. The anions were found to be closely related to the recharge and discharge areas, which was confirmed by comparison with activities such as the rainfall regime and anthropogenic activities in the Abesan area. A large percentage of the samples showed that HCO3-, Cl-, SO42- and NO3- fall within the permissible limits of the W.H.O standard. Most of the samples revealed a Cl- / (CO3- + HCO3-) ratio higher than 0.5, indicating that there are saltwater intrusion imprints in the groundwater of the study area. The Gibbs plot showed that most of the samples are from rock dominance, some from evaporation dominance and a few from precipitation dominance. Potential salinity and SO42- / Cl- ratios signify that most of the groundwater in Abesan is saline and falls in a water class found to be unsuitable for irrigation. Continuous dissolution of these anionic species may pose a significant threat to the inhabitants of the Abesan area in the near future.
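A small sketch of the saltwater-intrusion screening ratio mentioned above is shown below; the anion concentrations are made-up illustrative values, with the 0.5 threshold taken from the text.

```python
# Sketch: Cl- / (CO3- + HCO3-) ratio as a saltwater intrusion screening indicator.
samples = {"W1": {"Cl": 3.1, "CO3": 0.0, "HCO3": 2.4},   # hypothetical well data
           "W2": {"Cl": 0.6, "CO3": 0.1, "HCO3": 2.8}}
for well, c in samples.items():
    ratio = c["Cl"] / (c["CO3"] + c["HCO3"])
    print(well, round(ratio, 2), "saline imprint" if ratio > 0.5 else "fresh")
```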

Keywords: Abesan, anionic species, discharge, groundwater flow, recharge

Procedia PDF Downloads 119
862 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey

Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur

Abstract:

Interest in lead (Pb) has increased considerably in recent years due to knowledge about the potential toxic effects of this element. Exposure to heavy metals above the acceptable limit affects human health. Indeed, Pb accumulates through food chains up to toxic concentrations; therefore, it can pose an adverse potential threat to human health. A sensitive and reliable method for the determination of Pb in red pepper was developed in the present study. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, Limit of Detection, Limit of Quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy for the analysis. According to the results of the red pepper analysis, lead was determined at various concentrations in all of the tested samples. A Perkin-Elmer ELAN DRC-e model ICP-MS system was used for the detection of Pb. Organic red pepper was used to obtain a matrix for all method validation studies. The certified reference material, Fapas chili powder, was digested and analyzed together with the different sample batches. Three replicates from each sample were digested and analyzed. The exposure levels of the element were discussed considering the scientific opinions of the European Food Safety Authority (EFSA), which is the European Union’s (EU) risk assessment source associated with food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of potential health risks associated with long-term exposure to chemical pollutants. The THQ calculation incorporates the intake of the element, exposure frequency and duration, body weight and the oral reference dose (RfD). If the THQ value is lower than one, the exposed population is assumed to be safe, while 1 < THQ < 5 means that the exposed population is in a level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The results of the THQ calculations showed that the values were below one for all the tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number: FBA-2017-2494.
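An illustrative calculation in one commonly cited form of the USEPA target hazard quotient is sketched below; the formula arrangement and all input values (including the assumed RfD for Pb) are placeholders for illustration, not results from this study.

```python
# Sketch: THQ = (EF * ED * FIR * C) / (RfD * BW * TA) * 1e-3, with placeholder inputs.
EF = 365          # exposure frequency [days/year]
ED = 70           # exposure duration [years]
FIR = 5           # food ingestion rate [g/day]
C = 0.12          # Pb concentration in red pepper [mg/kg] (hypothetical)
RfD = 0.004       # oral reference dose for Pb [mg/kg/day] (assumed value)
BW = 70           # body weight [kg]
TA = 365 * ED     # averaging time for non-carcinogens [days]

THQ = (EF * ED * FIR * C) / (RfD * BW * TA) * 1e-3
print(round(THQ, 4), "assumed safe" if THQ < 1 else "level of concern")
```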

Keywords: lead analyses, red pepper, risk assessment, daily exposure

Procedia PDF Downloads 165
861 Factors Associated with Death during Tuberculosis Treatment of Patients Co-Infected with HIV at a Tertiary Care Setting in Cameroon: An 8-Year Hospital-Based Retrospective Cohort Study (2006-2013)

Authors: A. A. Agbor, Jean Joel R. Bigna, Serges Clotaire Billong, Mathurin Cyrille Tejiokem, Gabriel L. Ekali, Claudia S. Plottel, Jean Jacques N. Noubiap, Hortence Abessolo, Roselyne Toby, Sinata Koulla-Shiro

Abstract:

Background: Contributors to fatal outcomes in patients undergoing tuberculosis (TB) treatment in the setting of HIV co-infection are poorly characterized, especially in sub-Saharan Africa. Our study’s aim was to assess factors associated with death in TB/HIV co-infected patients during the first 6 months of their TB treatment. Methods: We conducted a tertiary-care hospital-based retrospective cohort study from January 2006 to December 2013 at the Yaoundé Central Hospital, Cameroon. We reviewed medical records to identify hospitalized co-infected TB/HIV patients aged 15 years and older. Death was defined as any death occurring during TB treatment, as per the World Health Organization’s recommendations. Logistic regression analysis identified factors associated with death. Magnitudes of associations were expressed by adjusted odds ratios (aOR) with 95% confidence intervals. A p value < 0.05 was considered statistically significant. Results: The 337 patients enrolled had a mean age of 39.3 (+/- 10.3) years, and most (54.3%) were women. TB treatment outcomes included: treatment success in 60.8% (n=205), death in 29.4% (n=99), not evaluated in 5.3% (n=18), loss to follow-up in 5.3% (n=14), and failure in 0.3% (n=1). After exclusion of patients lost to follow-up and not evaluated, death in TB/HIV co-infected patients during TB treatment was associated with: a TB diagnosis made before national implementation of guidelines regarding initiation of antiretroviral therapy (aOR = 2.50 [1.31-4.78]; p = 0.006), the presence of other AIDS-defining infections (aOR = 2.73 [1.27-5.86]; p = 0.010), non-AIDS comorbidities (aOR = 3.35 [1.37-8.21]; p = 0.008), not receiving co-trimoxazole prophylaxis (aOR = 3.61 [1.71-7.63]; p = 0.001), not receiving antiretroviral therapy (aOR = 2.45 [1.18-5.08]; p = 0.016), and CD4 cell counts < 50 cells/mm3 (aOR = 16.43 [1.05-258.04]; p = 0.047). Conclusions: The success rate of anti-tuberculosis treatment among hospitalized TB/HIV co-infected patients in our setting is low. Mortality in the first 6 months of treatment was high and strongly associated with specific clinical factors, including states of greater immunosuppression, highlighting the urgent need for targeted interventions, including provision of anti-retroviral therapy and co-trimoxazole prophylaxis, in order to enhance patient outcomes.
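The sketch below shows how adjusted odds ratios of the kind reported above are typically obtained from a multivariable logistic regression, with aOR = exp(coefficient); the cohort file and variable names are placeholders, not the study's dataset.

```python
# Sketch: multivariable logistic regression and adjusted odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tb_hiv_cohort.csv")            # one row per patient, death = 0/1
model = smf.logit("death ~ no_cotrimoxazole + no_art + other_aids_infection + "
                  "non_aids_comorbidity + cd4_below_50 + pre_guideline_diagnosis",
                  data=df).fit()
aor = np.exp(model.params)                       # adjusted odds ratios
ci = np.exp(model.conf_int())                    # 95% confidence intervals
print(pd.concat([aor.rename("aOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```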

Keywords: TB/HIV co-infection, death, treatment outcomes, factors

Procedia PDF Downloads 442
860 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed internal combustion Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output equal to 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. First, in order to calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] with respect to the charge were calculated as the mean over each element. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
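A conceptual post-processing sketch of a mass-weighted swirl ratio (not the ANSYS FLUENT workflow itself) is given below: the charge angular velocity is estimated from exported cell data and divided by the crankshaft angular velocity. All arrays and the engine speed used here are placeholders.

```python
# Sketch: swirl ratio from per-cell data as angular momentum / moment of inertia,
# normalized by the crankshaft angular velocity.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
mass = rng.uniform(1e-8, 2e-8, n)        # cell mass [kg] (placeholder)
r = rng.uniform(1e-3, 4e-2, n)           # cell radius from the cylinder axis [m]
v_t = rng.normal(5.0, 2.0, n)            # tangential velocity of each cell [m/s]

omega_charge = np.sum(mass * v_t * r) / np.sum(mass * r**2)   # charge angular velocity [rad/s]
omega_engine = 2 * np.pi * 3900 / 60                          # crankshaft speed at 3900 rpm
print("swirl ratio [-]:", round(omega_charge / omega_engine, 3))
```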

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 196
859 Experimenting with Clay 3D Printing Technology to Create an Undulating Facade

Authors: Naeimehsadat Hosseininam, Rui Wang, Dishita Shah

Abstract:

In recent years, new experimental approaches with the help of new technology have bridged the gaps between the application of natural materials and the creation of unconventional forms. Clay has been one of the oldest building materials in all ancient civilizations. The availability and workability of clay have contributed to the widespread application of this material around the world. The aim of this experimental research is to apply clay 3D printing technology to create a load-bearing, visually dynamic and undulating façade. The creation of different unique pieces is the most significant goal of this research, which justifies the application of 3D printing technology instead of conventional mass industrial production. This study provides an abbreviated overview of similar cases which have used clay 3D printing to generate corresponding prototypes. The study of these cases also helps in understanding the potential and flexibility of the material and the 3D printing machine in developing different forms. In the next step, experimental research was carried out by 3D printing six different options, which were designed considering the properties of clay as well as the methodology of 3D printing them. Here, the ratio of water to clay (W/C) has a significant role in the consistency of the material and the workability of the clay. Also, the size of the selected nozzle impacts the shape and the smoothness of the final surface. Moreover, the results of these experiments show the limitations of clay with regard to forming various slopes. The most notable consequence of having steep slopes in the prototype is an unpredicted collapse, which is the result of internal tension in the material. From the six initial design ideas, the final prototype was selected with the aim of creating a self-supported component with unique blocks that provides the possibility of installing an insulation system within the component. Apart from being an undulating façade, the presented prototype has the potential to be used as a fence and as an interior partition (double-sided). The central shaft also provides a space to run services or insulation in different parts of the wall. In parallel with presenting the capability and potential of clay 3D printing technology, this study illustrates the limitations of this system in certain areas. There are inevitable parameters, such as printing speed, temperature and drying speed, that need to be considered while printing each piece. Clay 3D printing technology provides the opportunity to create variations and design parametric building components with the application of one of the most practiced materials in the world.

Keywords: clay 3D printing, material capability, undulating facade, load bearing facade

Procedia PDF Downloads 138
858 Factors Associated with Involvement in Physical Activity among Children (Aged 6-18 Years) Training at Excel Soccer Academy in Uganda

Authors: Syrus Zimaze, George Nsimbe, Valley Mugwanya, Matiya Lule, Edgar Watson, Patrick Gwayambadde

Abstract:

Physical inactivity is a growing global epidemic, also recognised as a major public health challenge. Globally, there are alarming rates of children reported with cardiovascular disease and obesity, with limited interventions. In Sub-Saharan Africa, there is limited information about involvement in physical activity, especially among children aged 6 to 18 years. The aim of this study was to explore factors associated with involvement in physical activity among children in Uganda. Methods: We included all parents with children aged 6 to 18 years training with the Excel Soccer Academy between January 2017 and June 2018. Physical activity was defined as time spent participating in routine soccer training at the academy for more than 30 days. Each child's attendance was recorded, and parents provided demographic and socioeconomic data. Data on predictors of physical activity involvement were collected using a standardized questionnaire. Descriptive statistics and frequencies were used. Binary logistic regression was used at the multivariable level, adjusting for education, residence, transport means and access to information technology. Results: Overall, 356 parents were interviewed; boys (318, 89.3%) engaged in physical activity more than girls. The median age was 13 years (IQR: 6-18) among children and 42 years (IQR: 37-49) among parents. The median time spent at the Excel Soccer Academy was 13.4 months (IQR: 4.6-35.7). The majority of the children attended formal education (p < 0.001). Factors associated with involvement in physical activity included: owning a permanent house compared to a rented house (odds ratio [OR]: 2.84, 95% CI: 2.09-3.86, p < 0.0001), owning a car compared to using public transport (OR: 5.64, CI: 4.80-6.63, p < 0.0001), a parent having received formal education compared to non-formal education (OR: 2.93, CI: 2.47-3.46, p < 0.0001) and daily access to information technology (OR: 0.40, CI: 0.25-0.66, p < 0.001). Parents' age and gender were not associated with involvement in physical activity. Conclusions: Socioeconomic factors were positively associated with involvement in physical activity, with boys participating more than girls in soccer activities. More interventions are required geared towards increasing girls' participation in physical activity and targeting children from less privileged homes.

Keywords: physical activity, Sub-Saharan Africa, social economic factors, children

Procedia PDF Downloads 161
857 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data

Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding

Abstract:

The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets to meet functional performance alone is neither economical nor sustainable in the long term, and it would end up requiring much greater investment from road agencies and extra costs for road users. Performance models have to include both structural and functional predictive capabilities in order to assess the needs, and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and also has a major influence on the valuation of road pavements. Therefore, the structural deterioration model is a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this study utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength, and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age, and equivalent standard axles (ESA). This study developed a simple structural deterioration model that enables available TSD structural data to be incorporated into a pavement management system for developing network-level pavement investment strategies. Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provides a simple methodology for using structural data effectively in the investment decision-making process for road agencies managing aging road assets.
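
The regression step described above can be sketched as follows. This is an assumed model form rather than the authors' published equation, and the column names in the DataFrame (d0, rutting, pavement_age, seal_age, esa) are hypothetical.

```python
# Minimal sketch of the regression step: relate TSD maximum deflection (D0)
# to rutting, pavement age, seal age and cumulative ESA. Assumed model form;
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_deterioration_model(df: pd.DataFrame):
    # Log-transform cumulative ESA, which typically spans several orders of magnitude.
    df = df.assign(log_esa=np.log10(df["esa"]))
    model = smf.ols("d0 ~ rutting + pavement_age + seal_age + log_esa", data=df).fit()
    return model  # model.params holds the coefficients of the correlation equation
```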

Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)

Procedia PDF Downloads 146
856 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards on sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulties of explaining the models for regulatory purposes.
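
The bin-level WoE idea can be illustrated with the sketch below: the classical WoE computed from good/bad counts, and an ML-matched alternative that derives an implied WoE from the mean predicted default probability in each bin. This is one interpretation for illustration only, not Dun and Bradstreet's implementation; all names are hypothetical.

```python
# Minimal sketch of Weight of Evidence (WoE) per bin, in two flavours:
# (1) classical counts-based WoE, (2) a WoE implied by an ML model's scores.
import numpy as np
import pandas as pd

def classical_woe(y: pd.Series, bins: pd.Series) -> pd.Series:
    """WoE_i = ln( share of goods in bin i / share of bads in bin i ); y: 0 = good, 1 = bad."""
    tab = pd.crosstab(bins, y)               # rows: bins, columns: 0 (good) and 1 (bad)
    dist_good = tab[0] / tab[0].sum()
    dist_bad = tab[1] / tab[1].sum()
    return np.log(dist_good / dist_bad)

def ml_matched_woe(scores: pd.Series, bins: pd.Series, base_odds: float) -> pd.Series:
    """Estimate a WoE per bin from an ML model's predicted bad probabilities."""
    p_bad = scores.groupby(bins).mean()       # mean predicted probability of bad per bin
    odds = (1.0 - p_bad) / p_bad               # good:bad odds implied by the model
    return np.log(odds / base_odds)            # offset by the portfolio's base odds
```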

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 130
855 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and only insignificant differences, of less than three seconds of arc, were found among orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
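
For the DEM comparison step, a minimal sketch of differencing two co-registered elevation grids is shown below. It is illustrative only; dem_sfm and dem_photo are assumed to be equally sized numpy arrays on the same grid, an assumption not detailed in the abstract.

```python
# Minimal sketch: difference two co-registered DEM grids and summarize
# the vertical shift between them.
import numpy as np

def dem_difference_stats(dem_sfm: np.ndarray, dem_photo: np.ndarray) -> dict:
    diff = dem_sfm - dem_photo                        # per-cell vertical difference
    return {
        "mean_shift": float(np.nanmean(diff)),        # systematic vertical offset
        "rmse": float(np.sqrt(np.nanmean(diff ** 2))),
        "max_abs": float(np.nanmax(np.abs(diff))),
    }
```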

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 288
854 Determinants of Household Food Security in Addis Ababa City Administration

Authors: Estibe Dagne Mekonnen

Abstract:

In recent years, the prevalence of undernourishment was 30 percent for Sub-Saharan Africa, compared with 16 percent for Asia and the Pacific (Ali, 2011). In Ethiopia, almost 40 percent of the total population of the country, and 57 percent of the population of Addis Ababa, lives below the international poverty line of US$ 1.25 per day (UNICEF, 2009). This study aims to analyze the determinants of household food security in Addis Ababa City Administration. Primary data were collected in 2022 from a survey of 256 households in the selected sub-cities, namely Addis Ketema, Arada, and Kolfe Keranio. Both purposive and multi-stage cluster random sampling procedures were employed to select study areas and respondents. Descriptive statistics and an ordered logistic regression model were used to test the formulated hypotheses. The results reveal that, of the total sampled households, 25% were food secure, 13% were mildly food insecure, 26% were moderately food insecure, and 36% were severely food insecure. The study indicates that household family size, house ownership, household income, household food source, household asset possession, household awareness of inflation, household access to social protection programs, household access to credit and saving, and household access to training and supervision on food security have a positive and significant effect on the likelihood of household food security. However, the marital status of the household head, the employment sector of the household head, the dependency ratio, and the household's non-food expenditure have a negative and significant influence on household food security status. The study finally suggests that the government, in collaboration with financial institutions and NGOs, should work on sustaining household food security by creating awareness, providing credit, facilitating rural-urban linkages between producers and consumers, and improving urban infrastructure. Moreover, the government should also work closely with and monitor consumer goods suppliers and, if possible, find a way to subsidize consumable goods for more food-insecure households to make them food secure. Last but not least, keeping the country's peace will play a crucial role in sustaining food security.
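
A minimal sketch of the ordered logit step is shown below, assuming a four-level food-security outcome and hypothetical predictor names; it is not the authors' code.

```python
# Minimal sketch: ordered logit for a four-level food-security status,
# returning odds ratios for the predictors. Variable names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_food_security_model(df: pd.DataFrame) -> pd.Series:
    # Outcome ordered from worst to best status.
    cat_type = pd.CategoricalDtype(
        categories=["severe", "moderate", "mild", "secure"], ordered=True
    )
    y = df["food_security"].astype(cat_type)
    X = df[["family_size", "income", "dependency_ratio", "credit_access"]]
    model = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
    return np.exp(model.params[X.columns])    # odds ratios for the predictors
```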

Keywords: determinants, household, food security, ordered logit model, Addis Ababa

Procedia PDF Downloads 68
853 Calcitriol Improves Plasma Lipoprotein Profile by Decreasing Plasma Total Cholesterol and Triglyceride in Hypercholesterolemic Golden Syrian Hamsters

Authors: Xiaobo Wang, Zhen-Yu Chen

Abstract:

Higher plasma total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) are independent risk factors for cardiovascular disease, while high-density lipoprotein cholesterol (HDL-C) is protective. Vitamin D is well known for its regulatory role in calcium homeostasis, and its potentially important role in cardiovascular disease has recently attracted much attention. This study was conducted to investigate the effects of different dosages of calcitriol on the plasma lipoprotein profile and the underlying mechanism. Sixty male Golden Syrian hamsters were randomly divided into 6 groups: a no-cholesterol control (NCD), a high-cholesterol control (HCD), and groups with calcitriol supplementation at 10, 20, 40, and 80 ng/kg body weight (CA, CB, CC, and CD, respectively). Calcitriol in medium-chain triacylglycerol (MCT) oil was delivered to the four experimental groups via oral gavage every other day, while NCD and HCD received an equivalent amount of MCT oil. NCD hamsters were fed a non-cholesterol diet while the other five groups were maintained on a diet containing 0.2% cholesterol to induce a hypercholesterolemic condition. The treatment lasted 6 weeks, followed by sample collection after the hamsters were sacrificed. The four experimental groups experienced a reduction in average food intake of around 11% compared to HCD, with a slight decrease in body weight (not exceeding 10%). This reduction was reflected in the decreased relative weights of the testis and of epididymal and perirenal adipose tissue in a dose-dependent manner. Plasma calcitriol levels were measured and corresponded to the oral gavage doses. At the end of week 6, lipoprotein profiles were improved with calcitriol supplementation, with TC, non-HDL-C, and plasma triglyceride (TG) decreasing in a dose-dependent manner (TC: r=0.373, p=0.009; non-HDL-C: r=0.479, p=0.001; TG: r=0.405, p=0.004). Since the HDL-C of the four experimental groups showed no significant difference compared to HCD, the ratios of non-HDL-C to HDL-C and of HDL-C to TC were restored in a dose-dependent manner. Hamsters receiving the highest level of calcitriol (80 ng/kg) showed a reduction of TC by 11.5%, non-HDL-C by 24.1%, and TG by 31.25%. Little difference was found among the six groups in the acetylcholine-induced endothelium-dependent relaxation or contraction of the thoracic aorta. To summarize, calcitriol supplementation in hamsters at a maximum of 80 ng/kg body weight for 6 weeks led to an overall improvement in the plasma lipoprotein profile with decreased TC and TG levels. The molecular mechanism of these effects is under investigation.
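
The dose-dependent trends reported above (r and p values) are consistent with a simple correlation of each lipid measure against dose; a minimal sketch follows, with hypothetical per-animal input arrays rather than the study's data.

```python
# Minimal, illustrative sketch: Pearson correlation of a plasma lipid measure
# against calcitriol dose, the kind of dose-response statistic reported above.
import numpy as np
from scipy.stats import pearsonr

def dose_response(dose_ng_per_kg: np.ndarray, lipid: np.ndarray):
    """Return (r, p) for a lipid measure versus calcitriol dose."""
    return pearsonr(dose_ng_per_kg, lipid)
```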

Keywords: cholesterol, vitamin D, calcitriol, hamster

Procedia PDF Downloads 229
852 Relationship between Prolonged Timed up and Go Test and Worse Cardiometabolic Diseases Risk Factors Profile in a Population Aged 60-65 Years

Authors: Bartłomiej K. Sołtysik, Agnieszka Guligowska, Łukasz Kroc, Małgorzata Pigłowska, Elizavetta Fife, Tomasz Kostka

Abstract:

Introduction: Functional capacity is one of the basic determinants of health in older age. Functional capacity may be influenced by multiple disorders, including cardiovascular and metabolic diseases. Nevertheless, there is relatively little evidence regarding the association between functional status and cardiometabolic risk factors. Aim: The aim of this research is to examine the possible association between functional capacity and cardiometabolic risk factors in a group of younger seniors. Materials and Methods: The study group consisted of 300 participants aged 60-65 years (50% were women). Total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), glucose, uric acid, body mass index (BMI), waist-to-height ratio (WHtR), and blood pressure were measured. Smoking status and physical activity level (by the Seven-Day Physical Activity Recall Questionnaire) were analysed. Functional status was assessed with the Timed Up and Go (TUG) Test. The data were compared according to gender, and then separately for both sexes with regard to a prolonged TUG score (>7 s). The limit of significance was set at p≤0.05 for all analyses. Results: Women presented with higher serum lipids and a longer TUG. Men had higher blood pressure, glucose, and uric acid, and a higher prevalence of hypertension and history of myocardial infarction. Among women, those with a prolonged TUG displayed significantly higher obesity indices (BMI, WHtR), uric acid, hypertension, and ischemic heart disease (IHD), but lower physical activity level, TC, and LDL-C. Men with a prolonged TUG were heavier smokers, had higher TG and lower HDL-C, and presented with a higher prevalence of diabetes and IHD. Discussion: This study shows an association between functional status and the risk profile of cardiometabolic disorders. In women, the relationship of lower functional status to cardiometabolic diseases may be mediated by overweight/obesity. In men, locomotor problems may be related to smoking. A higher education level may be considered a protective factor regardless of gender.
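
The group comparison described above (normal versus prolonged TUG, analysed separately by sex at p ≤ 0.05) can be sketched as below; the choice of a Mann-Whitney U test and the column names are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch: compare cardiometabolic risk factors between participants
# with normal vs prolonged (>7 s) TUG, separately for each sex.
import pandas as pd
from scipy.stats import mannwhitneyu

RISK_FACTORS = ["bmi", "whtr", "uric_acid", "tc", "ldl_c", "hdl_c", "tg", "glucose"]

def compare_by_tug(df: pd.DataFrame, sex: str) -> pd.Series:
    sub = df[df["sex"] == sex]
    prolonged = sub["tug_seconds"] > 7.0               # prolonged-TUG flag
    pvals = {}
    for col in RISK_FACTORS:
        _, p = mannwhitneyu(sub.loc[prolonged, col], sub.loc[~prolonged, col])
        pvals[col] = p                                  # significant if p <= 0.05
    return pd.Series(pvals, name=f"p-value ({sex})")
```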

Keywords: cardiovascular risk factors, functional capacity, TUG test, seniors

Procedia PDF Downloads 285
851 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction

Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand

Abstract:

Food is the source of energy and growth through the breakdown of its vital components, and it plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and many such compounds originate, directly or indirectly, from algae. Algae are an enormous source of polysaccharides and have gained much interest for their contribution to human well-being. In this study, algal biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds and to evaluate total carotenoids, antioxidant activity, and polyphenolic content. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics, and on the extracts' antioxidant activity. Response Surface Methodology is employed for process optimization, and the influence of the independent parameters on each dependent variable is determined through Analysis of Variance. Our results show that ultrasound-assisted extraction for 50 min is the best extraction condition; the proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic content (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae. This study is among the first attempts to optimize ultrasound-assisted extraction with natural deep eutectic solvents and to investigate their application in the extraction of bioactive compounds from algae. We also discuss the future perspectives of ultrasound technology, which help in understanding the complex mechanism of ultrasound-assisted extraction and further guide its application to algae.
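
A minimal sketch of the response-surface step is given below: a second-order model of total phenolic content in the three factors, fitted by ordinary least squares. The model form and the column names (tpc, ratio, temp, time) are assumptions for illustration, not the authors' model.

```python
# Minimal sketch: second-order response surface for total phenolic content
# as a function of biomass-to-solvent ratio, temperature and time.
import pandas as pd
import statsmodels.formula.api as smf

def fit_response_surface(df: pd.DataFrame):
    formula = (
        "tpc ~ ratio + temp + time"
        " + I(ratio**2) + I(temp**2) + I(time**2)"     # quadratic terms
        " + ratio:temp + ratio:time + temp:time"       # two-way interactions
    )
    model = smf.ols(formula, data=df).fit()
    # Term-wise significance (the ANOVA step) can be read from model.summary().
    return model
```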

Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids

Procedia PDF Downloads 173
850 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive and time-consuming, and they often require the use of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is therefore crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of MR images of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology of this work consists of using a fully sampled reference MRI of a female C57BL/6 mouse acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment metrics: root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). The use of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction, of up to 70%, in acquired image data. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
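
The three quality metrics named above can be computed as in the sketch below, assuming the fully sampled reference and the CS reconstruction are 2D arrays on the same intensity scale; it is illustrative rather than the authors' pipeline.

```python
# Minimal sketch: RMSE, PSNR and SSIM between a reference image and a
# compressed-sensing reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_quality(reference: np.ndarray, recon: np.ndarray) -> dict:
    data_range = reference.max() - reference.min()     # intensity range of the reference
    return {
        "rmse": float(np.sqrt(np.mean((reference - recon) ** 2))),
        "psnr": float(peak_signal_noise_ratio(reference, recon, data_range=data_range)),
        "ssim": float(structural_similarity(reference, recon, data_range=data_range)),
    }
```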

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 70
849 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media

Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga

Abstract:

Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions in people. For this reason, the development of analytical methods for detecting this ion has gained great interest. In this context, the use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. 2-Quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to be incorporated into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to the generation of new colorimetric and/or fluorescent probes. In the present study, two chalcone-based 7-(diethylamino)-2-quinolinone derivatives, DQD1 and DQD2, were synthesized according to methods reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-VIS and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps. These photophysical properties were improved in the presence of a cationic micellar medium of CTAB thanks to the formation of adducts with association constants on the order of 2.5 × 10⁵ M⁻¹, which increased the quantum yields to 0.12 and gave two fluorescence lifetimes near 120 and 400 ps for DQD1 and DQD2. Moreover, thanks to the micellar medium, the reactivity of these derivatives towards nucleophilic analytes such as HSO3- was increased, as demonstrated by kinetic studies showing an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor since it detected HSO3- with excellent results.
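
One common way to obtain an association constant of the kind reported above is to fit a 1:1 binding isotherm to a fluorescence titration; the sketch below assumes that model, which is not necessarily the treatment used by the authors, and all names are illustrative.

```python
# Minimal sketch: estimate a probe-micelle association constant K (M^-1)
# from a fluorescence titration, assuming a simple 1:1 binding model.
import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(surfactant_conc, f0, f_inf, k):
    """Observed fluorescence for 1:1 association with constant k (M^-1)."""
    frac_bound = k * surfactant_conc / (1.0 + k * surfactant_conc)
    return f0 + (f_inf - f0) * frac_bound

def fit_association_constant(conc_M: np.ndarray, fluorescence: np.ndarray) -> float:
    p0 = (fluorescence[0], fluorescence[-1], 1e5)      # initial guesses for f0, f_inf, K
    popt, _ = curve_fit(binding_isotherm, conc_M, fluorescence, p0=p0, maxfev=10000)
    return popt[2]                                     # fitted K in M^-1
```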

Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives

Procedia PDF Downloads 89