Search results for: adaptive estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2776


346 Serum Vitamin D and Carboxy-Terminal Telopeptide Type I Collagen Levels: As Markers for Bone Health Affection in Patients Treated with Different Antiepileptic Drugs

Authors: Moetazza M. Al-Shafei, Hala Abdel Karim, Eitedal M. Daoud, Hassan Zaki Hassuna

Abstract:

Epilepsy is a common neurological disorder affecting all age groups and is one of the world's most prevalent non-communicable diseases. Increasing evidence suggests that long-term use of antiepileptic drugs (AEDs) can have adverse effects on bone mineralization and bone modeling. We aimed to study these effects and to provide guidelines for supporting bone health through early intervention. From the Neurology Out-Patient Clinic of Kasr El-Aini University Hospital, 60 patients were enrolled under specific criteria: 40 patients on antiepileptic drugs for at least two years and 20 age- and sex-matched epileptic controls who had not yet started treatment. Patients were divided into four groups: three monotherapy groups treated with phenytoin, valproic acid, or carbamazepine, and a fourth group treated with both valproic acid and carbamazepine. Serum carboxy-terminal telopeptide of type I collagen (ICTP), a bone resorption marker, serum 25(OH) vitamin D3, calcium, magnesium, and phosphorus were estimated. Results showed that all patients on AEDs had significantly low levels of 25(OH) vitamin D3 (p<0.001), with significant elevation of ICTP (p<0.05) versus controls. The phenytoin group showed highly significant elevation of the ICTP marker and decreases in both serum 25(OH) vitamin D3 (p<0.0001) and serum calcium (p<0.05) versus controls. The double-drug group showed a significant decrease in serum 25(OH) vitamin D3 (p<0.0001) and a decrease in phosphorus (p<0.05) versus controls. Serum magnesium showed no significant differences between the studied groups. We conclude that antiepileptic drugs appear to be an aggravating factor for bone demineralization, so it can be therapeutically worthwhile to supplement calcium and vitamin D even before initiation of antiepileptic therapy. The ICTP marker can be used to evaluate changes in bone resorption before and during AED therapy.

Keywords: antiepileptic drugs, bone minerals, carboxy-terminal telopeptide type-1-collagen bone resorption marker, vitamin D

Procedia PDF Downloads 473
345 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets commonly appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data from a health project covering 599 staff members of a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data.
In finding the optimal classification rule, the prior probabilities were set either to equal proportions (0.33:0.33:0.33) or to one of three unequal choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} as {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class imbalanced data of diabetes risk groups.
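The prior-weighted k-nearest-neighbors rule discussed above can be sketched in a few lines. This is an illustrative implementation on synthetic data, not the study's hospital dataset; the posterior formula (class prior times the within-class neighbor fraction) is the standard nonparametric discriminant rule the abstract refers to, and the cluster means used here are arbitrary.

```python
import numpy as np

def knn_discriminant(X_train, y_train, X_test, k=3, priors=None):
    """k-NN discriminant rule with class priors.

    The posterior for class c is proportional to prior_c * (m_c / n_c),
    where m_c is the number of the k nearest neighbors belonging to
    class c and n_c is the training size of class c.
    """
    classes = np.unique(y_train)
    n_c = np.array([(y_train == c).sum() for c in classes])
    if priors is None:
        priors = n_c / n_c.sum()
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = y_train[np.argsort(d)[:k]]
        m_c = np.array([(nn == c).sum() for c in classes])
        post = priors * m_c / n_c          # unnormalized posterior
        preds.append(classes[np.argmax(post)])
    return np.array(preds)

# toy imbalanced data mimicking the 90/5/5 risk-group split (synthetic)
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (90, 2))
X1 = rng.normal(3, 1, (5, 2))
X2 = rng.normal(-3, 1, (5, 2))
X = np.vstack([X0, X1, X2])
y = np.array([0] * 90 + [1] * 5 + [2] * 5)
pred = knn_discriminant(X, y, X, k=3, priors=np.array([0.90, 0.05, 0.05]))
error_rate = (pred != y).mean()
```

Resubstitution error is optimistic; the study instead averages error rates over bootstrap replicates.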

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 412
344 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems enable new research ranging from hardware design to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experiments due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e.cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom comprising two lobes of 80 mm diameter, one hot spot, and three cold spots was defined geometrically; this geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The differences between estimated and actual values in the simulation and experimental setups were analyzed for radioactivity distribution and hot spot dimensions. Results: The results show that the difference between the contrast level of the simulated image and the experimental image is within 2%. The difference in total counts between the simulation and the actual study is 0.4%. The activity estimation results show that the relative difference between estimated and actual activity is 4.62% for the experimental setup and 3.03% for the simulation.
The deviation in the estimated diameter of the hot spot is similar for the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
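The quantitative image-analysis steps above (total counts, hot-spot contrast, relative difference) can be sketched as follows. The phantom image, the masks, and the contrast definition are illustrative assumptions, since the abstract does not give its exact formulas.

```python
import numpy as np

def image_metrics(img, hot_mask, bkg_mask):
    """Total counts and hot-spot contrast of a planar scintigraphy image."""
    total = img.sum()
    hot = img[hot_mask].mean()
    bkg = img[bkg_mask].mean()
    contrast = (hot - bkg) / bkg        # relative contrast of the hot spot
    return total, contrast

def relative_difference(estimated, actual):
    """Percent relative difference between estimated and actual values."""
    return abs(estimated - actual) / actual * 100.0

# synthetic 128x128 "phantom" image: uniform background plus a hot spot
img = np.full((128, 128), 30.0)
yy, xx = np.mgrid[:128, :128]
hot_mask = (yy - 40) ** 2 + (xx - 64) ** 2 < 6 ** 2
img[hot_mask] = 90.0
bkg_mask = ~hot_mask
total, contrast = image_metrics(img, hot_mask, bkg_mask)
```

For this synthetic image the contrast is (90 - 30) / 30 = 2.0; comparing this number between simulated and measured images gives the percent differences the abstract reports.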

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 252
343 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre

Authors: L. Nathaniel-Wurie

Abstract:

The Sellick manoeuvre, also known as the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, with the posteriorly directed force compressing the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the authors' knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot assume that appropriate, accurate, or precise amounts of force are being used routinely. Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery in trained professionals was tested, and outcomes were contrasted against a novice control group and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF) and contrasted with 40 novice students randomised into one of two arms. Arm A had the procedure explained and demonstrated, then performed CF with the corresponding force measured three times. Arm B followed the same process as Arm A but, before being tested, had 10 N and 30 N applied to their hands to build an intuitive understanding of the required force; they were then asked to apply the equivalent force against a visible force meter and hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Following this, Arm B performed the manoeuvre, with the force generated measured three times.
This study shows that there is a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre improves accuracy, precision, and homogeneity within the group compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may lead to better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.

Keywords: airway, cricoid, medical education, sellick

Procedia PDF Downloads 51
342 Conflict Resolution in Fuzzy Rule-Based Systems Using Temporal Modalities Inference

Authors: Nasser S. Shebka

Abstract:

Fuzzy logic is used in complex adaptive systems where classical tools for representing knowledge are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, raises inconsistencies and limitations in dealing with systems of increased complexity and with rules that apply to real-life situations; it hinders the inference process of such systems and faces inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. Fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy truth values or similar multi-valued constant parameters derived from multi-valued logic, which set the basis for the three basic t-norms and their associated connectives; these are continuous functions, and any other continuous t-norm can be described as an ordinal sum of the three basic ones. Some attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used for the defeasible inference of expert-system reasoning, for example, to allow inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles have been introduced, such as the specificity principle and the weakest-link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method for resolving existing and potential rule conflicts that represents temporal modalities within defeasible-inference rule-based systems.
Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators in order to expand its knowledge representation capabilities, allowing flexibility in classifying newly generated rules and hence resolving potential conflicts between these fuzzy rules. We were able to address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using branching-time temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic, to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process in complex rule-based systems but also contribute to the fundamental methods of building rule bases, allowing a wider range of applicable real-life situations from a quantitative and qualitative knowledge-representation perspective.
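A minimal sketch of temporal conflict resolution between fuzzy rules follows, assuming a simple recency-over-strength ordering. The paper's actual mechanism uses Kripke-style modal operators and branching-time logic, which is far richer than this toy policy; the rule format and tie-breaking here are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    conclusion: str   # e.g. "valve=open"
    strength: float   # fuzzy firing strength in [0, 1]
    time: int         # discrete time index when the rule was asserted

def resolve(rules):
    """Resolve conflicting conclusions on the same variable.

    Illustrative policy: group rules by the variable they conclude on,
    then prefer the temporally most recent assertion; ties are broken
    by fuzzy firing strength (a stand-in for the specificity principle).
    """
    by_var = {}
    for r in rules:
        var = r.conclusion.split("=")[0]
        by_var.setdefault(var, []).append(r)
    decision = {}
    for var, rs in by_var.items():
        winner = max(rs, key=lambda r: (r.time, r.strength))
        decision[var] = winner.conclusion
    return decision

rules = [
    Rule("valve=open",   strength=0.8, time=1),
    Rule("valve=closed", strength=0.6, time=3),  # newer assertion defeats older
    Rule("pump=on",      strength=0.9, time=2),
]
print(resolve(rules))  # {'valve': 'valve=closed', 'pump': 'pump=on'}
```

The point of the sketch is only that adding a temporal index to each rule gives the conflict resolver an ordering to exploit, which is the intuition behind classifying newly generated rules by temporal modality.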

Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities

Procedia PDF Downloads 66
341 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake event, associated with a typical fault rupture, initiates at the source, propagates through a rock or soil medium, and finally daylights at a surface which might be a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of bedrock motions due to the influence of local site conditions. In the present study, field borehole log data of the Mangalwadi and Walkeswar sites in Mumbai are considered. The data consist of the variation of SPT N-value with soil depth. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai has been developed from existing correlations and is used for the site response analysis. A MATLAB program is developed for the ground response analysis, performing two-dimensional linear and equivalent-linear analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent-linear domains against the popular commercial program DEEPSOIL. Three actual earthquake motions are selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, and the response spectrum at the surface. The peak ground acceleration amplification factors are found to be about 2.374, 3.239, and 2.4245 for the Mangalwadi site and 3.42, 3.39, and 3.83 for the Walkeswar site using the 1979 Imperial Valley Earthquake, the 1989 Loma Gilroy Earthquake, and the 1987 Whittier Narrows Earthquake, respectively.
In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the spectrum generated at the surface may be utilized for the design of superstructures at these locations.
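The two computational building blocks mentioned above, a Vₛ-from-SPT correlation and the PGA amplification factor, might look like this in code. The correlation coefficients a and b below are placeholders, not the study's Mumbai-specific fit, and the surface PGA is a made-up input.

```python
import numpy as np

def vs_from_spt(N, a=80.0, b=0.33):
    """Empirical correlation Vs = a * N^b (m/s).

    Many published correlations take this power-law form; a and b here
    are placeholder coefficients, not the fit developed in the study.
    """
    return a * np.power(np.asarray(N, dtype=float), b)

def amplification_factor(surface_pga_g, bedrock_pga_g=0.16):
    """Peak ground acceleration amplification: surface PGA / input PGA.

    The study scales all input bedrock motions to 0.16 g."""
    return surface_pga_g / bedrock_pga_g

vs = vs_from_spt([10, 25, 50])   # SPT N-values down a hypothetical borehole
af = amplification_factor(0.38)  # a surface PGA of 0.38 g gives a factor of 2.375
```

Reported factors such as 2.374 correspond to a computed surface PGA of roughly 0.38 g for the 0.16 g input motion.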

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 161
340 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-fidelity KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning over a number of trials.
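The KWIK idea, a learner that answers "I don't know" until it has seen enough data, combined with a low-fidelity fallback, can be sketched as follows. This is a toy mean-reward estimator, not the paper's full bidirectional AST algorithm; the sample threshold m and the one-directional fallback policy are illustrative assumptions.

```python
from collections import defaultdict

UNKNOWN = None  # the KWIK "I don't know" answer (often written as a bottom symbol)

class KWIKMeanEstimator:
    """Toy KWIK learner: predicts the mean reward of a state-action pair
    only after seeing at least m samples; otherwise it answers UNKNOWN."""
    def __init__(self, m=3):
        self.m = m
        self.samples = defaultdict(list)

    def update(self, sa, reward):
        self.samples[sa].append(reward)

    def predict(self, sa):
        obs = self.samples[sa]
        return sum(obs) / len(obs) if len(obs) >= self.m else UNKNOWN

def multi_fidelity_predict(hi_fi, lo_fi, sa):
    """Use the high-fidelity learner when it knows the answer; otherwise
    fall back to the cheap low-fidelity estimate (one direction of the
    bidirectional information sharing described in the abstract)."""
    est = hi_fi.predict(sa)
    return est if est is not UNKNOWN else lo_fi.predict(sa)

hi_fi, lo_fi = KWIKMeanEstimator(m=3), KWIKMeanEstimator(m=1)
lo_fi.update(("s0", "a0"), 0.5)  # one cheap low-fidelity rollout
hi_fi.update(("s0", "a0"), 0.4)  # only one expensive high-fidelity sample so far
print(multi_fidelity_predict(hi_fi, lo_fi, ("s0", "a0")))  # 0.5: hi-fi still unsure
```

The benefit for AST is that expensive high-fidelity simulator calls are spent only where the learner genuinely does not know, while cheap low-fidelity rollouts cover the rest of the search space.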

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 130
339 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to better understand the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA), an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
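The minimum-selling-price calculation described above reduces to solving NPV = 0 for the price. A sketch under hypothetical costs, yields, and discount rate (the study's field data are not reproduced here):

```python
def breakeven_price(costs_by_year, yield_by_year, rate):
    """Minimum selling price per tonne such that NPV = 0 at the target rate.

    Solves sum_t (p * yield_t - cost_t) / (1 + rate)^t = 0 for p, i.e.
    p = discounted costs / discounted yield. All figures below are
    hypothetical, not the study's field data.
    """
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))
    disc_yield = sum(y / (1 + rate) ** t for t, y in enumerate(yield_by_year))
    return disc_cost / disc_yield

# hypothetical 3-year coppice cycle: establishment costs up front,
# a single cut-and-chip harvest of 30 Mg/ha in year 3
costs = [1200, 150, 150, 400]   # $/ha: rent + site prep, crop care, crop care, harvest
yields = [0, 0, 0, 30]          # Mg/ha of delivered biomass
p = breakeven_price(costs, yields, rate=0.08)
```

Sensitivity analysis then amounts to recomputing p while perturbing one cost item (or the discount rate) at a time.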

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 56
338 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein

Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel

Abstract:

Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism of targeting cancer cells that kills circulating malignant cells and primes the host’s immune cells against future remission. Current immunotherapies for cancer are limited in preventing recurrence – a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative agent known as xanthine oxidase (XO) and a mini-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single domain antibody fragment is uniquely sourced from llama, which is functional without the presence of a light chain. These llama micro-antibodies have been shown to be better able to penetrate tissues and have improved physicochemical stability as compared to traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in an in vitro human breast cancer cell line (SkBr3). Specifically, we measured overall cell death, as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production. These markers are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used respectively to quantify biomarker levels between treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We looked at each marker at various time points after cancer cells were treated with the XO/antibody fusion protein, doxorubicin, and cisplatin. 
Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential for this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines here provides evidence for the success of the same therapeutic method for patients and serves as the gateway to developing a new treatment approach against breast cancer.

Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome

Procedia PDF Downloads 180
337 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. In particular, a big data project, which may involve many datasets and stakeholders, takes a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we define standard data quality expectations. Second, we find indicators that can directly measure the data within datasets. Third, the indicators are aggregated into dimensions using factor analysis. Next, the indicators and dimensions are weighted by the effort of the data preparation process and by usability. Finally, the dimensions are aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprise ten indicators; (2) for the data quality dimensions based on statistical characteristics, the ten indicators can be reduced to four dimensions.
(3) For the developed composite indicator, the SDQI can describe the overall quality of each dataset and can be separated into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, using the SDQI for assessment in the first sprint. After passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
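The aggregation steps can be sketched as follows. The first-principal-component weights and the fixed cut points for the three quality levels are illustrative assumptions; the study derives its weights from factor analysis plus preparation effort and usability.

```python
import numpy as np

def composite_index(X, weights=None):
    """Aggregate standardized indicators into one composite score per dataset.

    X: (n_datasets, n_indicators) matrix of indicator values.
    Weights default to the (normalized) loadings of the first principal
    component, a common, if debatable, choice for composite indicators.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
    if weights is None:
        # first right-singular vector of Z = first PC loading vector
        _, _, vt = np.linalg.svd(Z, full_matrices=False)
        weights = np.abs(vt[0])
        weights = weights / weights.sum()
    return Z @ weights

def quality_level(score, good=0.5, acceptable=-0.5):
    """Map a composite score to the three SDQI levels using fixed cut
    points (the cut points here are illustrative assumptions)."""
    if score >= good:
        return "Good Quality"
    if score >= acceptable:
        return "Acceptable Quality"
    return "Poor Quality"

rng = np.random.default_rng(1)
X = rng.random((500, 10))          # 500 datasets x 10 quality indicators
scores = composite_index(X)
levels = [quality_level(s) for s in scores]
```

With real indicators, the weighting step would be replaced by the factor-analysis loadings and effort weights described in the abstract.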

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 114
336 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method: organ-of-interest gamma analysis. This method is more sensitive to specific organ changes. Five replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were randomly selected and their transit dosimetry evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate is calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists' rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
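An organ-of-interest gamma pass-rate can be sketched as below. This is a deliberately simplified 2D brute-force gamma (global normalization, pixel-based distance, no interpolation), not a clinical implementation; the dose grids and the "parotid" mask are synthetic.

```python
import numpy as np

def gamma_pass_rate(ref, ev, mask, dd=0.03, dta_px=3):
    """Simplified organ-of-interest gamma pass-rate (percent).

    For each reference pixel inside the organ mask, gamma <= 1 if some
    evaluated pixel within dta_px satisfies the combined dose-difference
    (dd, as a fraction of the global reference maximum) and
    distance-to-agreement criterion. Brute-force, 2D, pixel-based;
    clinical gamma tools interpolate and are far more careful.
    """
    norm = dd * ref.max()
    ny, nx = ref.shape
    passed, total = 0, 0
    for i, j in zip(*np.nonzero(mask)):
        total += 1
        best = np.inf
        for di in range(-dta_px, dta_px + 1):
            for dj in range(-dta_px, dta_px + 1):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    g2 = ((ev[ii, jj] - ref[i, j]) / norm) ** 2 \
                         + (di * di + dj * dj) / dta_px ** 2
                    best = min(best, g2)
        passed += best <= 1.0
    return 100.0 * passed / total

# synthetic example: a 10% dose change inside part of a "parotid" mask
ref = np.ones((32, 32))
ev = ref.copy()
ev[10:20, 10:20] *= 1.10
parotid = np.zeros_like(ref, dtype=bool)
parotid[8:22, 8:22] = True
rate = gamma_pass_rate(ref, ev, parotid)
```

Restricting the pass-rate to the mask is what makes a localized parotid change visible even when the field-wide pass-rate stays high.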

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 198
335 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity

Authors: Anamika Sahu

Abstract:

The Himalayas, like many mountainous regions, are susceptible to multiple hazards, and in recent times the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as they fall within a seismic gap. Damage caused by earthquakes differs from locality to locality. It is well known that, during earthquakes, damage to structures is associated with subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built-up areas and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigations of the subsurface are required to describe subsurface complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface, and these methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity for the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity down to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC, 1997) classifications. Parameters obtained through the geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. A joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that the results obtained through the MASW method fit well with the laboratory tests.
Under both conditions, the mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of the rocks and the behavior of the shear wave velocity.
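The Vs30 calculation and the NEHRP class lookup used in such site characterization can be sketched as follows; the layered profile is a hypothetical example, while the velocity boundaries follow the standard NEHRP site classes.

```python
def vs30(thicknesses_m, vs_m_s):
    """Travel-time-averaged shear wave velocity over the top 30 m:
    Vs30 = 30 / sum(d_i / Vs_i), with layer thicknesses d_i in metres."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-6
    return 30.0 / sum(d / v for d, v in zip(thicknesses_m, vs_m_s))

def nehrp_site_class(v):
    """NEHRP site class from Vs30 (m/s), per the standard boundaries."""
    if v > 1500:
        return "A"   # hard rock
    if v > 760:
        return "B"   # rock
    if v > 360:
        return "C"   # very dense soil / soft rock
    if v > 180:
        return "D"   # stiff soil
    return "E"       # soft soil

v = vs30([10, 10, 10], [200, 400, 800])   # hypothetical three-layer profile
print(nehrp_site_class(v))
```

Note that the harmonic (travel-time) average weights slow near-surface layers heavily: the profile above averages to about 343 m/s, well below the arithmetic mean of the layer velocities.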

Keywords: MASW, mechanical, petrophysical, site characterization

Procedia PDF Downloads 67
334 Strategic Interventions to Combat Socio-economic Impacts of Drought in Thar - A Case Study of Nagarparkar

Authors: Anila Hayat

Abstract:

Pakistan is one of the developing countries that contribute least to emissions yet face the most vulnerable environmental conditions; it is ranked 8th among the countries most affected by climate change on the Climate Risk Index 1992-2011. Pakistan is facing severe water shortages and flooding as a result of changes in rainfall patterns, specifically in the least developed areas such as Tharparkar. Nagarparkar, once an attractive tourist spot in Tharparkar because of its tropical desert climate, has been facing severe drought conditions for the last few decades. This study investigates the present socio-economic situation of local communities, the major impacts of droughts and their underlying causes, and the current mitigation strategies adopted by local communities. The study uses both secondary (quantitative) and primary (qualitative) methods to understand the impacts on, and explore the causes affecting, the socio-economic life of the local communities of the study area. The relevant data were collected through household surveys using structured questionnaires, focus groups, and in-depth interviews with key personnel from local and international NGOs to explore the sensitivity of impacts and adaptation to droughts in the study area. This investigation is limited to four rural communities of union council Pilu of Nagarparkar district, including the Bheel, BhojaBhoon, Mohd Rahan Ji Dhani, and Yaqub Ji Dhani villages. The results indicate that drought has caused significant economic and social hardships for the local communities, as more than 60% of the overall population depends on rainfall, which has been disturbed by irregular rainfall patterns. The decline in crop yields has forced the local community to migrate to nearby areas in search of livelihood opportunities.
Communities have not undertaken any appropriate adaptive actions to counteract the adverse effects of drought; they are completely dependent on support from the government and external aid for survival. Respondents also reported that poverty is a major cause of their vulnerability to drought. An increase in population, limited livelihood opportunities, the caste system, lack of interest from the government sector, and unawareness have shaped their vulnerability to drought and other social issues. Based on the findings of this study, it is recommended that local authorities create awareness about drought hazards and improve the resilience of communities against drought. It is further suggested to develop, introduce, and implement water harvesting practices at the community level and to promote drought-resistant crops.

Keywords: migration, vulnerability, awareness, drought

Procedia PDF Downloads 114
333 An Economic Study for Fish Production in Egypt

Authors: Manal Elsayed Elkheshin, Rasha Saleh Mansour, Mohamed Fawzy Mohamed Eldnasury, Mamdouh Elbadry Mohamed

Abstract:

This research aims to identify the main factors affecting fish production and consumption in Egypt through econometric estimation of various functional forms for fish production and consumption over the period 1991-2014, and to forecast production and consumption up to 2020 using the best-fitting ARIMA models. The research also studies the economic feasibility of fish production in aquaculture farms; the investment cost represents the value of land, buildings, equipment and irrigation. The farm, with a total area of about one acre, cultures three types of fish (tilapia, carp and mullet) and produces about 3.5 tons of fish annually. With annual investment costs of about 50,500 pounds, the project can repay its investment costs after about 4 years and 5 months, and the internal rate of return (IRR) reaches about 22.1%, well above the opportunity cost, so we recommend implementing the project. Recommendations: 1. Increase fish farming to narrow the animal-protein gap. 2. Increase the number of mechanized fishing boats and provide transport equipped to maintain the quality of fish production. 3. Encourage and attract local and foreign investment, providing advice to investors in the aquaculture field. 4. Publish awareness newsletters on the importance of these projects, which yield a net profit with cost recovery in less than five years and an IRR of about 23%, far more than the opportunity cost of a bank interest rate of about 7%, helping to create work opportunities for graduates, reduce fish imports and improve the performance of the food trade balance.
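The payback and IRR figures above follow from a standard discounted cash flow calculation. A minimal sketch (the annual net cash flow below is a hypothetical figure chosen only to produce an IRR near the reported 22%, not a value from the study):

```python
def npv(rate, cashflows):
    # cashflows[0] is the initial outlay (negative), then annual net inflows
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    # Bisection on the discount rate: assumes NPV is positive at lo
    # and negative at hi (true for a conventional investment profile)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical profile: 50,500 EGP outlay, then a constant annual net inflow
flows = [-50500] + [13000] * 10
print(irr(flows))
```

The rate at which the NPV crosses zero is the IRR; comparing it against the 7% bank interest rate is exactly the opportunity-cost argument made in the abstract.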

Keywords: equation model, individual share, red meat, consumption, production, endogenous variable, exogenous variable, financial performance, fish culture, feasibility study, fish production, aquaculture

Procedia PDF Downloads 339
332 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is referred to as between-class imbalance. Traditional classifiers fail to classify minority-class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters each containing a different number of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced-dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and classifier ensembles. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority-class examples or by decreasing the majority-class examples. Decreasing the majority-class examples leads to loss of information, and when the minority class is absolutely rare, removing majority-class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for binary classification problems. Removing both imbalances at once eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset; the number of examples oversampled in each sub-cluster is determined by the complexity of the sub-cluster. The method also takes into account the scatter of the data in the feature space and adaptively copes with unseen test data using the Löwner-John ellipsoid, increasing classifier accuracy. A neural network is used as the classifier, since it minimizes total error, and removing both kinds of imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority-class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods on various accuracy measures. The proposed method can thus serve as a good alternative for problem domains such as credit scoring, customer churn prediction and financial distress prediction that typically involve imbalanced data sets.
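A rough sketch of the per-sub-cluster allocation idea described above, using inverse sub-cluster size as a crude stand-in for the paper's complexity measure (the model-based clustering itself and the Löwner-John ellipsoid step are omitted; function names are invented for illustration):

```python
import random

def allocate_oversamples(cluster_sizes, target_total):
    """Distribute synthetic examples across minority sub-clusters so that
    smaller (presumed harder) sub-clusters receive proportionally more.
    Rounding may leave the total off by a few examples."""
    deficit = target_total - sum(cluster_sizes)
    weights = [1.0 / s for s in cluster_sizes]
    wsum = sum(weights)
    return [int(round(deficit * w / wsum)) for w in weights]

def oversample(cluster_examples, target_total, seed=0):
    """Duplicate examples within each sub-cluster per its allocation
    (a real implementation would generate synthetic points instead)."""
    rng = random.Random(seed)
    sizes = [len(c) for c in cluster_examples]
    extra = allocate_oversamples(sizes, target_total)
    out = []
    for examples, k in zip(cluster_examples, extra):
        out.extend(examples)
        out.extend(rng.choice(examples) for _ in range(k))
    return out

# Two minority sub-clusters of sizes 5 and 20, balanced up to 50 examples:
# the small sub-cluster receives 20 new examples, the large one only 5.
print(allocate_oversamples([5, 20], 50))
```

This reproduces the key property the abstract argues for: the bigger sub-cluster no longer dominates the total error after resampling.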

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 393
331 The Grade Six Pupils' Learning Styles and Their Achievements and Difficulties on Fractions Based on Kolb's Model

Authors: Faiza Abdul Latip

Abstract:

One of the ultimate goals of any nation, including the Philippines, is to produce competitive manpower, and proficiency in mathematics has a significant role in achieving this goal. However, mathematics is considered by most people to be the most difficult subject to learn, as manifested in students' low performance in national and international assessments. Educators have widely used learning style models to identify the ways students learn; such models can also be the front line in identifying the difficulties each learner has with a particular topic, especially concepts pertaining to fractions. As many educators have observed, students show difficulty in mathematical tasks, and to a great degree in dealing with fractions, most especially in the district of Datu Odin Sinsuat, Maguindanao. This study focused on the learning styles of grade six pupils in the Datu Odin Sinsuat districts, along with their achievements and difficulties in learning concepts on fractions. Five hundred thirty-two pupils from ten public elementary schools of the Datu Odin Sinsuat districts were purposively selected as respondents. A descriptive research design using the survey method was employed. Quantitative analysis was made of the pupils' learning styles on the Kolb Learning Style Inventory (KLSI) and their scores on a mathematics diagnostic test on fraction concepts. Simple frequency and percentage counts were used to analyze the pupils' learning styles and their achievements on fractions. To determine the pupils' difficulties with fractions, the index of difficulty of every item was computed. Lastly, the Kruskal-Wallis test, set at the 0.05 level of significance, was used to determine whether the pupils' achievements on fractions differed significantly when classified by learning style. The minimum H-value of 7.82 was used to judge the significance of the test. The results revealed that the pupils of the Datu Odin Sinsuat districts learn fractions in varied ways, as they have different learning styles; however, their achievement in fractions is low regardless of learning style. Difficulties in learning fractions were found mostly in the areas of estimation, comparing/ordering, and the division interpretation of fractions. Most of the pupils found it very difficult to use a fraction as a measure, to compare or order a series of fractions, and to use the concept of a fraction as a quotient.
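The test statistic behind the comparison above can be sketched in a few lines: a minimal Kruskal-Wallis H without the tie-variance correction (7.82 is approximately the chi-square critical value at α = 0.05 with df = 3, i.e. four learning-style groups; the sample data are invented):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H: pool all observations, rank them (average rank
    for ties), then compare mean ranks across the groups."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    total = 0.0
    for g in groups:
        r = sum(rank[x] for x in g)
        total += r * r / len(g)
    return 12.0 / (n * (n + 1)) * total - 3 * (n + 1)

# Four hypothetical score groups; H > 7.82 would reject equal distributions
scores = [[10, 12, 9], [11, 14, 13], [8, 7, 15], [16, 6, 5]]
print(kruskal_wallis_h(scores))
```

If H exceeds the 7.82 cut-off used in the study, the achievement distributions are judged to differ across learning styles.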

Keywords: difficulties in fraction, fraction, Kolb's model, learning styles

Procedia PDF Downloads 192
330 Climate Change and Migration in the Semi-arid Tropic and Eastern Regions of India: Exploring Alternative Adaptation Strategies

Authors: Gauri Sreekumar, Sabuj Kumar Mandal

Abstract:

Contributing about 18% of India's Gross Domestic Product, the agricultural sector plays a significant role in the Indian rural economy. Although it is the primary source of livelihood for more than half of India's population, most farmers are marginal or small and face several challenges due to agro-climatic shocks. Climate change is expected to increase risk in regions that are highly agriculture-dependent. With systematic and scientific evidence of changes in rainfall, temperature and other extreme climate events, migration has started to emerge as a survival strategy for farm households. Against this backdrop, the present study combines these two strands of literature and explores whether migration is the only adaptation strategy available to farmers once they experience crop failure due to adverse climatic conditions. Combining temperature and rainfall information from the weather data provided by the Indian Meteorological Department with household-level panel data on Indian states belonging to the Eastern and Semi-Arid Tropics regions from the Village Dynamics in South Asia (VDSA) database of the International Crops Research Institute for the Semi-Arid Tropics, we form a rich panel data set for the years 2010-2014. A recursive econometric model is used to establish the three-way climate change-yield-migration nexus while addressing the roles of irrigation and local non-farm income diversification. Using the three-stage least squares estimation method, we find that climate-change-induced yield loss is a major driver of farmers' migration. However, irrigation and local non-farm income diversification are found to mitigate the adverse impact of climate change on migration. Based on our empirical results, we suggest enhancing irrigation facilities and making local non-farm income diversification opportunities available to increase farm productivity and thereby reduce farmers' migration.
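The instrumental-variable logic underlying such a recursive system can be illustrated on synthetic data with a single-instrument estimator (a simplified stand-in for the paper's three-stage least squares; all variable names and coefficients here are invented):

```python
import random

random.seed(42)
n = 20000
z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                        # instrument, e.g. a rainfall shock
    u = random.gauss(0, 1)                         # unobserved confounder
    xi = 0.8 * zi + 0.5 * u + random.gauss(0, 1)   # endogenous regressor (yield loss)
    yi = 2.0 * xi + u + random.gauss(0, 1)         # outcome (migration propensity)
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)   # naive OLS: biased upward by the confounder
beta_iv = cov(z, y) / cov(z, x)    # IV estimate (equals 2SLS with one instrument)
print(beta_ols, beta_iv)
```

With a valid climate instrument, the IV estimate recovers the true effect (2.0 here) while the naive regression overstates it, which is why the study needs the simultaneous-equations machinery rather than single-equation OLS.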

Keywords: climate change, migration, adaptation, mitigation

Procedia PDF Downloads 41
329 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is to adopt development trajectories that combine mitigation and adaptation measures to improve resilience pathways; this approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate-resilient development and evaluation for the health sector. A methodological review was conducted in which a number of studies were content-analyzed to find appropriate indicators and indices of household climate resilience and approaches for their integration. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings and conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators; other sources of indicators of climate change exposure were also identified. For preliminary reporting, the operationalization of the selected indicators was discussed to produce the methodological approach to be used in a resilience comparative analysis study. It was found that a household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI) and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS data sets are computed and scientifically integrated, a single climate resilience index can be produced, and resilience maps can be generated at different spatial and temporal scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in a resilience comparative analysis among selected regions.
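A toy illustration of combining the three sub-indices named above into a single score (the equal weighting, the [0, 1] normalisation and the direction of each component are assumptions for this sketch, not the study's final scheme):

```python
def resilience_index(hc, hhs, hes, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine Household adaptive/mitigation Capacity (HC), Household
    Health Sensitivity (HHS) and Household Exposure Status (HES), each
    normalised to [0, 1]. Higher capacity raises resilience; higher
    sensitivity and exposure lower it, so those two are inverted."""
    w_hc, w_hhs, w_hes = weights
    return w_hc * hc + w_hhs * (1 - hhs) + w_hes * (1 - hes)

# A household with good adaptive capacity but high flood exposure
print(resilience_index(0.8, 0.3, 0.9))
```

Computed per household over DHS records, such a score is what would be aggregated into the resilience maps the abstract describes.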

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 143
328 The Effect of Acute Muscular Exercise and Training Status on Haematological Indices in Adult Males

Authors: Ibrahim Musa, Mohammed Abdul-Aziz Mabrouk, Yusuf Tanko

Abstract:

Introduction: Long-term physical training affects the performance of athletes, especially females. Soccer, a team sport played on an outdoor field, requires an adequate oxygen transport system to sustain maximal aerobic power during exercise over 90 minutes of competitive play. Suboptimal haematological status has often been recorded in athletes with intensive physical activity; it may be due to iron depletion caused by hemolysis or to haemodilution resulting from plasma volume expansion. There is a lack of data regarding the dynamics of red blood cell variables in male football players. We hypothesized that a long competitive season involving frequent matches and intense training could influence red blood cell variables, as a consequence of repeated physical loads, when compared with sedentary subjects. Methods: This cross-sectional study was carried out on 40 adult males (20 athletes and 20 non-athletes) between 18 and 25 years of age. The 20 apparently healthy male non-athletes were taken as the sedentary group and the 20 male footballers comprised the study group. The university institutional review board (ABUTH/HREC/TRG/36) gave approval for all procedures in accordance with the Declaration of Helsinki. Red blood cell (RBC) concentration, packed cell volume (PCV) and plasma volume were measured in the fasting state and immediately after exercise. Statistical analysis was done using SPSS/win 20.0 for comparisons within and between the groups, using Student's paired and unpaired t-tests respectively. Results: Our findings show that, immediately after termination of exercise, the mean RBC count and PCV significantly (p<0.005) decreased, with a significant increase (p<0.005) in plasma volume, when compared with pre-exercise values in both groups. In addition, the post-exercise RBC count was significantly higher in the untrained group (261.10±8.5) than in the trained group (255.20±4.5). However, there were no significant differences in the post-exercise hematocrit and plasma volume parameters between the sedentary subjects and the footballers. Moreover, besides the changes in pre-exercise values, the resting red blood cell count and plasma volume (PV%) were significantly (p < 0.05) higher in the sedentary group (306.30±10.05 x 10^4/mm^3; 58.40±0.54%) than in the football players (293.70±4.65 x 10^4/mm^3; 55.60±1.18%). On the other hand, the sedentary group exhibited a significant (p < 0.05) decrease in PCV (41.60±0.54%) compared with the football players (44.40±1.18%). Conclusions: We therefore propose that the acute football-exercise-induced reduction in RBC and PCV is due entirely to plasma volume expansion and not to red blood cell hemolysis. In addition, training status influenced the haematological indices of male football players differently from the sedentary group at rest, owing to an adaptive response. To our knowledge, this finding is novel.
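The hemodilution argument above is conventionally quantified from pre/post hematocrit alone; a sketch using what is, to our understanding, the van Beaumont equation (the values below are illustrative, not the study's data):

```python
def plasma_volume_change(hct_pre, hct_post):
    """Percent change in plasma volume estimated from pre- and
    post-exercise hematocrit (in %): a fall in hematocrit with no
    cell loss implies plasma volume expansion (hemodilution)."""
    return (100.0 / (100.0 - hct_pre)) * (100.0 * (hct_pre - hct_post) / hct_post)

# Illustrative: hematocrit falling from 44% to 42% after exercise
print(plasma_volume_change(44.0, 42.0))  # positive => plasma volume expanded
```

A positive result, as here, is consistent with the study's conclusion that the post-exercise fall in RBC and PCV reflects dilution rather than hemolysis.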

Keywords: haematological indices, performance status, sedentary, male football players

Procedia PDF Downloads 236
327 Genetic Diversity of Sugar Beet Pollinators

Authors: Ksenija Taški-Ajdukovic, Nevena Nagl, Živko Ćurčić, Dario Danojević

Abstract:

Information about the genetic diversity of sugar beet parental populations is of great importance for hybrid breeding programs. The aim of this research was to evaluate genetic diversity among and within populations and lines of diploid sugar beet pollinators using SSR markers. The plant material consisted of eight pollinators originating from three USDA-ARS breeding programs and four pollinators from the Institute of Field and Vegetable Crops, Novi Sad. Depending on the presence of the self-fertility gene, the pollinators were divided into three groups: autofertile (inbred lines), autosterile (open-pollinating populations), and a group with partial presence of the autofertility gene. A total of 40 SSR primers were screened, out of which 34 were selected for the analysis of genetic diversity. A total of 129 different alleles were obtained, with a mean of 3.2 alleles per SSR primer. According to the genetic variability assessment, the number and percentage of polymorphic loci were maximal in pollinator NS1 and tester cms2, while the effective number of alleles, expected heterozygosity and Shannon's index were highest in pollinator EL0204. Analysis of molecular variance (AMOVA) showed that 77.34% of the total genetic variation was attributed to intra-varietal variance. The correspondence analysis results were very similar to the grouping by the neighbor-joining algorithm; the number of groups was smaller by one, because correspondence analysis merged the IFVCNS pollinators with CZ25 into one group. Pollinators FC220, FC221 and C51 formed the next group, while the self-fertile pollinators CR10 and C930-35 from USDA-Salinas were separated. On another branch were the self-sterile pollinators EL0204 and EL53 from USDA-East Lansing. The sterile testers cms1 and cms2 formed a separate group. These results confirm that SSR analysis can be successfully used to estimate genetic diversity within and among sugar beet populations. Since the tested pollinators differed in the presence of the self-fertility gene, their heterozygosity differed as well; it was lower in genotypes with fixed self-fertility genes. Since most of the tested populations were open-pollinated, and such populations rarely self-pollinate, high variability within the populations was expected. Cluster analysis grouped populations according to their origin.
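The diversity statistics reported above are simple functions of per-locus allele frequencies; a minimal sketch (the allele counts are invented for illustration):

```python
import math

def diversity_stats(allele_counts):
    """Per-locus diversity from SSR allele counts: effective number of
    alleles (Ne), expected heterozygosity (He, Nei's gene diversity)
    and Shannon's information index (I)."""
    total = sum(allele_counts)
    freqs = [c / total for c in allele_counts]
    sum_p2 = sum(p * p for p in freqs)
    he = 1.0 - sum_p2                  # expected heterozygosity
    ne = 1.0 / sum_p2                  # effective number of alleles
    shannon = -sum(p * math.log(p) for p in freqs if p > 0)
    return ne, he, shannon

# A locus with two equally frequent alleles in the sample
ne, he, i = diversity_stats([50, 50])
print(ne, he, i)
```

Averaging these values over the 34 SSR loci gives the per-pollinator summaries that singled out EL0204 in the study.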

Keywords: autofertility, genetic diversity, pollinator, SSR, sugar beet

Procedia PDF Downloads 438
326 Comparison of Rainfall Trends in the Western Ghats and Coastal Region of Karnataka, India

Authors: Vinay C. Doranalu, Amba Shetty

Abstract:

In recent times, due to climate change, there is large variation in the spatial distribution of daily rainfall even within a small region. Rainfall is one of the main climatic variables affecting the spatio-temporal patterns of water availability. The real task posed by climate change is the identification, estimation and understanding of rainfall uncertainty. This study analyzes the spatial variations and temporal trends of daily precipitation using high-resolution (0.25° x 0.25°) gridded data of the Indian Meteorological Department (IMD). For the study, 38 grid points were selected in the study area, and the daily precipitation time series over the 113-year period 1901-2013 was analyzed. Grid points were divided into two zones based on elevation and location: Low Land (exposed to the sea, low elevation / coastal region) and High Land (inland, high elevation / Western Ghats). The time series at each grid point was examined with the non-parametric Mann-Kendall test and the Theil-Sen estimator to determine the nature of the trend and the magnitude of its slope. The Pettitt-Mann-Whitney test was applied to detect the most probable change point in the trends over the period. The results reveal a remarkable monotonic trend in daily precipitation at each grid point. In general, the regional cluster analysis found an increasing precipitation trend in the shoreline region and a decreasing trend in the Western Ghats in recent years. The spatial distribution of rainfall can be partly explained by the heterogeneity in temporal trends revealed by the change-point analysis. The Mann-Kendall test shows significant variation, with weaker rainfall in the distribution over the eastern parts of the Western Ghats region of Karnataka.
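The two trend tools named above are short to state in code: the Mann-Kendall S statistic and the Theil-Sen slope, here in minimal form without the tie/variance correction used for formal significance testing (the rainfall series is invented):

```python
import statistics

def mann_kendall_s(series):
    """Mann-Kendall S: sum of signs over all ordered pairs. Positive
    for an increasing trend, negative for a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def theil_sen_slope(series):
    """Median of all pairwise slopes: a robust trend magnitude."""
    slopes = [(series[j] - series[i]) / (j - i)
              for i in range(len(series) - 1)
              for j in range(i + 1, len(series))]
    return statistics.median(slopes)

# Hypothetical annual rainfall totals (mm) at one grid point
rain = [900, 910, 905, 930, 940, 938, 955]
print(mann_kendall_s(rain), theil_sen_slope(rain))
```

In a full analysis, S is normalised by its variance to a Z score for the significance test, and the Pettitt statistic (a cumulative-rank variant of the same pairwise signs) locates the change point.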

Keywords: change point analysis, coastal region India, gridded rainfall data, non-parametric

Procedia PDF Downloads 272
325 Improving Climate Awareness and the Knowledge Related to Climate Change's Health Impacts on Medical Schools

Authors: Abram Zoltan

Abstract:

Over the past hundred years, human activities, particularly the burning of fossil fuels, have released enough carbon dioxide and other greenhouse gases to trap additional heat in the lower atmosphere and affect the global climate. Climate change affects many social and environmental determinants of health: clean air, safe drinking water, and adequate food. Our aim is to draw attention to the effects of climate change on health and the health care system. Improving climate awareness and knowledge of climate change's health impacts is essential among medical students and practicing medical doctors; in their everyday practice, they need assistance and up-to-date knowledge of how climate change can endanger human health and how to deal with these novel health problems. Our activity, based on the cooperation of several universities, aims to develop new curriculum outlines and learning materials on climate change's health impacts for medical schools. Special attention is paid to possible preventive measures against these impacts. To this end, the project plans to create new curriculum outlines and learning materials for medical students, elaborate methodological guidelines, and create training materials for medical doctors' postgraduate learning programs. The target groups of the project are medical students, educational staff of medical schools and universities, and practicing medical doctors, with special attention to general practitioners and family doctors. We surveyed various domestic and international studies on the effects of climate change and statistical estimations of its possible consequences. The health effects of climate change can be measured only approximately, by considering only a fraction of the potential health effects and assuming continued economic growth and health progress; on this basis, climate change is expected to cause about 250,000 additional deaths. We conclude that climate change is one of the most serious problems of the 21st century, affecting all populations. In the short to medium term, the health effects of climate change will be determined mainly by human vulnerability. In the longer term, the effects depend increasingly on the extent to which transformational action is taken now to reduce emissions. We can contribute to reducing environmental pollution by raising awareness and educating the population.

Keywords: climate change, health impacts, medical students, education

Procedia PDF Downloads 102
324 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends to a large extent on the flow of rivers, which carry sediments and nutrients essential for human life. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow flow or built-up sediment; the pattern formed resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediment is deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained, heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae are suitably modified, and the Energy Concept Method (ECM) is applied to predict discharges at the junction of a two-flow braided compound channel; the ECM has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed on the basis of mechanical analysis. The channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify the model, and the accuracy of the approach is also compared with the Divided Channel Method (DCM). Error analysis shows that the relative error is smaller for data sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
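For reference, the Divided Channel Method against which the ECM is benchmarked amounts to summing Manning-equation discharges over the sub-areas. A textbook sketch with invented geometry (this is the comparison baseline, not the authors' ECM formulation):

```python
def manning_discharge(area, wetted_perimeter, slope, n):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2),
    where R = A/P is the hydraulic radius."""
    r = area / wetted_perimeter
    return (1.0 / n) * area * r ** (2.0 / 3.0) * slope ** 0.5

# Sub-areas: main channel below bank-full level, floodplain region above it
subsections = [
    # (area m^2, wetted perimeter m, Manning roughness n) -- hypothetical
    (6.0, 5.0, 0.015),   # main channel
    (3.0, 8.0, 0.030),   # floodplain
]
slope = 1e-3
q_total = sum(manning_discharge(a, p, slope, n) for a, p, n in subsections)
print(q_total)
```

The DCM's weakness, which motivates energy-based corrections, is that summing sub-area discharges this way ignores the apparent shear stress exchanged across the division lines.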

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 105
323 A Simplified Method to Assess the Damage of an Immersed Cylinder Subjected to Underwater Explosion

Authors: Kevin Brochard, Herve Le Sourne, Guillaume Barras

Abstract:

The design of a submarine's hull is crucial for its operability and crew safety, but it is also complex: engineers need to balance lightness, acoustic discretion and resistance to both immersion pressure and environmental attacks. Underwater explosions represent a first-rate threat to the integrity of the hull, whose behavior needs to be properly analyzed. The presented work focuses on the development of a simplified analytical method to study the structural response of a deeply immersed cylinder subjected to an underwater explosion. This method aims to give engineers a quick estimate of the resulting damage, allowing them to simulate a large number of explosion scenarios. The present research relies on the so-called plastic string on plastic foundation model: the two-dimensional boundary value problem for a cylindrical shell is converted to an equivalent one-dimensional problem of a plastic string resting on a non-linear plastic foundation. For this purpose, equivalence parameters are defined and evaluated by making assumptions on the shape of the displacement and velocity fields in the cross-sectional plane of the cylinder. Closed-form solutions for the deformation and velocity profiles of the shell are obtained for explosive loading and compare well with numerical and experimental results. However, the plastic-string model has not yet been adapted to a cylinder in immersion subjected to explosive loading: the effects of fluid-structure interaction have to be taken into account. Moreover, when an underwater explosion occurs, several pressure waves, called secondary waves, are emitted by the gas bubble pulsations; the corresponding loads may produce significant damage to the cylinder and must also be accounted for. The analytical developments carried out to solve this problem of a shock wave impacting a cylinder, considering fluid-structure interaction, are presented for an unstiffened cylinder. The resulting deformations are compared to experimental and numerical results for different shock factors and different standoff distances.
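The "shock factor" used to scale such trials is, to our understanding, conventionally the square root of the charge mass divided by the standoff distance (variants add a geometric term for the incidence angle); a one-line sketch with illustrative numbers:

```python
import math

def shock_factor(charge_kg_tnt, standoff_m):
    """Conventional underwater-explosion severity parameter: sqrt(W)/R,
    with W the TNT-equivalent charge mass (kg) and R the standoff (m).
    Hull/keel shock factor variants multiply by an angle-dependent term."""
    return math.sqrt(charge_kg_tnt) / standoff_m

# Illustrative: a 100 kg TNT-equivalent charge at 20 m standoff
print(shock_factor(100.0, 20.0))
```

Holding the shock factor constant while varying charge and standoff is how test series of the kind compared here are parameterized.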

Keywords: immersed cylinder, rigid plastic material, shock loading, underwater explosion

Procedia PDF Downloads 294
322 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

In older people, Alzheimer’s disease (AD) is turning out to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β-protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β-secretase (BACE) and γ-secretase (GS). In the treatment of AD, γ-secretase modulators (GSMs) are therefore potential disease-modifying agents, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ-secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-likeness ADMET filter, QSAR, and molecular docking analyses, has been utilized to identify novel γ-secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained from the in silico studies have been validated by in vivo analysis. In the first step, behavioral assessment was carried out using a scopolamine-induced amnesia methodology. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile.
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity through inhibition of the acetylcholinesterase (AChE) enzyme, as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.
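The virtual-screening step described above combines a QSAR model with docking scores to rank candidate analogs before synthesis. A minimal sketch of the QSAR part, using a linear least-squares model; all descriptor values, activities, and analog names here are hypothetical placeholders, not data from the study:

```python
import numpy as np

# Hypothetical training data: each row holds molecular descriptors
# (e.g. logP, molecular weight / 100, polar surface area / 100) for one analog.
X = np.array([
    [2.1, 3.4, 0.8],
    [1.7, 3.1, 0.9],
    [2.8, 3.9, 0.6],
    [1.2, 2.8, 1.1],
])
# Hypothetical measured activities (pIC50) for the same analogs.
y = np.array([6.2, 5.9, 6.8, 5.1])

# Fit a linear QSAR model y ≈ X·w + b via least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_activity(descriptors):
    """Predict pIC50 for a new analog from its descriptor vector."""
    return float(np.dot(np.append(descriptors, 1.0), coeffs))

# Rank two hypothetical new analogs by predicted activity.
candidates = {"analog_a": [2.5, 3.6, 0.7], "analog_b": [1.4, 2.9, 1.0]}
ranking = sorted(candidates, key=lambda k: predict_activity(candidates[k]), reverse=True)
print(ranking)
```

In practice, the top-ranked candidates from such a model would go on to docking and, as in the abstract, in vivo validation.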

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 230
321 Association of the Frequency of Dairy Product Consumption by Students with Health Parameters

Authors: Radyah Ivan, Khanferyan Roman

Abstract:

Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid, and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. A deficiency of milk and dairy products affects the main health parameters of various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition, and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female; average age 20.4±2.2 and 21.9±1.7 y.o., respectively) took part in the cross-sectional study. Anthropometric measurements were taken, and BMI and body composition were estimated by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food-frequency questionnaire. The questionnaire data showed that only 11% of respondents consume milk daily, 5% cottage cheese, 4% and 1% natural and filled fermented milk products, respectively, and 4% hard cheese. The study also showed that about 16% of the respondents did not consume milk at all over the past month, about one third did not consume cottage cheese, 22% natural sour-milk products, and 18% sour-milk products with various fillers; 9% and 26% of respondents did not consume hard and pickled cheeses, respectively. Gender differences in consumer preferences were revealed: female students were less likely than male students to consume cream, sour cream, soft cheese, and milk.
Among female students, the prevalence of overweight was higher (25%) than among male students (19%). An inverse relationship was demonstrated between dairy product consumption and BMI and body composition parameters (r=-0.61 and r=-0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low or rare consumption of dairy products and the main indicators of physical activity and health.
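The anthropometric side of the analysis above rests on two simple computations: BMI from weight and height, and a Pearson correlation between intake frequency and body parameters. A sketch of both, with illustrative numbers rather than the study's data:

```python
import numpy as np

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative data: weekly dairy servings and BMI for six hypothetical students.
servings_per_week = np.array([2, 4, 6, 8, 10, 12])
bmi_values = np.array([27.5, 26.0, 24.8, 23.9, 23.1, 22.6])

# Pearson correlation coefficient between intake frequency and BMI;
# a negative r indicates an inverse relationship, as reported in the study.
r = np.corrcoef(servings_per_week, bmi_values)[0, 1]
print(round(r, 2))
```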

Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index

Procedia PDF Downloads 12
320 A Methodology of Using Fuzzy Logic and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines the method of developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions, on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped reduce the error in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g. negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee results more accurate than those of the full life cycle methodology, it provides a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
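A zero-order Sugeno-type FIS of the kind compared above can be sketched in a few lines: rule firing strengths are computed from input membership functions, and the output is a weighted average of rule constants. The membership functions, rule consequents, and input ranges below are hypothetical placeholders, not the calibrated model from the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_gwp(irradiation, efficiency):
    """Zero-order Sugeno inference: weighted average of rule constants."""
    # Input membership degrees (ranges are hypothetical).
    low_irr = tri(irradiation, 800, 1200, 1600)    # kWh/m2/yr
    high_irr = tri(irradiation, 1400, 1800, 2200)
    low_eff = tri(efficiency, 0.10, 0.14, 0.18)    # module efficiency
    high_eff = tri(efficiency, 0.16, 0.20, 0.24)

    # Rules: (firing strength with min as AND, constant consequent in gCO2-eq/kWh).
    rules = [
        (min(low_irr, low_eff), 80.0),    # low irradiation, low efficiency -> high GWP
        (min(low_irr, high_eff), 55.0),
        (min(high_irr, low_eff), 50.0),
        (min(high_irr, high_eff), 30.0),  # best case -> low GWP
    ]
    total_weight = sum(w for w, _ in rules)
    if total_weight == 0:
        return None  # input lies outside the calibrated range
    return sum(w * z for w, z in rules) / total_weight

print(estimate_gwp(1500, 0.17))
```

In ANFIS, the membership function parameters and rule consequents hardcoded here would instead be tuned against calibration data; the out-of-range `None` branch mirrors the abstract's caveat that such models are only trustworthy within the calibrated range.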

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 140
319 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels increase abruptly, especially in winter, one can expect massive extra loads on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact motivates the numerical analysis of a fluid-structure interaction application comprising a flexible, slender-shaped membrane and free-surface water flow, in which accelerated heavy flotsam approaches the membrane. In this context, the behavior of the flexible membrane and its interaction with the moving flotsam is analyzed with the explicit and implicit finite-element solvers available in the Abaqus products of SIMULIA. The response of the free-surface water flow to the moving structures, on the other hand, is investigated using the finite-volume solver STAR-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and an effective partitioned strategy in the form of an implicit coupling algorithm allow the partitioned domains to be interconnected robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.
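The implicit partitioned strategy described above iterates between the fluid and structure solvers within each time step until the interface values stop changing. A toy sketch of that inner loop, with single-degree-of-freedom linear stand-ins for the two solvers and constant under-relaxation (the study itself couples full Abaqus and STAR-CCM+ models via the Co-Simulation Engine):

```python
def fluid_solver(displacement):
    """Stand-in for the flow solver: interface force for a given membrane
    displacement (hypothetical linear pressure response)."""
    return 100.0 - 40.0 * displacement

def structure_solver(force):
    """Stand-in for the structural solver: membrane displacement under the
    interface force (hypothetical linear stiffness k = 50)."""
    return force / 50.0

def implicit_coupling_step(omega=0.5, tol=1e-10, max_iter=100):
    """Fixed-point coupling iteration with constant under-relaxation omega."""
    d = 0.0  # initial guess for the interface displacement
    for i in range(max_iter):
        f = fluid_solver(d)            # fluid domain reacts to the displacement
        d_new = structure_solver(f)    # structure reacts to the fluid force
        residual = abs(d_new - d)      # interface residual for this iteration
        d = d + omega * (d_new - d)    # under-relaxed update for stability
        if residual < tol:
            return d, i + 1            # converged interface displacement
    raise RuntimeError("coupling iterations did not converge")

d_converged, iterations = implicit_coupling_step()
print(round(d_converged, 6), iterations)
```

Production FSI codes typically replace the constant omega with dynamic (e.g. Aitken) relaxation, but the structure of the loop, exchange, update, convergence check, is the same.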

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 364
318 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. The blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict algae concentrations in the ocean with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. The deep learning model was trained by backpropagation. The established method was tested and compared with the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
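The two-stage pipeline described above, CNN feature extraction feeding an ANN regressor, can be sketched in plain NumPy. The patch, filter, and network weights below are random placeholders standing in for a trained GOCI model; only the forward-pass structure is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Minimal 2D 'valid' convolution used as the feature-extraction stage."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic stand-in for one GOCI band patch (e.g. water-leaving radiance at 443 nm).
patch = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))  # a single random convolution filter

# CNN stage: convolution, ReLU activation, flatten to a feature vector.
features = np.maximum(conv2d_valid(patch, kernel), 0.0).ravel()

# ANN stage: one hidden layer mapping the features to a concentration estimate.
w1 = rng.standard_normal((features.size, 4)) * 0.1
w2 = rng.standard_normal(4) * 0.1
hidden = np.maximum(features @ w1, 0.0)
algae_mg_m3 = float(hidden @ w2)  # untrained output, illustrative only
print(features.shape, algae_mg_m3)
```

In the study, the weights of both stages are fitted by backpropagation against measured concentrations; here they are random, so the printed value carries no physical meaning.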

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 163
317 Flexible Design Solutions for Complex Free-Form Geometries Aimed at Optimizing Performance and Resource Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools such as generative design (GD) and digital fabrication (DF), pressing problems of resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each with limitations that make it applicable only in certain cases. The paper presents the stages of the design process and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
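The generate-then-select loop described above, an algorithm produces many candidate forms and a criterion picks the best, can be sketched with one of the procedures the abstract names, a Lindenmayer-system rewriter, plus a toy fitness function. The rewriting rules and the scoring are hypothetical illustrations, not the authors' shape grammar:

```python
import random

def l_system(axiom, rules, iterations):
    """Rewrite the axiom string repeatedly using the production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def generate_candidates(n, seed=42):
    """Generative step: produce n candidate 'forms' with randomly chosen rules."""
    rng = random.Random(seed)
    branch_rules = ["F[+F]F", "F[-F]F", "F[+F][-F]"]  # turtle-style branching rules
    return [l_system("F", {"F": rng.choice(branch_rules)}, iterations=3)
            for _ in range(n)]

def score(form):
    """Toy selection criterion: reward branching ('[') but penalize length,
    standing in for a technical/aesthetic fitness measure."""
    return form.count("[") - 0.05 * len(form)

# Generate many forms, then select the optimal one by the criterion.
candidates = generate_candidates(10)
best = max(candidates, key=score)
print(best.count("["), len(best))
```

In the actual workflow the candidates would be 3D surfaces evaluated against structural or aesthetic requirements, but the generate/evaluate/select structure is the same.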

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 349