Search results for: half step
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4245

915 Methylation Profiling and Validation of Candidate Tissue-Specific Differentially Methylated Regions for Identification of Human Blood, Saliva, Semen and Vaginal Fluid and Its Application in Forensics

Authors: Meenu Joshi, Natalie Naidoo, Farzeen Kader

Abstract:

Identification of body fluids is an essential step in forensic investigation to aid in crime reconstruction. Tissue-specific differentially methylated regions (tDMRs) of the human genome can be targeted as biomarkers to differentiate between body fluids. The present study was undertaken to establish the methylation status of potential tDMRs in blood, semen, saliva, and vaginal fluid by using methylation-specific PCR (MSP) and bisulfite sequencing (BS). The methylation statuses of 3 potential tDMRs in the genes ZNF282, PTPRS, and HPCAL1 were analysed in 10 samples of each body fluid. With MSP analysis, the ZNF282 and PTPRS tDMRs displayed semen-specific hypomethylation while the HPCAL1 tDMR showed saliva-specific hypomethylation. With quantitative analysis by BS, the ZNF282 tDMR showed a statistically significant difference in overall methylation between semen and all other body fluids as well as at individual CpG sites (p < 0.05). To evaluate the effect of environmental conditions on the stability of methylation profiles of the ZNF282 tDMR, five samples of each body fluid were subjected to five different simulated forensic conditions (dry at room temperature, wet in a desiccator, outside on the ground, sprayed with alcohol, and sprayed with bleach) for 50 days. Vaginal fluid showed the highest DNA recovery under all conditions while semen had the least DNA quantity. Under the outside-on-the-ground condition, all body fluids except semen showed a decrease in methylation level; the decrease was significant for saliva. A statistically significant difference was observed between saliva and semen (p < 0.05) for the outside-on-the-ground condition. No differences in methylation level were observed for the ZNF282 tDMR under any condition for vaginal fluid samples. Thus, in the present study, the ZNF282 tDMR has been identified as a novel and stable semen-specific hypomethylation marker.

Keywords: body fluids, bisulphite sequencing, forensics, tDMRs, MSP

Procedia PDF Downloads 155
914 Contributions of Women to the Development of Hausa Literature as an Effective Means of Public Enlightenment: The Case of a 19th Century Female Scholar Maryam Bint Uthman Ibn Foduye

Authors: Balbasatu Ibrahim

Abstract:

In the 19th century, an Islamic revolution known as the Sokoto Jihad took place in Hausaland and led to the establishment of the Sokoto Caliphate in 1804 under the leadership of the famous Sheikh Uthman Bn Fodiye. Before the Jihad movement in Hausaland (now Northern Nigeria), women were left in ignorance and were used and dumped like old kitchen utensils. The Sheikh and his followers did their best to actualise women’s right to education by using their female family members, who were highly educated and renowned scholars, as role models. After the Jihad, with the establishment of an Islamic state, the women scholars initiated different strategies to teach the generality of women. The most efficient strategy was the ‘Yantaru Movement founded by Nana Asma’u, the daughter of Sheikh Uthman Bn Fodiye, in collaboration with her sisters around 1840. The ‘Yantaru Movement is a women’s educational movement aimed at enlightening women in rural and urban areas, and it helped in massively mobilizing women for education. In addition to town pupils, women from villages and from the nooks and crannies of metropolitan Sokoto participated in the movement in the search for knowledge; thus the birth of the ‘Yantaru system of women’s education. The ‘Yantaru operates a three-tier system at the village, town, and metropolitan-capital levels of Sokoto. ‘Yantaru functions include imparting knowledge to elderly women and young girls and running step-down enlightenment programmes on returning home. The most effective medium of communication in the ‘Yantaru Movement was poetry: scholars composed educational poems which were memorized by the ‘Yantaru, who on return recited them to fellow women at home. Through this system, many women were educated. This paper translates and examines one such educative poem, written in 1855 by the second leader of the ‘Yantaru Movement, Maryam Bint Uthman Bn Fodiye.

Keywords: English, Hausa language, public enlightenment, Maryam Bint Uthman Ibn Foduye

Procedia PDF Downloads 353
913 Determinants of Walking among Middle-Aged and Older Overweight and Obese Adults: Demographic, Health, and Socio-Environmental Factors

Authors: Samuel N. Forjuoh, Marcia G. Ory, Jaewoong Won, Samuel D. Towne, Suojin Wang, Chanam Lee

Abstract:

The public health burden of obesity is well established, as is the influence of physical activity (PA) on the health and wellness of individuals who are obese. This study examined the influence of selected demographic, health, and socio-environmental factors on the walking behaviors of middle-aged and older overweight and obese adults. Online and paper surveys were administered to community-dwelling overweight and obese adults aged ≥ 50 years residing in four cities in central Texas and seen by a family physician in the primary care clinic from October 2013 to June 2014. Descriptive statistics were used to characterize participants’ anthropometric and demographic data as well as their health conditions and walking, socio-environmental, and more broadly defined PA behaviors. Then, Pearson chi-square tests were used to assess differences between participants who reported walking the recommended ≥ 150 minutes for any purpose in a typical week, as a proxy for meeting the U.S. Centers for Disease Control and Prevention’s PA guidelines, and those who did not. Finally, logistic regression was used to predict walking the recommended ≥ 150 minutes for any purpose, controlling for covariates. The analysis was conducted in 2016. Of the total sample (n=253, survey response rate of 6.8%), the majority were non-Hispanic white (81.7%), married (74.5%), male (53.5%), and reported an annual household income of ≥ $50,000 (65.7%). Approximately half were employed (49.6%) or had at least a college degree (51.8%). Slightly more than 1 in 5 (n=57, 22.5%) reported walking the recommended ≥ 150 minutes for any purpose in a typical week. The strongest predictors of walking the recommended ≥ 150 minutes for any purpose in a typical week in adjusted analysis were related to education and a highly favorable perception of the neighborhood environment. Compared to those with a high school diploma or some college, participants with at least a college degree were five times as likely to walk the recommended ≥ 150 minutes for any purpose (OR=5.55, 95% CI=1.79-17.25). Walking the recommended ≥ 150 minutes for any purpose was significantly associated with participants who disagreed that there were many distracted drivers (e.g., on the cell phone while driving) in their neighborhood (OR=4.08, 95% CI=1.47-11.36) and those who agreed that there are sidewalks or protected walkways (e.g., walking trails) in their neighborhood (OR=3.55, 95% CI=1.10-11.49). Those employed were less likely to walk the recommended ≥ 150 minutes for any purpose compared to those unemployed (OR=0.31, 95% CI=0.11-0.85), as were those who reported some difficulty walking for a quarter of a mile (OR=0.19, 95% CI=0.05-0.77). Other socio-environmental factors, such as having care-giver responsibilities for elders, someone to walk with, or a dog in the household, as well as Walk Score™, were not significantly associated with walking the recommended ≥ 150 minutes for any purpose in a typical week. Neighborhood perception appears to be an important factor associated with the walking behaviors of middle-aged and older overweight and obese individuals. Enhancing the neighborhood environment (e.g., providing walking trails) may promote walking among these individuals.
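
As a sketch of the kind of adjusted logistic model reported here (the variable names and the synthetic data are assumptions, not the study's dataset):

```python
# A minimal sketch: predict walking >= 150 min/week from a few binary covariates
# and report odds ratios with 95% confidence intervals. Illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 253
df = pd.DataFrame({
    "college_degree":   rng.integers(0, 2, n),
    "employed":         rng.integers(0, 2, n),
    "sidewalks_nearby": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only.
logit = (-1.5 + 1.2 * df["college_degree"]
         + 0.9 * df["sidewalks_nearby"] - 0.6 * df["employed"])
df["walks_150min"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["college_degree", "employed", "sidewalks_nearby"]])
fit = sm.Logit(df["walks_150min"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)            # exponentiated coefficients = ORs
conf_int = np.exp(fit.conf_int())           # 95% CI bounds on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```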

Keywords: determinants of walking, obesity, older adults, physical activity

Procedia PDF Downloads 252
912 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and because it is a step function, there can be different false positive rates for a given true positive rate and vice versa. Moreover, since the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions or creating a probability distribution by fitting the specified distribution to the data, and using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimation based on a boundary-corrected kernel function and to compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study to compare the performance of the different methods for different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than that of the binormal model when the underlying samples were in fact generated from the normal distribution.
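
To make the contrast concrete, the following is a minimal sketch (not the method of the paper; the sample sizes, distributions, and rule-of-thumb bandwidth are assumptions) of an empirical ROC curve versus a Gaussian-kernel-smoothed estimate:

```python
# Empirical (step-function) ROC vs. a kernel-smoothed ROC on synthetic scores.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)   # non-diseased test scores (assumed)
diseased = rng.normal(1.0, 1.0, 50)  # diseased test scores (assumed)

thresholds = np.linspace(-4, 5, 400)

# Empirical ROC: step-function estimates of FPR and TPR at each threshold.
fpr_emp = [(healthy > t).mean() for t in thresholds]
tpr_emp = [(diseased > t).mean() for t in thresholds]

# Kernel-smoothed ROC: replace the empirical survival functions with
# Gaussian-kernel-smoothed versions (Silverman rule-of-thumb bandwidths).
def kernel_survival(sample, t, bw):
    return norm.sf((t - sample) / bw).mean()

bw_h = 1.06 * healthy.std(ddof=1) * len(healthy) ** (-1 / 5)
bw_d = 1.06 * diseased.std(ddof=1) * len(diseased) ** (-1 / 5)
fpr_smooth = [kernel_survival(healthy, t, bw_h) for t in thresholds]
tpr_smooth = [kernel_survival(diseased, t, bw_d) for t in thresholds]

# Trapezoidal AUC for both estimates (FPR decreases with t, hence the sign flip).
auc_emp = -np.trapz(tpr_emp, fpr_emp)
auc_smooth = -np.trapz(tpr_smooth, fpr_smooth)
print(f"empirical AUC ~ {auc_emp:.3f}, kernel-smoothed AUC ~ {auc_smooth:.3f}")
```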

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 146
911 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e., “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
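
As an illustration only, the sketch below re-creates the core idea in Python rather than in C# LINQ expression trees: scoring formulas as trees, a pre-order flattening, and a point-mutation operator. All names, the toy applicant, and the operator set are assumptions, not the authors' implementation.

```python
# Expression trees as nested lists: [op, child, ...]; leaves are variables or constants.
import math
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}
FUNCS = {"sin": math.sin, "abs": abs}

def random_tree(variables, depth=3):
    """Grow a random expression tree; leaves hold a variable name or a constant."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(variables + [round(random.uniform(-2, 2), 2)])
    if random.random() < 0.25:
        return [random.choice(list(FUNCS)), random_tree(variables, depth - 1)]
    return [random.choice(list(OPS)),
            random_tree(variables, depth - 1), random_tree(variables, depth - 1)]

def evaluate(node, row):
    """Evaluate a tree against one credit applicant (a dict of properties)."""
    if isinstance(node, list):
        op, *children = node
        args = [evaluate(c, row) for c in children]
        return OPS[op](*args) if op in OPS else FUNCS[op](args[0])
    return row[node] if isinstance(node, str) else node

def flatten(node, out=None):
    """Pre-order traversal used as the flat genome for mutation/crossover."""
    out = [] if out is None else out
    out.append(node)
    if isinstance(node, list):
        for child in node[1:]:
            flatten(child, out)
    return out

def mutate(tree, variables):
    """Point mutation: replace one randomly chosen subtree with a fresh one."""
    nodes = flatten(tree)
    target = random.choice(nodes[1:]) if len(nodes) > 1 else tree
    replacement = random_tree(variables, depth=2)
    def rebuild(node):
        if node is target:
            return replacement
        if isinstance(node, list):
            return [node[0]] + [rebuild(c) for c in node[1:]]
        return node
    return rebuild(tree)

applicant = {"age": 42, "loan_duration": 24, "income": 3.1}  # toy example
formula = random_tree(list(applicant))
print("score:", evaluate(formula, applicant))
print("mutated score:", evaluate(mutate(formula, list(applicant)), applicant))
```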

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 112
910 Evaluation and Control of Cracking for Bending Reinforced One-Way Concrete Voided Slab with Plastic Hollow Inserts

Authors: Mindaugas Zavalis

Abstract:

Analysis of experimental test data on bending of one-way reinforced concrete slabs from various scientific articles revealed that voided slabs with a grid of hollow plastic inserts inside have smaller mechanical and physical parameters compared to continuous cross-section slabs (solid slabs). The reinforced concrete slab is negatively influenced by the hollow plastic inserts, which form a grid of voids in the middle of the cross-sectional area of the slab. The formed grid of voids reduces the slab’s stiffness, which influences the slab’s serviceability parameters, such as deflection and cracking. Primary investigation of the data established during the experiments illustrates that cracks occur faster in the tensile surface of the voided slab under bending compared to the solid slab under bending. This means that the cracking bending moment for the voided slab is smaller than for the solid slab, and the reduction can vary in the range of 14–40%. The reduction of resistance to cracking can be controlled by changing many factors: the shape of the plastic hollow insert, the insert height, the spacing between inserts, the use of prestressed reinforcement, the diameter of the reinforcement bar, the slab effective depth, the bottom concrete cover thickness, the effective cross-section of the concrete area around the reinforcement, etc. The mentioned parameters are used to evaluate crack width and crack spacing, but the existing analytical calculation methods for cracking evaluation of voided slabs with plastic inserts are not very exact, and the results of the cracking evaluation in this paper are higher than the results of the analyzed experiments. Therefore, analytical calculations were carried out against the experimental bending tests of voided reinforced concrete slabs with hollow plastic inserts in order to find and propose corrections for the evaluation of cracking of reinforced concrete voided slabs with hollow plastic inserts.

Keywords: voided slab, cracking, hollow plastic insert, bending, one-way reinforced concrete, serviceability

Procedia PDF Downloads 63
909 Sulfate Reducing Bacteria Based Bio-Electrochemical System: Towards Sustainable Landfill Leachate and Solid Waste Treatment

Authors: K. Sushma Varma, Rajesh Singh

Abstract:

Non-engineered landfills cause serious environmental damage due to toxic emissions and the mobilization of persistent pollutants, organic and inorganic contaminants, and soluble metal ions. The available treatment technologies for landfill leachate and solid waste are not effective from an economic, environmental, and social standpoint. The present study assesses the potential of a bioelectrochemical system (BES) integrated with sulfate-reducing bacteria (SRB) in the sustainable treatment and decontamination of landfill wastes. For this purpose, solid waste and landfill leachate collected from different landfill sites were evaluated for long-term treatment using the integrated SRB-BES anaerobic designed bioreactors after pre-treatment. Based on periodic gas composition analysis, physicochemical characterization of the leachate and solid waste, and metal concentration determination, the present system demonstrated significant improvement in volumetric hydrogen production by suppressing methanogenesis. High removal percentages of Be, Cr, Pb, Cd, Sb, Ni, COD, and sTOC were observed. This mineralization can be attributed to the synergistic effect of ammonia-assisted pre-treatment complexation and microbial sulphide formation. Despite being amended with 0.1 N ammonia, the NO₃⁻ level of the treated leachate was found to be reduced along with SO₄²⁻. This integrated SRB-BES system can be recommended as an eco-friendly solution for landfill reclamation. The BES-treated solid waste was evidently more stabilized, as shown by a five-fold increase in surface area, and is potentially useful for leachate immobilization and bio-fortification of agricultural fields. The vector arrangement and magnitude indicated similar treatment behaviour, with differences in magnitude, for both leachate and solid waste. These findings support the efficacy of SRB-BES in treating landfill leachate and solid waste sustainably, inching a step closer to the sustainable development goals. The approach uses low-cost treatment and anaerobic SRB adapted to landfill sites. This technology may prove to be a sustainable treatment strategy upon scaling up, as its outcomes are two-pronged: landfill waste treatment and energy recovery.

Keywords: bio-electrochemical system, leachate /solid waste treatment, landfill leachate, sulfate-reducing bacteria

Procedia PDF Downloads 96
908 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study had three neural networks, one for number segmentation, one for number detection, and one for number recognition, all of which were coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; this is why it had 16 outputs. They were arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans for the first dark pixel; from there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network was trained to output “number not found” until the bounds were met and “number found” thereafter. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of the numbers 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters.
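
A minimal sketch of the recognition network alone is shown below (Keras/TensorFlow assumed; the layer sizes and hyperparameters are illustrative, not taken from the study):

```python
# Standard CNN for the recognition stage: 28x28 inputs, 10 outputs (digits 0-9).
import tensorflow as tf

def build_recognition_net():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
    ])

# Train on MNIST, as in the study (the training settings here are assumptions).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = build_recognition_net()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```

The detection network would follow the same pattern with two outputs (detected / not detected), and the segmentation network with sixteen directional outputs.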

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 150
907 Phenolic Composition and Antioxidant Activity of Sorbus L. Fruits and Leaves

Authors: Raudone Lina, Raudonis Raimondas, Gaivelyte Kristina, Pukalskas Audrius, Janulis Valdimaras, Viskelis Pranas

Abstract:

Sorbus L. species are widely distributed in the Northern hemisphere and have been used for medicinal purposes in various traditional medicine systems and as food ingredients. Various Sorbus L. raw materials (fruits, leaves, inflorescences, barks) possess diuretic, anti-inflammatory, hypoglycemic, anti-diarrheal, and vasoprotective activities. Phenolics, to which the main pharmacological activities are attributed, are compounds of interest due to their notable antioxidant activity. The aim of this study was to determine the antioxidant profiles of fruits and leaves of selected Sorbus L. species (S. anglica, S. aria f. latifolia, S. arranensis, S. aucuparia, S. austriaca, S. caucasica, S. commixta, S. discolor, S. gracilis, S. hostii, S. semi-incisa, S. tianschanica) and to identify the phenolic compounds with a potent contribution to antioxidant activity. Twenty-two constituents were identified in the Sorbus L. species using ultra-high-performance liquid chromatography coupled to quadrupole and time-of-flight mass spectrometers (UPLC–QTOF–MS). The reducing activity of individual constituents was determined using high-performance liquid chromatography (HPLC) coupled to a post-column FRAP assay. The significantly greatest trolox equivalent values, corresponding to up to 45% of the contribution to antioxidant activity, were assessed for neochlorogenic and chlorogenic acids, which were identified as markers of antioxidant activity in the samples of leaves and fruits. The characteristic patterns of the antioxidant profiles obtained using the HPLC post-column FRAP assay depend significantly on the specific Sorbus L. species and raw material and are suitable for equivalency research of Sorbus L. fruits and leaves. Selecting species and target plant organs with the richest phenolic composition and the most strongly expressed antioxidant power is the first step in further research on standardized extracts.

Keywords: FRAP, antioxidant, phenolic, Sorbus L., chlorogenic acid, neochlorogenic acid

Procedia PDF Downloads 449
906 A Simulation-Based Method for Evaluation of Energy System Cooperation between Pulp and Paper Mills and a District Heating System: A Case Study

Authors: Alexander Hedlund, Anna-Karin Stengard, Olof Björkqvist

Abstract:

A step towards reducing greenhouse gases and energy consumption is energy system cooperation between several industries. This work is based on a case study on the integration of pulp and paper mills with a district heating system in Sundsvall, Sweden. Present research shows that it is possible to make a significant reduction in the electricity demand of the mechanical pulping process. However, the profitability of the efficiency measures could be an issue, as the excess steam recovered from the refiners decreases with the electricity consumption. A consequence is that the fuel demand for steam production will increase. If the fuel price is similar to the electricity price, this would reduce the profit of such a project. If the paper mill can be integrated with a district heating system, it is possible to upgrade excess heat from a nearby kraft pulp mill to process steam via the district heating system in order to avoid the additional fuel need. The concept is investigated using a simulation model describing both the mass and energy balance and the operating margin. Three scenarios were analyzed: reference, electricity reduction, and energy substitution. The simulations show that the total input to the system is lowest in the energy substitution scenario. Additionally, in the energy substitution scenario the steam from the incineration boiler covers not only the steam shortage but also a part of the steam otherwise produced by the biofuel boiler; the cooling tower connected to the incineration boiler is no longer needed, and the excess heat can cover the whole district heating load throughout the year. The study shows a substantial economic advantage if all stakeholders act together as one system. However, costs and benefits are unequally shared between the actors. This means that there is a need for new business models in order to share the system costs and benefits.

Keywords: energy system, cooperation, simulation method, excess heat, district heating

Procedia PDF Downloads 224
905 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operation schedule of decentralized power and heat generators and storage systems. This includes linear and mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first procedure combines n similar installations into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant, which reduces the number of decision variables per time step and the complexity of the problem to be solved by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a higher or lower temporal resolution of the optimization. Both approaches are examined with regard to the resulting calculation time as well as optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation.
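
The aggregation idea can be sketched as follows (a minimal PuLP example; the plant data, costs, and demand profile are assumptions, not from the paper). Only the aggregated first stage is shown; disaggregation would be a second optimization.

```python
# n identical CHP units collapsed into one aggregated unit whose capacity bounds
# are scaled by n, so each time step needs one decision variable instead of n.
import pulp

n_plants = 5                 # identical CHP units to aggregate (assumed)
p_max = 2.0                  # MW electrical output per unit (assumed)
cost = 40.0                  # EUR per MWh produced (assumed)
demand = [6.0, 7.5, 4.0]     # MW demand per time step (assumed)

model = pulp.LpProblem("aggregated_vpp_dispatch", pulp.LpMinimize)

# One aggregated variable per time step instead of n_plants variables.
p_agg = [pulp.LpVariable(f"p_agg_{t}", lowBound=0, upBound=n_plants * p_max)
         for t in range(len(demand))]

model += pulp.lpSum(cost * p for p in p_agg)             # total production cost
for t, d in enumerate(demand):
    model += p_agg[t] >= d, f"cover_demand_{t}"           # meet demand each step

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(p) for p in p_agg])

# In the second optimization, p_agg[t] would be split back across the n_plants
# individual units subject to their own constraints.
```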

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 169
904 The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of College Freshman Students

Authors: Lota Largavista

Abstract:

This study, entitled ‘The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of Freshman College Writers’, examined essays written by college students. It seeks to examine the organizational structure and development features of the essays and to describe their defining characteristics, the linguistic elements at both the macrostructural and microstructural discourse levels, and the types of textual and interpersonal metadiscourse markers that are employed in order to negotiate meanings with prospective readers. The frameworks used to analyze the essays include Toulmin’s (1984) model for argument structure; Olson’s (2003) three-part essay structure; Halliday and Matthiessen’s (2004) notions of thematic structure, as presented in Herriman (2011); Danes’ (1974) thematic progression or method of development; Halliday’s (2004) concept of grammatical and lexical cohesion; Hyland’s (2005) metadiscourse strategies; and Chung and Nation’s (2003) four-step scale for technical vocabulary. This descriptive study analyzes qualitatively and quantitatively how freshman students generally express themselves in their written compositions. Coding of units was done to determine what linguistic features are present in the essays. Findings revealed that students’ expository essays follow a three-part structure containing all three moves: the Introduction, the Body, and the Conclusion. Stance assertion, stance support, and emerging moves/strategies were found to be employed in the essays. Students have more marked themes in the essays and also prefer constant theme progression as their method of development. The analysis of salient linguistic elements reveals frequently used cohesive devices and metadiscoursal strategies. Based on the findings, an instructional learning plan is proposed. This plan is characterized by a genre approach that focuses on expository and linguistic conventions.

Keywords: metadiscourse, organization, theme progression, structure

Procedia PDF Downloads 232
903 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the gains in scientific rigor and accuracy that quantitative mapping could bring. Therefore, in this paper an effort has been made to provide a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping has been proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. Then, the research transforms the radiant mapping from a polygon into a data point matrix, because polygons are hard to engage in design formation. Next, another script is created to select the main green spaces from the road network based on the criterion of radiant intensity and to connect the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to realize the optimization of the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. The parametric link between mapping and planning will bring about more accurate, objective, and scientific planning.
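
The core move, turning a radiant mapping into a data-point matrix and selecting cells by a radiant-intensity criterion, might look like the following outside of Grasshopper (a minimal numpy sketch; the grid size, units, and thresholds are assumptions):

```python
# Convert a radiant map into a cell matrix and pick low-intensity candidate cells.
import numpy as np

rng = np.random.default_rng(1)
radiant = rng.uniform(200, 900, size=(20, 20))   # kWh/m2 per grid cell (assumed)

threshold = np.quantile(radiant, 0.15)           # keep the 15% least irradiated cells
green_mask = radiant <= threshold

# The centroids of the selected areas could seed a green corridor; here we simply
# list the selected cell coordinates and their mean intensity.
cells = np.argwhere(green_mask)
print(f"{len(cells)} candidate cells, mean intensity "
      f"{radiant[green_mask].mean():.1f}")

# A control parameter (corridor width in cells, assumed) then adjusts the corridor
# form: lower radiant intensity -> wider corridor in this toy rule.
corridor_width = np.clip(3 - radiant[green_mask] / 300, 1, 3).round()
print("corridor widths:", np.unique(corridor_width))
```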

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 136
902 Humins: From Industrial By-Product to High Value Polymers

Authors: Pierluigi Tosi, Ed de Jong, Gerard van Klink, Luc Vincent, Alice Mija

Abstract:

During the last decades, renewable and low-cost resources have attracted increasing interest. Carbohydrates can be derived from lignocellulosic biomass, which is an attractive option since it represents the most abundant carbon source available in nature. Carbohydrates can be converted into a plethora of industrially relevant compounds, such as 5-hydroxymethylfurfural (HMF) and levulinic acid (LA), via acid-catalyzed dehydration of sugars with mineral acids. Unfortunately, these acid-catalyzed conversions suffer from the unavoidable formation of highly viscous, heterogeneous, polydisperse carbon-based materials known as humins. This black-colored, low-value by-product is a complex mixture of macromolecules built by random covalent condensations of the several compounds present during the acid-catalyzed conversion. The humins molecular structure is still under investigation but appears to be based on a network of furanic rings linked by aliphatic chains and decorated by several reactive moieties (ketones, aldehydes, hydroxyls, …). Despite decades of research, there is currently no way to avoid humins formation. The key to enhancing the economic viability of carbohydrate conversion processes is, therefore, increasing the economic value of the humins by-product. Herein, new humins-based polymeric materials are presented that can be prepared starting from the raw by-product by thermal treatment, without any purification or pretreatment step. Humins foams can be produced by controlling key reaction parameters, obtaining polymeric porous materials with designed porosity, density, thermal and electrical conductivity, chemical and electrical stability, carbon content, and mechanical properties. Physico-chemical properties can be enhanced by modifying the starting raw material or by adding different species during the polymerization. A comparison of the properties of different compositions will be presented, along with tested applications. The authors gratefully acknowledge the European Community for financial support through the Marie-Curie H2020-MSCA-ITN-2015 "HUGS" Project.

Keywords: by-product, humins, polymers, valorization

Procedia PDF Downloads 137
901 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is an increasing need to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed into the system in particulate form and dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up over the particle surface. A subsequent step takes the charged particles to a delimited zone in the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities, thus following different paths as they move through the region where the electric field is present. The output is two material fractions which differ in their protein content: one is a fiber-rich, low-protein fraction, while the other has a high-protein, low-fiber composition. Prior to testing, materials undergo a milling process, and some samples are stored under controlled humidity conditions. In this way, the influence of both particle size and moisture content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectrostatic separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/s to kg/h. Triboelectrostatic separation technology opens up huge potential for the exploitation of so-far underutilized alternative protein sources. Agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 185
900 Topography Effects on Wind Turbines Wake Flow

Authors: H. Daaou Nedjari, O. Guerri, M. Saighi

Abstract:

A numerical study was conducted to optimize the positioning of wind turbines over complex terrains. A two-dimensional disk model was used to calculate the flow velocity deficit in wind farms for both flat and complex configurations. The wind turbine wake was assessed using a hybrid method that combines CFD (Computational Fluid Dynamics) with the actuator disc model. The wind turbine rotor was defined by a thrust force coupled with the Navier-Stokes equations, which were resolved by an open-source computational code (Code_Saturne V3.0, developed by EDF). The simulations were conducted under atmospheric boundary layer conditions, considering a two-dimensional region located in the north of Algeria at 36.74°N latitude, 02.97°E longitude. The topography elevation values were collected along a longitudinal direction of 1 km downwind. The wind turbine sited over the topography was simulated for different elevation variations. The main aim of this study is to determine the topography effect on the behavior of wind farm wake flow. For this, the wake model applied to complex terrain first needs to isolate the singularity effects of topography on the vertical wind flow without the rotor disc. This step allows determining the existence of mixing scales and friction-force zones near the ground. According to the ground relief, the wind flow was disturbed by turbulence and a significant speed variation. Thus, the singularities of the velocity field were thoroughly collected and the thrust coefficient Ct was calculated using the specific speed. In addition, to evaluate the land effect on the wake shape, the flow field was also simulated considering different rotor hub heights. The distance between the ground and the turbine hub height (Hhub) was tested in flat terrain for different configurations, Hhub=1.125D, Hhub=1.5D, and Hhub=2D (D is the rotor diameter), considering a roughness value of z0=0.01 m. This study demonstrated that topography induces a significant effect on wind turbine wakes compared to flat terrain.
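
For orientation, the relation between the thrust coefficient and the wake velocity deficit underlying the actuator-disc approach can be sketched as follows (one-dimensional momentum theory only, not the CFD setup of the study; the wind speed and Ct value are assumptions):

```python
# 1D actuator-disc momentum theory: Ct fixes the axial induction and wake deficit.
import numpy as np

def axial_induction(ct):
    """Invert Ct = 4a(1 - a) for the axial induction factor a (valid for Ct < 1)."""
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

u_inf = 8.0          # free-stream wind speed in m/s (assumed)
ct = 0.8             # thrust coefficient (assumed)

a = axial_induction(ct)
u_disc = u_inf * (1 - a)        # velocity at the rotor disc
u_wake = u_inf * (1 - 2 * a)    # far-wake velocity from momentum theory
deficit = 1 - u_wake / u_inf

print(f"a = {a:.3f}, disc speed = {u_disc:.2f} m/s, "
      f"far-wake deficit = {deficit:.1%}")
```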

Keywords: CFD, wind turbine wake, k-epsilon model, turbulence, complex topography

Procedia PDF Downloads 558
899 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline of coal prices over the past years has significantly increased the awareness of the need for effective mining operations. A viable step must be undertaken to become more cost-competitive while striving for best mining practice, especially at Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented when it is controlled by a contract rate on a unit basis of USD ($) per bcm. The method emerges from the idea of daily dozer push activity that continually shifts the overburden towards the final target design set by mine planning. Volume calculation is then performed each time overburden is removed within a determined distance, using the cut-and-fill method with a high-precision GNSS system applied to the dozer as guidance to ensure the optimum result of overburden removal. The accumulated daily to weekly dozer push volume was found to be 95 bcm, which, multiplied by the average sell rate of $0.95, gives a monthly revenue of $90.25. Furthermore, the payment mechanism is based on push distance and push grade. The push distance interval determines the rates, which vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, ranging from -25% to +25%. The payable rates for the dozer push operation follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The results indicate that dozer push measurement can be a surface mining alternative, since it enables refinement of the work method, operating cost, and productivity, apart from exposing the risk of low rented-equipment performance. In addition, a payment mechanism based on the contract rate for scheduled dozer push operations will ultimately deliver clients almost a 45% cost reduction in the form of low and consistent costs.
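
The payment arithmetic can be sketched as follows; the distance bands and grade adjustment below are illustrative assumptions, and only the 95 bcm at $0.95 = $90.25 figure comes from the abstract:

```python
# Per-bcm rate chosen by push distance band and slope grade, times claimed volume.
def push_rate(distance_m: float, grade_pct: float) -> float:
    """Assumed USD/bcm rate: picked by push-distance band, adjusted by slope grade."""
    if not -25.0 <= grade_pct <= 25.0:
        raise ValueError("push slope grade outside the -25%..+25% range")
    base = 0.90 if distance_m <= 50 else 1.60 if distance_m <= 100 else 2.69
    return round(base * (1 + max(grade_pct, 0.0) / 100), 2)  # uphill assumed dearer

def claim(volume_bcm: float, rate_usd_per_bcm: float) -> float:
    """Volume claim in USD before the JISDOR currency adjustment."""
    return round(volume_bcm * rate_usd_per_bcm, 2)

print(claim(95, 0.95))                # 90.25, the figure quoted in the abstract
print(claim(95, push_rate(120, 10)))  # an assumed long, uphill push
```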

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 309
898 Investigation into the Socio-ecological Impact of Migration of Fulani Herders in Anambra State of Nigeria Through a Climate Justice Lens

Authors: Anselm Ego Onyimonyi, Maduako Johnpaul O.

Abstract:

The study was designed to investigate the socio-ecological impact of the migration of Fulani herders in Anambra State of Nigeria through a climate justice lens. Nigeria is one of the world's most densely populated countries, with a population of over 284 million people, half of whom are considered to be in abject poverty. There is no doubt that livestock production makes sustainable contributions to food security and poverty reduction in the Nigerian economy, but not without some environmental implications, like any other economic activity. Nigeria is recognized as being vulnerable to climate change. Climate change and global warming, if left unchecked, will cause adverse effects on livelihoods in Nigeria, such as livestock production, crop production, fisheries, forestry, and post-harvest activities, because rainfall regimes and patterns will be altered, floods which devastate farmlands will occur, increases in temperature and humidity which increase pests and disease will occur, and other natural disasters like desertification, drought, floods, and ocean and storm surges, which not only damage Nigerians' livelihoods but also cause harm to life and property, will occur. These and other climatic issues, as they affect Fulani herdsmen, are what this study investigated. In carrying out this research, a survey research design was adopted and a simple sampling technique was used. One local government area (LGA) was selected purposively from each of the four agricultural zones in the state based on its predominance of Fulani herders. For appropriate sampling, 25 respondents from each of the four agricultural zones in the state were randomly selected, making up the 100 respondents sampled. Primary data were generated using a set of structured 5-point Likert scale questionnaires. Data generated were analyzed using SPSS and the results presented using descriptive statistics. From the data analyzed, the study identified unpredicted rainfall (mean = 3.56), forest fire (mean = 4.63), drying water sources (mean = 3.99), dwindling grazing (mean = 4.43), desertification (mean = 4.44), and fertile land scarcity (mean = 3.42) as major factors predisposing Fulani herders to migrate southward, while rejecting natural inclination to migrate (mean = 2.38) and migration to cause trouble as factors. On the reasons why Fulani herders are trying to establish a permanent camp in Anambra State, moderate temperature (mean = 3.60), avoiding overgrazing (mean = 4.42), search for fodder (mean = 4.81) and water (mean = 4.70), need for a market (mean = 4.28), favorable environment (mean = 3.99), and access to fertile land (mean = 3.96) were identified. It was concluded that changing climatic variables necessitated the migration of herders from Northern Nigeria to areas in the South where the variables are most favorable to the herders and their animals.

Keywords: socio-ecological, migration, fulani, climate, justice, lens

Procedia PDF Downloads 29
897 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations

Authors: Sadia H. Syeda, Zain H. Malik

Abstract:

As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of systems in a digital form becomes ever more critical. Digital Engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables developing, tracking, and sharing system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The primary objective of DE activities is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S. Department of Defense (DoD) Office of the Secretary of Defense (OSD) Digital Engineering Strategy and its implementation, as well as examples of DE implementation by industry and technical organizations, this paper provides descriptions of current DE processes and best practices for implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with an organization's business strategy. A capability maturity matrix is provided to assess an organization's DE maturity, emphasizing how all the SELC elements interlink to form a cohesive ecosystem. If implemented, DE can increase efficiency and improve the quality and outcomes of systems engineering processes.

Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle

Procedia PDF Downloads 87
896 Investigate the Side Effects of Patients With Severe COVID-19 and Choose the Appropriate Medication Regimens to Deal With Them

Authors: Rasha Ahmadi

Abstract:

In December 2019, a coronavirus, now identified as SARS-CoV-2, produced a series of acute atypical respiratory illnesses in Wuhan, Hubei Province, China. The sickness induced by this virus was named COVID-19. The virus is transmissible between humans and has caused a pandemic worldwide. The death toll continues to climb, and a large number of countries have been obliged to impose social isolation and lockdowns. The lack of a focused therapy continues to be a problem. Epidemiological research showed that senior patients were more susceptible to severe disease, whereas children tend to have milder symptoms. In this study, we focus on other possible side effects of COVID-19 and more detailed treatment strategies. Using bioinformatics analysis, we first isolated the gene expression profile of patients with severe COVID-19 from the GEO database; patients' blood samples were used in the GSE183071 dataset. We then categorized the genes into those with high and those with low expression. In the next step, we uploaded the genes separately to the Enrichr database and evaluated our data for signs and symptoms as well as related medication regimens. The results showed that 138 genes with high expression and 108 genes with low expression were differentially observed in the severe COVID-19 vs. control comparison. Symptoms and diseases such as embolism and thrombosis of the abdominal aorta, ankylosing spondylitis, suicidal ideation or attempt, and regional enteritis were associated with the highly expressed genes, while acute and subacute forms of ischemic heart disease, CNS infection and poliomyelitis, and synovitis and tenosynovitis were associated with the genes with low expression. Following the detection of diseases and possible signs and symptoms, Carmustine, Bithionol, and Leflunomide were evaluated as more significant for the high-expression genes, and Chlorambucil, Ifosfamide, Hydroxyurea, and Bisphenol for the low-expression genes. In general, examining the different and less visible aspects of COVID-19 and identifying possible treatments can help us significantly in the emergency care and hospitalization of patients.

Keywords: phenotypes, drug regimens, gene expression profiles, bioinformatics analysis, severe COVID-19

Procedia PDF Downloads 131
895 Sustainability of the Built Environment of Ranchi District

Authors: Vaidehi Raipat

Abstract:

A city is an expression of coexistence between its users and its built environment. The way in which its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently to its people, culture, environment, visual image, history, visions, and identity. The quality of the built environment determines the quality of our lifestyles, but the poor ability of the built environment to adapt and sustain itself through changes leads to the degradation of cities. Ranchi was created in November 2000 as the capital of the newly formed state of Jharkhand, located on the eastern side of India. Before this, Ranchi was known as the summer capital of Bihar and was only a little larger than a town in terms of development. Since then, it has been vigorously expanding in size, infrastructure, and population. This sudden expansion has created stress on the existing built environment. The large forest cover, agricultural land, diverse culture, and pleasant climatic conditions have degraded and decreased to a large extent. Narrow roads and old buildings are unable to bear the load of the changing requirements, fast-improving technology, and growing population. The built environment has hence been rendered unsustainable and unadaptable by the rapid changes of the present era. Some of the common hazards that can easily be spotted in the built environment are half-finished built forms; pedestrians and vehicles moving on the same part of the road; unpaved areas on street edges; over-sized, bright, and randomly placed hoardings; and negligible trees or green spaces. The old buildings have been poorly maintained and the new ones are being constructed over them. Roads are too narrow to cater to the increasing traffic, both pedestrian and vehicular. The streets host a large variety of activities, but haphazardly. Trees are being cut down for road widening and new constructions. There is no space for greenery in the commercial as well as the old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations. A pseudo-understanding of functionality as well as aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the existing built environment of the city and to create or regenerate the existing built environment into a more sustainable and adaptable one. For this purpose, the research titled “Sustainability of the Built Environment of Ranchi District” has been carried out. In this research, the condition of the built environment of Ranchi is explored so as to identify the problems and shortcomings existing in the city and provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, which includes its outdoor spaces like streets, parks, and other open areas, its built forms, as well as its users, has been analyzed in terms of various urban design parameters. Based on this analysis, strategies have been suggested to make the city environmentally, socially, culturally, and economically sustainable.

Keywords: adaptable, built-environment, sustainability, urban

Procedia PDF Downloads 233
894 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting

Authors: Andres F. Ramirez, Carlos F. Valencia

Abstract:

The increasing interest in the application of renewable energy strategies and the path towards diminishing the use of carbon-related energy sources have encouraged the development of novel strategies for the integration of solar energy into the electricity network. A correct inclusion of the fluctuating energy output of a photovoltaic (PV) energy system into an electric grid requires improvements in the forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but also of the associated underlying stochastic process. We present a methodology for the synthetic generation of bivariate solar irradiance (shortwave flux) and air temperature time series based on copula functions to represent the cross-dependence and temporal structure of the data. We explore the advantages of using this nonlinear time series method over traditional approaches that use a transformation of the data to normal distributions as an intermediate step. The use of copulas gives flexibility to represent the serial variability of the real data in the simulation and allows more control over the desired properties of the data. We use discrete zero-mass density distributions to assess the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate time series. We found that the copula autoregressive methodology used, including the zero-mass characteristics of the solar irradiance time series, generates a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy, the underlying stochastic process, and to quantify the potential of integrating a photovoltaic (PV) energy generating system into a country's electricity network. Experimental analysis and an application to real data substantiate the usefulness and convenience of the proposed methodology for forecasting solar irradiance time series and solar energy across northern hemisphere, southern hemisphere, and equatorial zones.
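
A minimal Gaussian-copula sketch of the idea is given below; the paper's exact copula family, marginals, zero-mass handling, and vector GLMs are not reproduced, and all parameters are assumptions:

```python
# Simulate a cross-dependent, serially correlated irradiance/temperature pair by
# driving a latent bivariate AR(1) Gaussian process through a copula step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, phi, rho = 1000, 0.8, 0.6      # length, AR(1) persistence, cross-dependence

# Latent bivariate Gaussian process: AR(1) in time, correlated across variables.
z = np.zeros((n, 2))
chol = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
for t in range(1, n):
    innov = chol @ rng.standard_normal(2) * np.sqrt(1 - phi ** 2)
    z[t] = phi * z[t - 1] + innov

# Copula step: map latent normals to uniforms, then to the assumed marginals.
u = stats.norm.cdf(z)
irradiance = stats.gamma(a=2.0, scale=150.0).ppf(u[:, 0])   # W/m2, assumed marginal
temperature = stats.norm(loc=22.0, scale=5.0).ppf(u[:, 1])  # deg C, assumed marginal

print(np.corrcoef(irradiance, temperature)[0, 1])  # induced cross-dependence
```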

Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation

Procedia PDF Downloads 312
893 Differentially Expressed Protein Biomarkers in Early and Advanced Stage Young Triple-Negative Breast Cancer Patients

Authors: Shamim Mushtaq, Moazzam Shahid

Abstract:

Breast cancer (BC) claims the lives of half a million women every year and is the most common cause of cancer death among women in the developing world. In 2019, it was estimated that BC alone accounted for 15% of all cancer deaths in younger women (aged < 45 years) with advanced-stage lung metastasis. According to the World Health Organization and the International Union Against Cancer, a high number of cancer-related deaths will be observed in Asia in 2020, whereas the burden will be reduced in Western countries due to awareness of the disease, better health facilities, and advanced treatments. It has been reported that the incidence of BC increased by 1.1% among the Asian population compared to the US population from 2003 to 2012. To date, several BC biological subtypes have been reported, which are associated with different treatment responses. The heterogeneity and diversity of BC are reflected in these different subtypes, including Luminal A (23.7% prevalence) and B (38.8% prevalence), which have pathological estrogen receptor-positive (ER+) tumors, human epidermal growth factor receptor 2 (HER2) (11.2% prevalence), and triple-negative breast cancer (TNBC) (25% prevalence). According to Shaukat Khanum Memorial Cancer Hospital and Research Centre, Pakistan, ten years of data showed that among 636 BC patients, 30.5% were TNBC patients aged < 40 years, which is an extremely alarming situation. Therefore, there is a dire need to explore and develop therapeutic targets for the treatment of early TNBC. Over the last decade, unfortunately, there has been little success in understanding the complexity of TNBC and in discovering new biological therapeutic targets; conventional chemotherapy remains the only choice of treatment for TNBC patients. Many investigators have reported advances in multi-omics (multiple "omes", e.g., genome, proteome, transcriptome, epigenome, and microbiome) that later identified actionable targets with increased prevalence in TNBC patients, and various drugs related to particular diagnostic and prognostic biomarkers have been identified so far, for example against the epidermal growth factor receptor (EGFR or ErbB-1), HER-2/neu (ErbB-2), HER-3 (ErbB-3), and HER-4 (ErbB-4). Transgelin-2 (TAGLN2) and Profilin-1 (Pfn-1) are ubiquitously expressed proteins of a large family present in all eukaryotes, enabling actin cytoskeletal reorganization. It is known that the oncogenic transformation of cells is accompanied by alterations in the actin cytoskeleton, and there are causal connections between the altered expression of actin cytoskeletal regulators and cancer progression. Our case-control study identified TAGLN-2 and Pfn-1 proteins in TNBC blood by mass spectrometry. Both TAGLN-2 and Pfn-1 are differentially expressed in early- and advanced-stage TNBC patients and could be potential predictors or therapeutic targets for TNBC.

Keywords: TNBC, blood biomarkers, mass spectrometry, qPCR, ELISA

Procedia PDF Downloads 40
892 Tropical Squall Lines in Brazil: A Methodology for Identification and Analysis Based on ISCCP Tracking Database

Authors: W. A. Gonçalves, E. P. Souza, C. R. Alcântara

Abstract:

The ISCCP-Tracking database offers an opportunity to study the physical and morphological characteristics of convective systems based on geostationary meteorological satellites. The database contains 26 years of tracking of convective systems for the entire globe, so the tropical squall lines that occur in Brazil are certainly contained within it. In this study, we propose a methodology for identifying these systems in the ISCCP-Tracking database and present a physical and morphological characterization of them. The proposed methodology was first developed for the year 2007: squall lines were subjectively identified by visual analysis of infrared images from GOES-12, and the same systems were then located within the ISCCP-Tracking database. It is known, and was also observed here, that the squall lines occurring on the north coast of Brazil develop parallel to the coast, influenced by the sea breeze. In addition, the eccentricity of the identified systems was greater than 0.7. A methodology based on the inclination (relative to the coast) and the eccentricity (greater than 0.7) of the convective systems was therefore applied to identify and characterize tropical squall lines in Brazil, with these thresholds applied back to the ISCCP-Tracking database for 2007. Because other systems that were not squall lines were also captured, we decided to call all systems identified by the inclination and eccentricity thresholds Linear Convective Systems rather than squall lines, as sketched below. After this step, the Linear Convective Systems were identified and characterized for the entire database, from 1983 to 2008, and their physical and morphological characteristics were compared with those of the systems that did not meet the required inclination and eccentricity. The results showed that the convection associated with the Linear Convective Systems appears to be more intense and organized than in the other systems; this statement is supported by all ISCCP-Tracking variables analyzed. This type of methodology, which examines 26 years of satellite data through an objective analysis, had not previously been explored in the literature. The physical and morphological characterization of Linear Convective Systems based on 26 years of data is of great importance and should be useful in many branches of the atmospheric sciences.
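
As a rough illustration of the thresholding step described above, the sketch below filters tracked systems by eccentricity (> 0.7, as stated in the abstract) and by orientation relative to the coast; the field names, the assumed coastline orientation, and the angular tolerance are hypothetical, since the study does not specify them.

```python
# Illustrative filter for "Linear Convective Systems" from a tracking table.
# Only the eccentricity threshold (> 0.7) comes from the abstract; the record
# fields, coast orientation, and tolerance below are assumptions.
COAST_ORIENTATION_DEG = 120.0   # assumed mean orientation of the north coast
TOLERANCE_DEG = 20.0            # assumed allowed deviation from the coast
ECCENTRICITY_MIN = 0.7          # threshold stated in the abstract

def is_linear_convective_system(system):
    """Return True if a tracked convective system passes both thresholds."""
    if system["eccentricity"] <= ECCENTRICITY_MIN:
        return False
    # Orientations are axial (0-180 deg), so compare the smallest angular gap.
    diff = abs(system["orientation_deg"] - COAST_ORIENTATION_DEG) % 180.0
    diff = min(diff, 180.0 - diff)
    return diff <= TOLERANCE_DEG

# Toy tracked systems standing in for ISCCP-Tracking records
systems = [
    {"id": 1, "eccentricity": 0.85, "orientation_deg": 115.0},  # elongated, coast-parallel
    {"id": 2, "eccentricity": 0.55, "orientation_deg": 118.0},  # too round
    {"id": 3, "eccentricity": 0.90, "orientation_deg": 40.0},   # wrong orientation
]
linear = [s["id"] for s in systems if is_linear_convective_system(s)]
print("Linear Convective Systems:", linear)   # -> [1]
```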

Keywords: squall lines, convective systems, linear convective systems, ISCCP-Tracking

Procedia PDF Downloads 296
891 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated for certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition in which the heart muscle is unable to pump adequate blood to the body's organs; it includes left- and right-sided heart failure as well as systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: 212 global individual case safety reports (ICSRs) were retrieved for the drug/adverse drug reaction combination as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), where a value of 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half support an association (four probable and 15 possible cases). Data mining: The disproportionality between the observed and expected reporting rates for the drug/adverse drug reaction pair was estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio. A positive IC reflects a stronger statistical association, while negative values indicate a weaker one, with the null value equal to zero. The result (IC = 1.5) reveals a positive statistical association for the drug/ADR combination, meaning that "ibrutinib" with "cardiac failure" has been reported more often than expected compared with other medications in the WHO database. Conclusion: Health regulators and healthcare professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring of any signs or symptoms in treated patients is essential. The weighted cumulative evidence from the causality assessment of the reported cases and the data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
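
For context, the information component is commonly published in the shrinkage form IC = log2((Nobserved + 0.5) / (Nexpected + 0.5)), with Nexpected derived from the marginal report counts. The sketch below illustrates that calculation with invented counts; these are not the SFDA or VigiBase figures.

```python
# Simplified sketch of the information component (IC) disproportionality
# measure, using the commonly published shrinkage formulation. All counts
# below are invented for illustration only.
import math

def information_component(n_observed, n_drug, n_reaction, n_total):
    """IC for a drug/ADR pair from overall reporting counts."""
    n_expected = n_drug * n_reaction / n_total
    return math.log2((n_observed + 0.5) / (n_expected + 0.5))

# Hypothetical counts: reports of the pair, of the drug, of the reaction, overall
ic = information_component(n_observed=212, n_drug=30_000,
                           n_reaction=250_000, n_total=100_000_000)
print(f"IC = {ic:.2f}")   # positive values mean reported more often than expected
```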

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 121
890 Exposure to Ionizing Radiation Resulting from the Chernobyl Fallout and Childhood Cardiac Arrhythmia: A Population Based Study

Authors: Geraldine Landon, Enora Clero, Jean-Rene Jourdain

Abstract:

In 2005, the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) launched a research program named EPICE (an acronym for 'Evaluation of Pathologies potentially Induced by CaEsium') to collect scientific information on non-cancer effects possibly induced by chronic exposure to low doses of ionizing radiation, in order to address a question raised by several French NGOs concerning the health consequences of the Chernobyl nuclear accident in children. The implementation of the program was preceded by a pilot phase to ensure that the project would be feasible and to determine the conditions for implementing an epidemiological study in a population of several thousand children. The EPICE program, focused on childhood cardiac arrhythmias, started in May 2009 and ran for four years in partnership with the Russian Bryansk Diagnostic Center. The purpose of this cross-sectional study was to determine the prevalence of cardiac arrhythmias in the Bryansk oblast (as a function of the contamination of the territory and the caesium-137 whole-body burden) and to assess whether caesium-137 was a factor associated with the onset of cardiac arrhythmias. To address these questions, a study bringing together 18,152 children aged 2 to 18 years was initiated; each child received three medical examinations (ECG, echocardiography, and caesium-137 whole-body activity measurement), and some also underwent 24-hour Holter monitoring and blood tests. The findings of the study, currently submitted to an international journal (which is why no results can be given at this stage), allow us to answer clearly the question of radiation-induced childhood arrhythmia, a subject that has been debated for many years. Our results will certainly be helpful for health professionals responsible for monitoring populations exposed to the releases from the Fukushima Dai-ichi nuclear power plant and also useful for future comparative studies of children exposed to ionizing radiation in other contexts, such as cancer radiation therapy.

Keywords: Caesium-137, cardiac arrhythmia, Chernobyl, children

Procedia PDF Downloads 240
889 Constructing Practices for Lifestyle Journalism Education

Authors: Lucia Vodanovic, Bryan Pirolli

Abstract:

The London College of Communication is one of the few universities in the world to offer a master's degree in lifestyle journalism. A hybrid originally constructed largely from a generic journalism program crossed with numerous cultural studies approaches, the degree has developed into a leading lifestyle journalism program attracting students worldwide. This research project seeks to present a framework for structuring the degree and to understand how students in this emerging field of study value the program. While some researchers have addressed questions about journalism and higher education, none have looked specifically at the increasingly important genre of lifestyle journalism, which Folker Hanusch defines as including notions of consumerism and critique, among other identifying traits. Lifestyle journalism, itself poorly researched by scholars, can cover topics including travel, fitness, and entertainment, and a lifestyle journalism degree should arguably prepare students to engage with these topics. This research uses the existing Master of Arts in Lifestyle Journalism at the London College of Communication as a case study to examine the school's approach. Furthering Hanusch's original definition, this master's program attempts to characterize lifestyle journalism by a specific voice or approach, as reflected in the diversity of students' final projects. This framework echoes the ethos and ideas of the university, which focuses on creativity, design, and experimentation. By analyzing the current degree as well as student feedback, this research aims to assist future educators in pursuing the often-neglected field of lifestyle journalism. Through an account of the unique mix of practical coursework, theoretical lessons, and broad scope of student work presented in this degree program, the researchers strive to develop a framework for lifestyle journalism education, referring to Mark Deuze's ten questions for journalism education development. While Hanusch began the discussion to legitimize the study of lifestyle journalism, this project goes one step further and opens a discussion about the teaching of lifestyle journalism at the university level.

Keywords: education, journalism, lifestyle, university

Procedia PDF Downloads 295
888 Multimodal Rhetoric in the Wildlife Documentary, “My Octopus Teacher”

Authors: Visvaganthie Moodley

Abstract:

While rhetoric goes back as far as Aristotle, who framed it as the "art of persuasion", most scholars have focused on the elocutio and dispositio canons, neglecting the rhetorical impact of multimodal texts such as documentaries. Film documentaries are increasingly rhetorical and are often used by wildlife conservationists to influence people to become more mindful of humanity's connection with nature. This paper examines the award-winning documentary "My Octopus Teacher", which depicts naturalist Craig Foster's unique discovery of, and relationship with, a female octopus at the southern tip of Africa, the Cape of Storms in South Africa. The analysis is anchored in Leech and Short's (2007) framework of linguistic and stylistic categories, comprising lexical items, grammatical features, figures of speech and other rhetorical features, and cohesiveness, with particular foci on diction, anthropomorphic language, metaphor and symbolism. It also draws on Kress and van Leeuwen's (2006) multimodal analysis to show how verbal cues (the narrator's commentary), visual images in motion, visual images as metaphors and symbols, and aural sensory images such as music and sound synergise for rhetorical effect. In addition, the analysis of "My Octopus Teacher" is guided by Nichols' (2010) narrative theory; by the features of a documentary, which foreground the credibility of the narrative as a text representing real events with real people; and by its modes of construction, viz. the poetic, expository, observational and participatory modes and their integration, which forge documentaries as multimodal texts. This paper presents a multimodal rhetorical discussion of the sequence of salient episodes captured in the slow-moving, one-and-a-half-hour documentary. These are: (i) the prologue: on the brink of something extraordinary; (ii) the day it all started; (iii) the narrator's turmoil: getting back into the ocean; (iv) the incredible encounter with the octopus; (v) establishing a relationship; (vi) outwitting the predatory pyjama shark; (vii) the cycle of life; and (viii) the conclusion: lessons from an octopus. The paper argues that wildlife documentaries, which are characterized by plausibility and provide researchers with a lens to examine ideologies about animals and humans, offer an assimilation of the various senses (vocal, visual and aural) for engaging viewers in a stylized, compelling way; they have the ability to persuade people to think and act in particular ways. As multimodal texts, with their use of lexical items; diction; anthropomorphic language; linguistic, visual and aural metaphors and symbolism; and depictions of anthropocentrism, wildlife documentaries are powerful resources for promoting wildlife conservation and conscientizing people about the need to establish a harmonious relationship with nature and humans alike.

Keywords: documentaries, multimodality, rhetoric, style, wildlife, conservation

Procedia PDF Downloads 87
887 ICAM1 Expression is Enhanced by TNFa through Histone Methylation in Human Brain Microvessel Cells

Authors: Ji-Young Choi, Jungjin Kim, Sang-Sun Yun, Sangmee Ahn Jo

Abstract:

Intercellular adhesion molecule 1 (ICAM1) is a mediator of inflammation involved in the adhesion and transmigration of leukocytes to endothelial cells, resulting in enhanced brain inflammation. We hypothesized that an increase in ICAM1 expression in endothelial cells is an early step in the pathogenesis of brain diseases such as Alzheimer's disease. Here, we report that ICAM1 expression is regulated by the pro-inflammatory cytokine TNFa in human brain microvascular endothelial cells (HBMVEC). TNFa significantly increased ICAM1 mRNA and protein levels at concentrations showing no cell toxicity. This increase was also shown in microvessels of mouse brain 24 hours after treatment with TNFa (8 mg/kg, i.v.). We then investigated the epigenetic mechanism involved in the induction of ICAM1 expression. Chromatin immunoprecipitation assays revealed that TNFa reduced dimethylation of histone 3 lysine 9 (H3K9-2me) and trimethylation of histone 3 lysine 27 (H3K27-3me), modifications well known to mark gene suppression, within the ICAM1 promoter region. However, acetylation of H3K9 and H3K14, modifications well known to mark gene activation, was not changed by TNFa. Treatment with BIX01294, a specific inhibitor of the histone methyltransferase G9a responsible for H3K9-2me, dramatically increased ICAM1 mRNA and protein levels, whereas overexpression of the G9a gene suppressed TNFa-induced ICAM1 expression. In contrast, GSK126, an inhibitor of the histone methyltransferase EZH2 responsible for H3K27-3me, and valproic acid, an inhibitor of histone deacetylases (HDAC), did not affect ICAM1 expression. These results suggest that histone 3 methylation is involved in ICAM1 repression. Moreover, TNFa- or BIX01294-induced ICAM1 induction enhanced both the adhesion and the transmigration of leukocytes on endothelial cells. This study demonstrates that TNFa upregulates ICAM1 expression through reduced H3K9-2me and H3K27-3me within the ICAM1 promoter region, in which G9a is likely to play a pivotal role in ICAM1 transcription. Our study provides a novel mechanism for the regulation of ICAM1 transcription in HBMVEC.

Keywords: ICAM1, TNFa, HBMVEC, H3K9-2me

Procedia PDF Downloads 326
886 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems, and plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With growing datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force-plate scanner. During data preprocessing, because of differences in walking time and foot size, the samples were normalized for time and foot size. Selected force-plate variables were used as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we trained a support vector machine (SVM) on the dataset for each foot disorder (yes/no classification). We then compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach achieved higher accuracy, enabling applications in foot disorder diagnosis: the detection accuracy was 71% for the deep learning algorithm and 78% for the SVM. Moreover, models trained on the peak plantar pressure distribution were more accurate than those trained on the center-of-pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
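
For readers who want to reproduce the comparison in outline, the following is a minimal sketch using scikit-learn as a stand-in for the study's DNN and SVM; the plantar-pressure features and labels here are synthetic, and the network architecture and kernel choice are assumptions, not the study's actual configuration.

```python
# Minimal sketch comparing a small neural network and an SVM for a binary
# foot-disorder label from plantar-pressure features. Data and model settings
# are illustrative only, not the clinic dataset or the study's models.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2323                                   # same size as the study cohort
X = rng.normal(size=(n, 12))               # e.g. peak pressures over 12 foot regions
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 1.0, n) > 0).astype(int)  # toy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "DNN (MLP)": make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(64, 32),
                                             max_iter=500, random_state=0)),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                  # train each classifier
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.2f}")
```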

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 336