Search results for: market comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8291

431 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye Newspaper, Radio, Television (TGRT) Channel

Authors: Yelda Yenel, Orkut Acele

Abstract:

In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation programs, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams. Understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan.
The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel’s programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame and reflecting the prevailing cultural hegemony in society.

Keywords: celebrity culture, media, neoliberalism, TGRT

Procedia PDF Downloads 15
430 Prolactin and Its Abnormalities: Implications for the Male Reproductive Tract and Male Factor Infertility

Authors: Rizvi Hasan

Abstract:

Male factor infertility due to abnormalities in prolactin levels is encountered in a significant proportion of cases. This was a case-control study carried out to determine the effects of prolactin abnormalities in males with infertility, recruiting 297 infertile male patients with informed written consent. All underwent a Basic Seminal Fluid Analysis (BSA) and endocrine profiling of FSH, LH, testosterone, and prolactin (PRL) using the random access chemiluminescent immunoassay method (normal PRL range 2.5-17 ng/ml). Age-, weight-, and height-matched voluntary controls were recruited for comparison. None of the cases had anatomical, medical, or surgical disorders related to infertility. Among the controls: mean age 33.2 ± 5.2 yrs, BMI 21.04 ± 1.39 kg/m², sperm count 34×10⁶, number of children fathered 2 ± 1, PRL 6.78 ± 2.92 ng/ml. Of the 297 patients, 28 were hyperprolactinaemic while one was hypoprolactinaemic. All the hyperprolactinaemic patients had oligoasthenospermia, abnormal morphology, and decreased viability. The serum testosterone levels were markedly lowered in 26 (92.86%) of the hyperprolactinaemic subjects. In the other two hyperprolactinaemic subjects and the single hypoprolactinaemic subject, the serum testosterone levels were normal. FSH and LH were normal in all patients. The 29 male patients with abnormalities in their serum PRL profiles were followed up for 12 months. The 28 patients suffering from hyperprolactinaemia were treated with oral bromocriptine at a dose of 2.5 mg twice daily. The hypoprolactinaemic patient defaulted treatment. From the follow-up, it was evident that 19 (67.86%) of the treated patients responded after 3 months of therapy, while 4 (14.29%) showed improvement after approximately 6 months of bromocriptine therapy. One patient responded after 1 year of therapy, while 2 patients showed improvements, although not up to normal levels, within the same period. Response to treatment was assessed by improvement in BSA parameters.
Prolactin abnormalities affect the male reproductive system and semen parameters, necessitating further studies to ascertain the exact role of prolactin in the male reproductive tract. A parallel study was carried out incorporating 200 male white rats that were grouped and subjected to variations in their serum PRL levels. At the end of 100 days of treatment, these rats were subjected to morphological studies of their male reproductive tracts. Varying morphological changes, depending on the level of PRL change induced, were evident. Notable changes were arrest of spermatogenesis at the spermatid stage, reduced testicular cellularity, and a reduction in the microvilli of the pseudostratified epithelial lining of the epididymis, while measurement of the tubular diameter showed a 30% reduction compared to normal tissue. There were no changes in the vas deferens, seminal vesicles, or the prostate. It is evident that both hyperprolactinaemia and hypoprolactinaemia have a direct effect on the morphology and function of the male reproductive tract. The morphological studies carried out on the groups of rats subjected to variations in their PRL levels could be the basis for understanding infertility in male human beings.

Keywords: male factor infertility, morphological studies, prolactin, seminal fluid analysis

Procedia PDF Downloads 341
429 Urban Park Characteristics Defining Avian Community Structure

Authors: Deepti Kumari, Upamanyu Hore

Abstract:

Cities are an example of a human-modified environment with few fragments of urban green space, which are widely considered important for urban biodiversity. The study aims to address avifaunal diversity in urban parks in relation to park size and urbanization intensity, and to understand the key factors affecting species composition and structure, as birds are a good indicator of a healthy ecosystem and are sensitive to changes in the environment. A 50 m line-transect method was used to survey birds in 39 urban parks in Delhi, India. Habitat variables, including vegetation (percentage of non-native trees, percentage of native trees, top canopy cover, sub-canopy cover, diameter at breast height, ground vegetation cover, shrub height), were measured using the quadrat method along the transect, and disturbance variables (distance from water, distance from road, distance from settlement, park area, visitor rate, and urbanization intensity) were measured using ArcGIS and Google Earth. We analyzed the species data for diversity and richness and explored the relation of species diversity and richness to habitat variables using the multi-model inference approach. Diversity and richness differed significantly with park size and urbanization intensity: medium-sized parks supported more diversity, whereas large parks had greater richness. However, diversity and richness both declined with increasing urbanization intensity. The results of canonical correspondence analysis (CCA) revealed that species composition in urban parks was positively associated with tree diameter at breast height and distance from the settlement. In the model selection approach, disturbance variables, especially distance from road, urbanization intensity, and visitors, were the best predictors of bird species richness in urban parks.
In comparison, multiple regression analysis between habitat variables and bird diversity suggested that native tree species in the park may explain the diversity pattern of birds in urban parks. Feeding guilds such as insectivores, omnivores, carnivores, granivores, and frugivores showed a significant relation with vegetation variables, while carnivorous and scavenging bird species mainly responded to disturbance variables. The study highlights the importance of park size in urban areas and of urbanization intensity. It also indicates that distance from the settlement, distance from the road, urbanization intensity, visitors, diameter at breast height, and native tree species can be important determinants of bird richness and diversity in urban parks. The study also concludes that the response of feeding guilds to vegetation and disturbance in urban parks varies. Therefore, we recommend that park size and the surrounding urban matrix be considered in design and planning in order to increase bird diversity and richness in urban areas.
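The model-selection step described above can be sketched as follows. This is a minimal illustration of AIC-based ranking of candidate predictor sets, not the authors' actual workflow; the predictor names and the synthetic data are hypothetical.

```python
import itertools
import numpy as np

def ols_aic(X, y):
    """AIC for an ordinary least-squares fit (Gaussian likelihood)."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    k = X1.shape[1] + 1                        # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
# Hypothetical disturbance predictors for 40 parks.
preds = {"dist_road": rng.normal(size=40),
         "urban_intensity": rng.normal(size=40),
         "visitors": rng.normal(size=40)}
# Synthetic richness driven mostly by distance from road.
richness = 10 + 3 * preds["dist_road"] + rng.normal(scale=0.5, size=40)

# Score every non-empty subset of predictors; keep the lowest-AIC model.
best = min(
    (frozenset(s) for r in range(1, 4)
     for s in itertools.combinations(preds, r)),
    key=lambda s: ols_aic(np.column_stack([preds[p] for p in s]), richness),
)
```

With the synthetic data above, the selected subset contains the predictor that actually drives richness, which is the essence of the multi-model inference step.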

Keywords: diversity, feeding guild, urban park, urbanization intensity

Procedia PDF Downloads 111
428 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data. But for tabular data, it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? All current solutions consist of repositioning the variables in a 2D matrix using their correlation proximity. In doing so, one obtains an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Materials and methods: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data set (omics markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
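The image-construction idea, a probability-like criterion evaluated over a 2D hyperparameter grid and mapped to grey levels, can be sketched as follows. `nic_image` and the stand-in criterion are hypothetical, since computing an actual NIC depends on the ROP/RFPT machinery described above.

```python
import numpy as np

def nic_image(prob_fn, grid_size=64):
    """Build a grayscale 'NIC image' for one variable by evaluating a
    probability-like criterion over a 2D grid of two hyperparameters,
    then mapping probabilities in [0, 1] to 8-bit grey levels."""
    p1 = np.linspace(0.0, 1.0, grid_size)
    p2 = np.linspace(0.0, 1.0, grid_size)
    probs = np.array([[prob_fn(a, b) for b in p2] for a in p1])
    return np.round(probs * 255).astype(np.uint8)

# Hypothetical criterion standing in for a real NIC.
img = nic_image(lambda a, b: (a + b) / 2.0, grid_size=8)
```

Per the abstract, ten such criteria plus their AND/OR/XOR combinations would be tiled into one large per-variable image, which a basic CNN then classifies.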

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 118
427 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell, and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations.
The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model provides confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
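The ring-by-ring validation check against the 5 mm clinical bound can be sketched as follows; the matching of rings between the two images and all coordinates are hypothetical, for illustration only.

```python
import numpy as np

def ring_deviation(exp_pts, sim_pts):
    """Per-ring Euclidean distance between matched experimental and
    simulated ring centres (one row per ring, columns = coordinates in mm).
    Returns the mean and maximum deviation."""
    d = np.linalg.norm(np.asarray(exp_pts) - np.asarray(sim_pts), axis=1)
    return float(d.mean()), float(d.max())

# Hypothetical ring centres (mm) on a frontal-plane projection.
exp = [[0.0, 0.0], [10.0, 1.0], [20.0, 2.5]]
sim = [[0.5, 0.2], [10.4, 1.8], [19.0, 4.0]]

mean_d, max_d = ring_deviation(exp, sim)
within_bound = max_d <= 5.0   # the 5 mm clinical acceptance limit
```

The same comparison would be repeated on the sagittal-plane projection, and the model is accepted when the maximum deviation stays within the bound.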

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 188
426 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by lipid and fibrotic element deposition in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate, and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1, which promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after Acute Myocardial Infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov Identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, and ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical - inflammatory, intermediate - phagocytic, and nonclassical - anti-inflammatory) were identified, quantified, and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5, and CX3CR1) was also evaluated in the mononuclear cells.
Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p<0.0001, Friedman test), without differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences related to CCR2 (p<0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p=0.003, p<0.0001, and p=0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p<0.0001, p=0.009, and p=0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the recommended AMI treatments. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 114
425 An Initial Assessment of the Potential Contribution of 'Community Empowerment' to Mitigating the Drivers of Deforestation and Forest Degradation in Giam Siak Kecil-Bukit Batu Biosphere Reserve

Authors: Arzyana Sunkar, Yanto Santosa, Siti Badriyah Rushayati

Abstract:

Indonesia has experienced annual forest fires that have rapidly destroyed and degraded its forests. Fires in the peat swamp forests of Riau Province have set the stage for problems to worsen, as this is the ecosystem most prone to fires, which are also the most difficult to extinguish. Despite various efforts to curb deforestation and forest degradation processes, severe forest fires are still occurring. To find an effective solution, the basic causes of the problems must be identified. It is therefore critical to have an in-depth understanding of the underlying causal factors that have contributed to deforestation and forest degradation as a whole, in order to attain reductions in their rates. An assessment of the drivers of deforestation and forest degradation was carried out in order to design and implement measures that could slow these destructive processes. Research was conducted in Giam Siak Kecil-Bukit Batu Biosphere Reserve (GSKBB BR), in the Riau Province of Sumatera, Indonesia. A biosphere reserve was selected as the study site because such reserves aim to reconcile conservation with sustainable development. A biosphere reserve should promote a range of local human activities, together with development values that are in line spatially and economically with the area's conservation values, through the use of a zoning system. Moreover, GSKBB BR is an area with vast peatlands and experiences forest fires annually. Various factors were analysed to assess the drivers of deforestation and forest degradation in GSKBB BR; data were collected from focus group discussions with stakeholders, key informant interviews with key stakeholders, field observation, and a literature review. Landsat satellite imagery was used to map forest-cover changes for various periods.
Analysis of Landsat images taken during the period 2010-2014 revealed that, within the non-protected area of the core zone, there was a trend towards decreasing peat swamp forest area, increasing land clearance, and increasing areas of community oil-palm and rubber plantations. Fire was used for land clearing, and most of the forest fires occurred in the most populous area (the transition area). The study found a relationship between the deforested/degraded areas and certain distance variables, i.e., distance from roads, villages, and the borders between the core area and the buffer zone. The further the distance from the core area of the reserve, the higher the degree of deforestation and forest degradation. Research findings suggested that agricultural expansion may be the direct cause of deforestation and forest degradation in the reserve, whereas socio-economic factors were the underlying drivers of forest cover change; such factors consist of a combination of socio-cultural, infrastructural, technological, institutional (policy and governance), demographic (population pressure), and economic (market demand) considerations. These findings indicated that local factors/problems were the critical causes of deforestation and degradation in GSKBB BR. This research therefore concluded that reductions in deforestation and forest degradation in GSKBB BR could be achieved through 'local actor'-tailored approaches such as community empowerment.

Keywords: actor-led solution, community empowerment, drivers of deforestation and forest degradation, Giam Siak Kecil-Bukit Batu Biosphere Reserve

Procedia PDF Downloads 344
424 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of Transcranial Magnetic Stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with Multiple Sclerosis (MS). Method: Clinical crossover study in which six adult individuals diagnosed with MS and spasticity in the lower limbs were randomized to receive one session each of high-frequency (≥5 Hz) and low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between the sessions. Spasticity was assessed with the Ashworth scale, and the latency time (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscle were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05. Results: All statistical analyses were performed using SPSS Statistics version 26, with the significance level established at p<0.05; normality was checked with the Shapiro-Wilk test. Parametric data were represented as mean and standard deviation, non-parametric variables as median and interquartile range, and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed using the Ashworth scale for the 1 Hz (p=0.813) and 5 Hz (p=0.232) protocols for both limbs. Motor evoked potential latency time: in the 5 Hz protocol, there was no significant change for the contralateral side from pre to post-treatment (p>0.05), while for the ipsilateral side there was a decrease in latency time of 0.07 seconds (p<0.05); in the 1 Hz protocol, there was an increase of 0.04 seconds in the latency time (p<0.05) for the side contralateral to the stimulus, and for the ipsilateral side there was a decrease in latency time of 0.04 seconds (p<0.05), with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups.
Central motor conduction time: in the 1 Hz protocol, there was no change for either the contralateral side (p>0.05) or the ipsilateral side (p>0.05). In the 5 Hz protocol, there was a small decrease in conduction time for the contralateral side (p<0.05) and a decrease of 0.6 seconds for the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but it was observed that, when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 83
423 Gender and Asylum: A Critical Reassessment of the Case Law of the European Court of Human Rights and of United States Courts Concerning Gender-Based Asylum Claims

Authors: Athanasia Petropoulou

Abstract:

While there is a common understanding that a person’s sex, gender, gender identity, and sexual orientation shape every stage of the migration experience, theories of international migration had until recently not focused on exploring and incorporating a gender perspective in their analysis. In a similar vein, refugee law has long been the object of criticism for failing to recognize and respond appropriately to women’s and sexual minorities’ experiences of persecution. The present analysis attempts to depict the challenges faced by the European Court of Human Rights (ECtHR) and U.S. courts when adjudicating cases involving asylum claims with a gendered dimension. By providing a comparison between the adjudicating strategies of international and national jurisdictions, the article aims to identify common or distinctive approaches in addressing gender-based claims. The paper argues that, despite the different nature of the judicial bodies and the different legal instruments applied respectively, judges face similar challenges in this context and often fail to qualify and address the gendered dimensions of asylum claims properly. The ECtHR plays a fundamental role in safeguarding human rights protection in Europe, not only for European citizens but also for people fleeing violence, war, and dire living conditions. However, this role becomes more difficult to fulfill, not only because of the obvious institutional constraints but also because cases related to claims of asylum seekers concern a domain closely linked to State sovereignty. Amid the current “refugee crisis,” risk assessment performed by national authorities, as in the process of asylum determination, is shaped by wider geopolitical and economic considerations. The failure to recognize and duly address the gendered dimension of non-refoulement claims, one of the many shortcomings of these processes, is reflected in the decisions of the ECtHR. As regards U.S. case law, the study argues that U.S.
courts either fail to draw any connection between asylum claims and their gendered dimension or tend to approach gender-based claims through the lens of the “political opinion” or “membership of a particular social group” grounds of fear of persecution. This exercise becomes even more difficult, taking into account that U.S. asylum law inappropriately qualifies gender-based claims. The paper calls for more sociologically informed decision-making practices and for a more contextualized and relational approach in the assessment of the risk of ill-treatment and persecution. Such an approach is essential for unearthing the gendered patterns of persecution and addressing related claims effectively, thus securing the human rights of asylum seekers.

Keywords: asylum, European court of human rights, gender, human rights, U.S. courts

Procedia PDF Downloads 107
422 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such condition is neurological disorder, which is rampant among the old-age population and is increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, and tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In the clinics, the assessment of tremor is done through a manual clinical rating task such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed a challenge in differentiating a parkinsonian tremor from a pure (essential) tremor, which is essential in providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that keeps checking their health condition and coordinates with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor which can accurately differentiate pure tremor from parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional, and action tremor in such patients. Features such as time-frequency-domain, wavelet-based, and fast-Fourier-transform-based cross-correlation measures were extracted from the tri-axial signal and used as the input feature space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum-covariance maximum-correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing the PT (Parkinsonian tremor) and ET (essential tremor) types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, with the best performance achieved by K-nearest neighbors and Support Vector Machine classifiers.
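The classify-from-features idea above can be sketched in miniature. The following is an illustrative toy example only, not the authors' implementation: it simulates two tremor classes at their typical frequency ranges, extracts one crude dominant-frequency feature, and uses a hand-rolled 1-nearest-neighbour rule in place of the library KNN/SVM classifiers named in the abstract. The sampling rate, noise level, and training frequencies are all assumptions.

```python
# Illustrative toy sketch only, NOT the authors' pipeline.
import math
import random

random.seed(0)
FS = 100.0  # assumed sampling rate (Hz)

def window(freq, n=200):
    """One synthetic single-axis accelerometer trace: sinusoid plus noise."""
    return [math.sin(2 * math.pi * freq * i / FS) + random.gauss(0, 0.05)
            for i in range(n)]

def dominant_freq(sig):
    """Crude dominant-frequency estimate from the zero-crossing count."""
    crossings = sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)
    return crossings * FS / (2 * len(sig))

# Parkinsonian rest tremor is typically ~4-6 Hz, essential tremor ~6-9 Hz.
train = ([(dominant_freq(window(f)), "PT") for f in (4.5, 5.0, 5.5)]
         + [(dominant_freq(window(f)), "ET") for f in (7.0, 7.5, 8.0)])

def classify(sig):
    """Label a trace by its nearest training example in feature space."""
    f = dominant_freq(sig)
    return min(train, key=lambda pair: abs(pair[0] - f))[1]

print(classify(window(5.2)))  # parkinsonian-range input -> PT
print(classify(window(7.8)))  # essential-range input -> ET
```

The study's actual feature space (wavelet and FFT cross-correlation features over tri-axial signals) is far richer; the sketch only shows the nearest-neighbour classification step in isolation.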

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 151
421 Overview of Research Contexts about XR Technologies in Architectural Practice

Authors: Adeline Stals

Abstract:

The transformation of architectural design practices has been underway for almost forty years due to the development and democratization of computer technology. New and more efficient tools are constantly being proposed to architects, amplifying a technological wave that sometimes stimulates them and sometimes overwhelms them, depending essentially on their digital culture and the context (socio-economic, structural, organizational) in which they work on a daily basis. Our focus is on VR, AR, and MR technologies dedicated to architecture. The commercialization of affordable headsets such as the Oculus Rift and HTC Vive, or lower-tech options such as the Google Cardboard, has made these technologies far more accessible. In that regard, researchers report growing interest in these tools among architects, given the new perspectives they open up in terms of workflow, representation, collaboration, and client involvement. However, studies rarely mention the consequences of the studied sample on the results. Our research provides an overview of VR, AR, and MR research across a corpus of papers selected from conferences and journals. A closer look at the samples of these research projects highlights the necessity of taking the context of studies into consideration in order to develop tools truly dedicated to the real practices of specific architect profiles. This literature review formalizes milestones for future challenges to address. The methodology applied is based on a systematic review of two sources of publications. The first is the Cumincad database, which gathers publications from conferences devoted exclusively to digital technologies in architecture. The second part of the corpus is based on journal publications, selected considering their ranking on Scimago. 
Among the journals in the predefined category ‘architecture’ and in Quartile 1 for 2018 (the last update when consulted), we retained the ones related to the architectural design process: Design Studies, CoDesign, Architectural Science Review, Frontiers of Architectural Research and Archnet-IJAR. Besides those journals, IJAC, not classified in the ‘architecture’ category, was selected by the author for its adequacy with architecture and computing. For all requests, the search terms were ‘virtual reality’, ‘augmented reality’, and ‘mixed reality’ in the title and/or keywords, for papers published between 2015 and 2019 (inclusive). This time frame was defined considering the fast evolution of these technologies in the past few years. Accordingly, the systematic review covers 202 publications. The literature review of studies about XR technologies establishes the state of the art of the current situation. It highlights that studies are mostly based on experimental contexts with controlled conditions (e.g. pedagogical settings) or on practices established in large architectural offices of international renown. However, few studies focus on the strategies and practices developed by offices of smaller size, which represent the largest part of the market. Indeed, a European survey studying the architectural profession in Europe in 2018 reveals that 99% of offices are composed of fewer than ten people, and 71% of only one person. The study also showed that the number of medium-sized offices is continuously decreasing in favour of smaller structures. As a result, a frontier seems to remain between the worlds of research and practice, especially for the majority of small architectural practices making modest use of technology. This paper constitutes a reference for the next step of this research and for further research worldwide by facilitating its contextualization.

Keywords: architectural design, literature review, SME, XR technologies

Procedia PDF Downloads 105
420 Experimental Study of the Antibacterial Activity and Modeling of Non-Isothermal Crystallization Kinetics of Sintered Seashell Reinforced Poly(Lactic Acid) and Poly(Butylene Succinate) Biocomposites Planned for 3D Printing

Authors: Mohammed S. Razali, Kamel Khimeche, Dahah Hichem, Ammar Boudjellal, Djamel E. Kaderi, Nourddine Ramdani

Abstract:

The use of additive manufacturing technologies has revolutionized various aspects of our daily lives. In particular, 3D printing has greatly advanced biomedical applications. While fused filament fabrication (FFF) technologies have made it easy to produce or prototype various medical devices, it is crucial to minimize the risk of contamination. New materials with antibacterial properties, such as those containing compounded silver nanoparticles, have emerged on the market. In a previous study, we prepared a new sintered seashell filler (SSh) from bio-based seashells found along the Mediterranean coast using a suitable heat treatment process. We then prepared a series of polylactic acid (PLA) and polybutylene succinate (PBS) biocomposites filled with these SSh particles using a melt-mixing technique with a twin-screw extruder, to use them as feedstock filaments for 3D printing. The study consisted of two parts: an evaluation of the antibacterial activity of the newly prepared PLA and PBS biocomposites reinforced with sintered seashell, and an experimental and modeling analysis of the non-isothermal crystallization kinetics of these biocomposites. In the first part, the bactericidal activity of the biocomposites was examined against three bacteria: the Gram-negative E. coli and Pseudomonas aeruginosa, and the Gram-positive Staphylococcus aureus. The PLA-based biocomposite containing 20 wt.% of SSh particles exhibited inhibition zones with radial diameters of 8 mm and 6 mm against E. coli and P. aeruginosa, respectively, while no antibacterial activity was observed against Staphylococcus aureus. In the second part, the focus was on investigating the effect of the sintered seashell filler particles on the non-isothermal crystallization kinetics of PLA and PBS 3D-printing composite materials. 
The objective was to understand the impact of the filler particles on the crystallization mechanism of both PLA and PBS during the cooling of a melt-extruded filament in FFF, in order to manage the dimensional accuracy and mechanical properties of the final printed part. We conducted a non-isothermal melt crystallization kinetic study of a series of PLA-SSh and PBS-SSh composites using differential scanning calorimetry at various cooling rates. We analyzed the obtained kinetic data using different crystallization kinetic models, namely the modified Avrami, Ozawa, and Mo methods. In dynamic mode, which describes relative crystallinity as a function of temperature, the half-crystallization time (t1/2) of neat PLA decreased from 17 min to 7.3 min for PLA + 5 wt.% SSh, and the t1/2 of virgin PBS was reduced from 3.5 min to 2.8 min for the composite containing 5 wt.% SSh. We found that the SSh particles coated with stearic acid acted as nucleating agents and showed nucleation activity, as observed through polarized optical microscopy. Moreover, we evaluated the effective energy barrier of the non-isothermal crystallization process using the isoconversional methods of Flynn-Wall-Ozawa (F-W-O) and Kissinger-Akahira-Sunose (K-A-S). The study provides significant insights into the crystallization behavior of PLA and PBS biocomposites.
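As a point of reference for the Avrami analysis mentioned above, the classical relation X(t) = 1 - exp(-Z * t^n) gives the half-crystallization time directly as t1/2 = (ln 2 / Z)^(1/n). The short sketch below uses invented rate constants, not the paper's fitted parameters; it merely shows how a larger rate constant Z, such as that produced by a nucleating filler, shortens t1/2 in the way the reported 17 min to 7.3 min shift reflects.

```python
# Minimal sketch of the classical Avrami relation (illustration values only).
import math

def crystallinity(t, Z, n):
    """Relative crystallinity at time t under Avrami kinetics."""
    return 1.0 - math.exp(-Z * t ** n)

def t_half(Z, n):
    """Time at which X(t) reaches 0.5: (ln 2 / Z)**(1/n)."""
    return (math.log(2) / Z) ** (1.0 / n)

n = 2.5                        # assumed Avrami exponent
Z_neat, Z_filled = 1e-4, 1e-3  # assumed rate constants (filled > neat)

print(round(t_half(Z_neat, n), 1))    # neat polymer: longer t1/2
print(round(t_half(Z_filled, n), 1))  # filled polymer: shorter t1/2
```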

Keywords: avrami model, bio-based reinforcement, dsc, gram-negative bacteria, gram-positive bacteria, isoconversional methods, non-isothermal crystallization kinetics, poly(butylene succinate), poly(lactic acid), antibacterial activity

Procedia PDF Downloads 79
419 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is the generally accepted term for the rapid cooling of a solid that is overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching has been to find ways to reduce the total time of the transient process. Computational experiments were performed to simulate quenching by a falling liquid nitrogen film of an extremely overheated vertical copper plate with a structured capillary-porous coating. The coating was produced by directed plasma spraying. Due to the complexity of the physical picture of quenching, from chaotic processes to phase transition, the mechanism of heat transfer during quenching is still not sufficiently understood. To the best of our knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand; here we have more models and hypotheses than firmly established facts. The peculiarities of the quench front dynamics and heat transfer in the transient process are studied. The numerical model developed here determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientation of crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect is due to the fact that initialization of the quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. 
Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and cause local liquid-solid micro-contacts to appear even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the surface temperature change over time. Knowledge of the quench front velocity and the total time of the transient process is required for solving practically important problems of nuclear reactor safety.

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 128
418 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of the ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated with local geological and geotechnical conditions. It is evident from past earthquakes (e.g. 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the important influence of local geology on ground response. Observations from damaging earthquakes (e.g. Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification in the soft soils of Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites in the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves exciting a soil profile using the horizontal component of motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. 
Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, significant deviation between the two models also results from other influencing factors, such as the assumptions made in 1D site response analysis, the lack of accurate shear wave velocity values, and the nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher in the non-linear analysis than in the equivalent linear analysis. Hence, the nonlinear behavior of the soil underscores the urgent need to study the dynamic characteristics of the soft soil deposit so as to derive site-specific design spectra for the Kathmandu valley and build structures resilient to future damaging earthquakes.
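For orientation, the simplest quantity underlying any 1D site-response model of the kind discussed above is the fundamental period of a uniform soil layer over bedrock, T = 4H/Vs. The values below are assumed examples for illustration, not measured properties of the Kathmandu valley deposits.

```python
# Back-of-the-envelope sketch, not a result from the paper: fundamental
# period of a uniform soil layer over rigid bedrock, T = 4H / Vs.
H = 300.0   # assumed soil deposit thickness (m)
Vs = 250.0  # assumed average shear-wave velocity (m/s)

T = 4.0 * H / Vs  # fundamental site period (s)
f = 1.0 / T       # fundamental site frequency (Hz)
print(f"T = {T:.2f} s, f = {f:.3f} Hz")  # T = 4.80 s, f = 0.208 Hz
```

Deep, soft deposits (large H, low Vs) thus push the site period toward the long-period range, which is one reason soft fluvio-lacustrine basins amplify certain ground motions so strongly.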

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 287
417 User-Controlled Color-Changing Textiles: From Prototype to Mass Production

Authors: Joshua Kaufman, Felix Tan, Morgan Monroe, Ayman Abouraddy

Abstract:

Textiles and clothing have been a staple of human existence for millennia, yet the basic structure and functionality of textile fibers and yarns have remained unchanged. While color and appearance are essential characteristics of a textile, an advancement in the fabrication of yarns that allows for user-controlled dynamic changes to the color or appearance of a garment has been lacking. Touch-activated and photosensitive pigments have been used in textiles, but these technologies are passive and cannot be controlled by the user. The technology described here allows the owner to control both when and in what pattern the fabric color change takes place. In addition, the manufacturing process is compatible with mass production of the user-controlled, color-changing yarns. The yarn fabrication utilizes a fiber spinning system that can produce either monofilament or multifilament yarns. For products requiring a more robust fabric (backpacks, purses, upholstery, etc.), larger-diameter monofilament yarns with a coarser weave are suitable. Such yarns are produced using a thread-coater attachment to encapsulate a 38-40 AWG metal wire inside a polymer sheath impregnated with thermochromic pigment. Conversely, products such as shirts and pants require yarns that are more flexible and soft against the skin, and therefore comprise multifilament yarns of much smaller-diameter individual fibers. Embedding a metal wire in a multifilament fiber spinning process had not been realized to date, and this research required collaboration with Hills, Inc., to design a liquid metal-injection system combined with fiber spinning. The new system injects molten tin into each of 19 filaments being spun simultaneously into a single yarn. The resulting yarn contains 19 filaments, each with a tin core surrounded by a polymer sheath impregnated with thermochromic pigment. The color change we demonstrate is distinct from garments containing LEDs that emit light in various colors. 
The pigment itself changes its optical absorption spectrum to appear a different color. The thermochromic color change is induced by a temperature change in the inner metal wire within each filament when current is applied from a small battery pack. The temperature necessary to induce the color change is near body temperature and not noticeable by touch. The prototypes already developed either use a simple push button to activate the battery pack or are activated wirelessly via a smartphone app over Wi-Fi. The app allows the user to choose among different activation patterns of stripes that appear in the fabric continuously. The power requirements are mitigated by a large hysteresis between the activation temperature of the pigment and the temperature at which full color return occurs; this was made possible by a collaboration with Chameleon International to develop a new, customized pigment. This technology enables a never-before-seen capability: user-controlled, dynamic color and pattern change in large-area woven and sewn textiles and fabrics, with wide-ranging applications from clothing and accessories to furniture and fixed-installation housing and business décor. The ability to activate through Wi-Fi opens up possibilities for the textiles to be part of the ‘Internet of Things.’ Furthermore, this technology is scalable to mass-production levels for wide-scale market adoption.
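The resistive-heating principle described here can be illustrated with a back-of-the-envelope estimate. Every number below (core diameter, conductor length, battery voltage, and the approximate resistivity of tin) is an assumption for illustration, not a specification from the study.

```python
# Rough, illustrative Joule-heating estimate for a thin tin core driven by
# a small battery pack: R = rho * L / A, P = V**2 / R. Assumed values only.
import math

rho_tin = 1.1e-7  # resistivity of tin, ohm*m (approximate literature value)
d = 100e-6        # assumed tin-core diameter (m)
L = 1.0           # assumed conductor length (m)
V = 3.0           # assumed battery voltage (two common cells in series)

A = math.pi * (d / 2.0) ** 2  # cross-sectional area (m^2)
R = rho_tin * L / A           # wire resistance
P = V ** 2 / R                # dissipated power
print(f"R = {R:.1f} ohm, P = {P:.2f} W")
```

On these assumed numbers the dissipated power is well under a watt, which is consistent with the abstract's claim that a pair of common batteries suffices once the pigment's thermal hysteresis limits how often heating is needed.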

Keywords: activation, appearance, color, manufacturing

Procedia PDF Downloads 276
416 Addressing the Gap in Health and Wellbeing Evidence for Urban Real Estate Brownfield Asset Management Social Needs and Impact Analysis Using Systems Mapping Approach

Authors: Kathy Pain, Nalumino Akakandelwa

Abstract:

The study explores the potential to fill a gap in health and wellbeing evidence for purposeful urban real estate asset management, to make investment a powerful force for societal good. As part of a five-year programme investigating the root causes of unhealthy urban development, funded by the United Kingdom Prevention Research Partnership (UKPRP), the study pilots the use of a systems mapping approach to identify drivers of and barriers to the incorporation of health and wellbeing evidence in urban brownfield asset management decision-making. Urban real estate not only provides space for economic production but also contributes to the quality of life in the local community. Yet market approaches to urban land use have, until recently, insisted that neo-classical, technology-driven efficient allocation of economic resources should inform acquisition, operational, and disposal decisions. Buildings in locations with declining economic performance have thus been abandoned, leading to urban decay. Property investors are now recognising the inextricable connection between sustainable urban production and quality of life in local communities. The redevelopment and operation of brownfield assets recycles existing buildings, minimising embodied carbon emissions. It also retains established urban spaces with which local communities identify, and regenerates places to create a sense of security, economic opportunity, social interaction, and quality of life. The social implications of urban real estate for health and wellbeing, and the increased adoption of benign sustainability guidance in urban production, are driving the need to consider how these factors affect brownfield real estate asset management decisions. Interviews with upstream real estate decision-makers in the study find that local social needs and impact analysis is becoming a commercial priority for large-scale urban real estate development projects. 
Evidence of the social value-added of proposed developments is increasingly considered essential to secure local community support and planning permissions, and to attract sustained inward long-term investment capital flows for urban projects. However, little is known about the contribution of population health and wellbeing to socially sustainable urban projects and the monetary value of the opportunity this presents to improve the urban environment for local communities. We report early findings from collaborations with two leading property companies managing major investments in brownfield urban assets in the UK to consider how the inclusion of health and wellbeing evidence in social valuation can inform perceptions of brownfield development social benefit for asset managers, local communities, public authorities and investors for the benefit of all parties. Using holistic case studies and systems mapping approaches, we explore complex relationships between public health considerations and asset management decisions in urban production. Findings indicate a strong real estate investment industry appetite and potential to include health as a vital component of sustainable real estate social value creation in asset management strategies.

Keywords: brownfield urban assets, health and wellbeing, social needs and impact, social valuation, sustainable real estate, systems mapping

Procedia PDF Downloads 64
415 Development and Experimental Evaluation of a Semiactive Friction Damper

Authors: Juan S. Mantilla, Peter Thomson

Abstract:

Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage of being optimal only within a range of sliding forces; outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of the device's efficiency varying with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In that case the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary over time, which means the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semi-active variable friction damper (SAVFD) in reduced scale to reduce vibrations of structures subjected to earthquakes. 
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor controlled by (3) an Arduino board, which acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on the dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally in shaking-table tests using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations compared to the uncontrolled structure.

Keywords: earthquake response, friction damper, semiactive control, shaking table

Procedia PDF Downloads 377
414 Arc Plasma Application for Solid Waste Processing

Authors: Vladimir Messerle, Alfred Mosse, Alexandr Ustimenko, Oleg Lavrichshev

Abstract:

Hygiene and sanitary studies of typical medical-biological waste produced in Kazakhstan, Russia, Belarus and other countries show that its risk to the environment is much higher than that of most chemical wastes. For example, the toxicity of solid waste (SW) containing cytotoxic drugs and antibiotics is comparable to the toxicity of radioactive waste of high and medium level activity. This report presents the results of a thermodynamic analysis of thermal processing of SW and of experiments at the plasma unit developed for SW processing. Thermodynamic calculations showed that the maximum yield of synthesis gas from plasma gasification of SW in air and steam mediums is achieved at a temperature of 1600 K. By air plasma gasification of SW, high-calorific synthesis gas with a concentration of 82.4% (CO – 31.7%, H2 – 50.7%) can be obtained, and by steam plasma gasification, a concentration of 94.5% (CO – 33.6%, H2 – 60.9%). The specific heat of combustion of the synthesis gas produced by air gasification amounts to 14267 kJ/kg, and by steam gasification to 19414 kJ/kg. At the optimal temperature (1600 K), the specific power consumption for air gasification of SW is 1.92 kWh/kg, and for steam gasification 2.44 kWh/kg. An experimental study was carried out in a plasma reactor, a batch (periodic-action) device. An arc plasma torch of 70 kW electric power is used for SW processing. The SW feed rate was 30 kg/h, and the flow of plasma-forming air was 12 kg/h. Under the influence of the air plasma flame, the weight-average temperature in the chamber reaches 1800 K. Gaseous products are taken out of the reactor into the flue gas cooling unit, while the condensed products accumulate in the slag formation zone. The cooled gaseous products enter the gas purification unit, after which they are supplied to the analyzer via the gas sampling system. The ventilation system maintains a negative pressure in the reactor of up to 10 mm of water column. 
Condensed products of SW processing are removed from the reactor after shutdown. Based on the experiments on SW plasma gasification, the reactor operating conditions were determined, the exhaust gas was analyzed, and the residual carbon content in the slag was determined. Gas analysis showed the following composition at the exit of the gas purification unit (vol.%): CO – 26.5, H2 – 44.6, N2 – 28.9. The total syngas concentration was 71.1%, which agrees well with the thermodynamic calculations: the discrepancy between experiment and calculation in the yield of the target syngas did not exceed 16%. The specific power consumption for SW gasification in the plasma reactor, according to the experimental results, amounted to 2.25 kWh/kg of working substance. No harmful impurities were found in either the gaseous or the condensed products of SW plasma gasification. Comparison of experimental results and calculations showed good agreement. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.607.21.0118, project RFMEF160715X0118).
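The concentration figures quoted above are internally consistent, as a quick arithmetic check shows (all values taken directly from the abstract):

```python
# Total syngas concentration (CO + H2) and the relative discrepancy between
# the thermodynamic calculation and the experiment for air gasification.
co_calc, h2_calc = 31.7, 50.7  # calculated composition, vol.%
co_exp, h2_exp = 26.5, 44.6    # measured composition, vol.%

syngas_calc = co_calc + h2_calc  # 82.4 vol.%
syngas_exp = co_exp + h2_exp     # 71.1 vol.%
discrepancy = (syngas_calc - syngas_exp) / syngas_calc * 100.0

print(f"calculated {syngas_calc:.1f} vol.% vs measured {syngas_exp:.1f} vol.%")
print(f"discrepancy = {discrepancy:.1f}%")  # ~13.7%, under the quoted 16%
```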

Keywords: coal, efficiency, ignition, numerical modeling, plasma-fuel system, plasma generator

Procedia PDF Downloads 247
413 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number

Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza

Abstract:

The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at low and moderate Reynolds numbers typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high-Reynolds-number applications, typical of commercial airliners and large aircraft. However, in order to increase efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in terms of airfoil design and the application of drag reduction techniques. The objective of the present work is to analyze and compare the performance of a standard NACA 0012 profile against one featuring a wall protuberance or surface perturbation. A computational model, based on the finite volume method, is employed to evaluate the effect of the presence of geometrical distortions on the wall. The performance evaluation is achieved in terms of variations of the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under conditions of incompressible, subsonic, transient flow. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled, smooth artificial roughness is to alter the turbulent boundary layer. It is shown in the present work that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, a positive one. The computational model was implemented using the unstructured, FVM-based open-source C++ platform OpenFOAM. 
A number of numerical experiments were carried out at a Reynolds number of 5x10^4, based on the chord length and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky subgrid-scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in overall performance is observed, including an arguably large increase in the lift-to-drag ratio for both angles of attack and a size reduction of the laminar separation bubble (LSB) at the 12° angle of attack.
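For context, the chord-based Reynolds number fixes the flow similarity of these simulations via Re = U * c / nu. Only Re = 5x10^4 comes from the abstract; the chord length and air viscosity below are assumed example values used to back out an implied free-stream velocity.

```python
# Sketch of the governing similarity parameter (assumed c and nu).
Re = 5.0e4       # chord-based Reynolds number stated in the study
c = 0.1          # assumed chord length (m)
nu_air = 1.5e-5  # kinematic viscosity of air, m^2/s (approximate)

U = Re * nu_air / c  # implied free-stream velocity (m/s)
print(f"U = {U:.1f} m/s")  # U = 7.5 m/s
```

Velocities of this order illustrate why such Reynolds numbers are representative of small UAVs rather than commercial aircraft.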

Keywords: CFD, LES, Lift-to-drag ratio, LSB, NACA 0012 airfoil

Procedia PDF Downloads 383
412 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education

Authors: Liudmyla Vesper

Abstract:

The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of whole, well-educated persons. Both family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering" and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it in practice. Effective education depends on the creation of stable neural structures during the learning process, which allow knowledge to be preserved and extended throughout life. The author proposes a method of integrated lessons (cases) based on maths patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success", April 22-23, 2006, Yerevan, Armenia, within the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. 
This allows students to retain and build on knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory; this is how stable neural structures are created. The author also suggests that a similar method could be applied to the training of artificial neural networks, although this assumption requires further research and verification. The educational method and program proposed by the author meet modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as fully as possible and direct it toward the preservation and development of individual talents. According to the methodology, at the early stages of learning students understand the connections between school subjects (the so-called "sciences" and "humanities") and, in real life, apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.

Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem

Procedia PDF Downloads 57
411 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung to the fungus Aspergillus fumigatus range from invasive aspergillosis in the immunocompromised, to a fungal ball (aspergilloma) within a pre-existing lung cavity in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase its diagnostic specificity. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia ( > 500) over the last 3 years who underwent the full panel of serologic tests to exclude ABPA. Total IgE, specific IgE, and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years ± SD 15.8), 30 had positive specific IgE against Aspergillus fumigatus (37.5%). Thirteen patients fulfilled the Modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis.
Median total IgE levels in patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L respectively (non-significant), while median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible, and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001), confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma in this part of India are sensitized to Aspergillus fumigatus. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would show whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ from the Gm3-negative group in this respect. Serologic tests with clearly defined cut-offs are a valuable adjunct in the diagnosis of ABPA.
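The group comparison and cut-off analysis described above can be sketched as follows. The data below are simulated to roughly match the reported group means (the SDs of the possible and not-ABPA groups are assumptions, as they are not reported), and the scipy-based workflow is an illustrative stand-in for the authors' Statgraphics analysis, not their actual pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated Gm3 (anti-Aspergillus fumigatus IgG, mgA/L) per group
definite = rng.normal(125.17, 54.84, 13)   # definite ABPA, n=13 (reported mean/SD)
possible = rng.normal(18.61, 10.0, 15)     # possible ABPA, n=15 (SD assumed)
no_abpa  = rng.normal(30.05, 15.0, 52)     # not ABPA, n=52 (SD assumed)

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(definite, possible, no_abpa)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")

# Applying the proposed cut-off Gm3 >= 80 mgA/L as a rule-in test for definite ABPA
cutoff = 80.0
non_definite = np.concatenate([possible, no_abpa])
sensitivity = np.mean(definite >= cutoff)     # definite cases at or above cut-off
specificity = np.mean(non_definite < cutoff)  # non-definite cases below cut-off
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

With the large separation between the definite group and the others, the ANOVA p-value is far below 0.001 and the 80 mgA/L cut-off yields high specificity, mirroring the pattern the abstract reports.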

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 207
410 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries

Authors: Eva Masson, Andrea Kübler

Abstract:

Anxiety disorders (AD) are the most prevalent psychological disorders, yet only a minority of affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (per the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common way to model such disorders in a lab setting, with most paradigms focusing on the relationship between behavior and neurophysiology. These tasks typically place participants in a situation where they must make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS; however, there is currently no consensus on the direction of the frontal activation. The authors present a modified version of the T-Maze paradigm, a motivational-conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational-conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict; the patterns were nevertheless similar enough to allow comparison of the number of trials categorized as 'overt conflict' between blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average in under 10% of trials per block.
However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to non-conflict trials. More specifically, we expect elevated alpha-band power in the left frontal electrodes at around 200 ms post-cue, compared to the right (i.e., relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal-asymmetry discussion and its relationship with the BIS. Furthermore, since the present task focuses on a very particular type of motivational approach-avoidance conflict, it opens the door to further variations of the paradigm introducing the different kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
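A frontal alpha asymmetry index of the kind discussed above is commonly computed as log alpha power at a right frontal electrode minus log alpha power at its left homologue. The sketch below uses synthetic signals and a Welch band-power estimate; the channel labels and parameters are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs):
    """Mean power spectral density in the 8-13 Hz alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)  # 1-s windows -> 1 Hz resolution
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

def frontal_asymmetry(left, right, fs):
    """ln(right alpha) - ln(left alpha). Positive values indicate more alpha on
    the right, conventionally read as relatively lower right-hemisphere activity."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Synthetic example: a 10 Hz rhythm stronger on the "left" channel (e.g., F3)
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
left  = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)   # F3-like
right = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)   # F4-like

asym = frontal_asymmetry(left, right, fs)
print(asym)  # negative here: more alpha on the left channel
```

In an event-related design such as the one described, this index would be computed per trial in short post-cue windows (e.g., around 200 ms and 800 ms) rather than over a whole 4-second segment.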

Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG

Procedia PDF Downloads 35
409 The Functionality of Ovarian Follicle on Steroid Hormone Secretion under Heat Stress

Authors: Petnamnueng Dettipponpong, Shuen E. Chen

Abstract:

Heat stress is known to have negative effects on reproductive functions such as follicular development and ovulation. This study investigated the specific effects of heat stress on steroid hormone secretion by ovarian follicle cells, particularly in relation to the expression of apolipoprotein B (ApoB) and microsomal triglyceride transfer protein (MTP). Primary granulosa and theca cells were collected from follicles and cultured under heat stress conditions (42 °C) for various periods; controls were maintained under normal conditions (37.5 °C). The culture medium was collected at different time points to measure progesterone and estradiol levels using ELISA kits. ApoB and MTP expression levels were analyzed by western blot using in-house antibodies. Data were assessed by one-way ANOVA with Duncan's new multiple-range test; results are expressed as mean ± S.E., and differences were considered significant at P<0.05. The results showed that heat stress significantly increased progesterone secretion by granulosa cells, with the peak observed after 13 hours of recovery under thermoneutral conditions; estradiol secretion by theca cells was not affected. Heat stress also had a significant negative effect on granulosa cell viability. Additionally, ApoB and MTP expression was differentially regulated by heat stress: ApoB expression in theca cells was transiently promoted, while ApoB expression in granulosa cells was consistently suppressed, and MTP expression increased after 5 hours of recovery in both cell types. These findings suggest a mechanism by which chicken follicle cells export cellular lipids as very low-density lipoprotein (VLDL) in response to thermal stress.
These results contribute to our understanding of the role of ApoB and MTP in steroidogenesis and lipid metabolism under heat stress conditions. Overall, the study demonstrates that heat stress stimulates steroidogenesis in granulosa cells, affecting progesterone secretion, and that ApoB and MTP expression are differentially regulated by heat stress, indicating a potential mechanism for the export of cellular lipids in response to thermal stress.

Keywords: heat stress, granulosa cells, theca cells, steroidogenesis, chicken, apolipoprotein B, microsomal triglyceride transfer protein

Procedia PDF Downloads 70
408 Seroprevalence of Middle East Respiratory Syndrome Coronavirus (MERS-Cov) Infection among Healthy and High Risk Individuals in Qatar

Authors: Raham El-Kahlout, Hadi Yassin, Asmaa Athani, Marwan Abou Madi, Gheyath Nasrallah

Abstract:

Background: Since its first isolation in September 2012, Middle East respiratory syndrome coronavirus (MERS-CoV) has spread across 27 countries, infecting more than two thousand individuals with a high case fatality rate. MERS-CoV-specific antibodies are widely found in dromedary camels, and shedding of similar viruses has been detected in humans in the same regions, suggesting that camels play a central role in MERS epidemiology. Interestingly, MERS-CoV infection has also been reported to be asymptomatic or to cause influenza-like mild illness. Therefore, in a country like Qatar (bordering Saudi Arabia), where camels are widespread, serological surveys are important to explore the role of camels in MERS-CoV transmission. However, strategic serological surveillance of MERS-CoV among populations, particularly in endemic countries, is infrequent. In the absence of a clear epidemiological picture, cross-sectional MERS antibody surveys in human populations are of global concern. Method: We performed comparative serological screening of 4719 healthy blood donors, 135 baseline case contacts (high-risk individuals), and four PCR-confirmed MERS patients for the presence of anti-MERS IgG. Initially, samples were screened using the Euroimmun anti-MERS-CoV IgG ELISA kit, the only commercial kit available on the market and recommended by the CDC as a screening kit. To confirm the ELISA results, further serological testing was performed for all borderline and positive samples using two assays: the Euroimmun anti-MERS-CoV IgG and IgM indirect immunofluorescence test (IIFT) and a pseudoviral particle neutralizing assay (PPNA). Additionally, to test the cross-reactivity of anti-MERS-CoV antibodies with other members of the coronavirus family, borderline and positive samples were tested for IgG antibodies against the following viruses: SARS, HCoV-229E, and HKU1, using the Euroimmun IIFT for SARS and HCoV-229E and ELISA for HKU1.
Results: Of all 4858 screened samples, 15 [10 donors (0.21%, 10/4719), 1 case contact (0.77%, 1/130), and 3 patients (75%, 3/4)] were reactive or borderline for anti-MERS IgG by ELISA. However, only 7 (0.14%) of them tested positive by IIFT, and only 3 (0.06%) were confirmed by the specific anti-MERS PPNA. Interestingly, one donor, assigned to the control group on the basis of a negative anti-MERS IgG ELISA, was reactive for anti-MERS IgM by IIFT and was confirmed by PPNA. Further, our preliminary results showed strong cross-reactivity of anti-MERS-CoV IgG with both anti-HCoV-229E and anti-HKU1 IgG, yet no cross-reactivity with SARS was found. Conclusions: Our findings suggest that MERS-CoV is not heavily circulating in the population of Qatar, which is also indicated by the low number of confirmed cases (only 18) since 2012. Additionally, the presence of antibodies against other pathogenic human coronaviruses may cause false-positive results in both ELISA and IIFT, which stresses the need for further evaluation studies of the available serological assays. This study provides insight into the epidemiology of MERS-CoV in the Qatar population, together with a performance evaluation of the available serologic tests for MERS-CoV in view of serologic status to other human coronaviruses.
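The percentages above are simple binomial proportions. As a sketch, the point estimates can be reproduced, and (hypothetical, not reported in the abstract) 95% Wilson confidence intervals attached, as follows:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# ELISA reactive/borderline counts over group sizes, as reported
groups = {"donors": (10, 4719), "case contacts": (1, 130), "patients": (3, 4)}
for name, (k, n) in groups.items():
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {100 * k / n:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")

# Confirmed by neutralization: 3 of 4858 screened overall
print(f"PPNA-confirmed: {100 * 3 / 4858:.2f}%")
```

The Wilson interval is preferred over the simple normal approximation here because the proportions are very small (the normal approximation would give a lower bound near or below zero for the donor group).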

Keywords: seroprevalence, MERS-CoV, healthy individuals, Qatar

Procedia PDF Downloads 267
407 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and causes all or part of the sediment carried by the water to be deposited in the reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the amount of sedimentation in dam reservoirs and to estimate their efficient lifetime, but mathematical and computational models are now widely used as a suitable tool in reservoir sedimentation studies; these models usually solve the governing equations numerically, for example with the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the amount of sedimentation in the Dez Dam, southern Iran. Both models provide a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport model for Alluvial River Simulation) is a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using stream tubes in a quasi-two-dimensional scheme; it was used to calibrate a 47-year period and to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land, and plays a major role in flood control in the region. The input data, including geometry, hydraulic, and sedimentary data, cover 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years.
Finally, the results obtained were very satisfactory in the delta region: the output from GSTARS4 was almost identical to the 2003 hydrographic profile. Near the dam, however, because the Dez reservoir is long (65 km) and large, vertical currents dominate, making the calculations by the above-mentioned method inaccurate. To solve this problem, we used the empirical reduction method to calculate sedimentation in the downstream area, which gave very good results. Thus, we demonstrated that by combining these two methods a very suitable sedimentation model for the Dez Dam over the study period can be obtained, and that the outputs of the two approaches agree.
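As an illustration of the kind of efficient-life estimate such studies produce, a minimal capacity-depletion sketch is shown below. All numbers are hypothetical, not the Dez Dam's actual figures, and the constant trap efficiency is a simplifying assumption (in practice it declines as the reservoir fills).

```python
def reservoir_life_years(capacity_m3, annual_sediment_inflow_m3,
                         trap_efficiency=0.9, usable_fraction=0.5):
    """Years until trapped sediment fills the usable fraction of capacity,
    assuming constant annual sediment inflow and trap efficiency."""
    trapped_per_year = annual_sediment_inflow_m3 * trap_efficiency
    usable_volume = capacity_m3 * usable_fraction
    return usable_volume / trapped_per_year

# Hypothetical reservoir: 3 km^3 capacity, 20 million m^3/yr of inflowing sediment
years = reservoir_life_years(3.0e9, 20.0e6)
print(f"estimated efficient life: {years:.0f} years")
```

Models like GSTARS4 and HEC-6 refine this crude balance by resolving where along the reservoir the sediment actually deposits, which is what the delta-region comparison above evaluates.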

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 309
406 The Photovoltaic Panel at End of Life: Experimental Study of Metals Release

Authors: M. Tammaro, S. Manzo, J. Rimauro, A. Salluzzo, S. Schiavo

Abstract:

Solar photovoltaic (PV) modules are considered to have a negligible environmental impact compared to fossil energy; nevertheless, their waste management and the corresponding potential environmental hazard need to be considered. The case of the photovoltaic panel is unique because the time lag from manufacturing to decommissioning as waste is usually 25-30 years. The environmental hazard associated with the end of life of PV panels has largely been related to their metal content: the principal concern is the presence of heavy metals such as Cd in thin-film (TF) modules, or Pb and Cr in crystalline silicon (c-Si) panels. At the end of life of PV panels, these dangerous substances could be released into the environment if special requirements for their disposal are not adopted. Nevertheless, only a few experimental studies of metal emissions from crystalline silicon and thin-film panels, and of the corresponding environmental effects, are present in the literature. As part of a study funded by the Italian national consortium for waste collection and recycling (COBAT), the present work aimed to analyze experimentally the potential release of hazardous elements, particularly metals, from PV waste into the environment. In this paper, for the first time, eighteen releasable metals from a large number of photovoltaic panels, both c-Si and TF, manufactured over the last 30 years were investigated, together with the environmental effects assessed by a battery of ecotoxicological tests. Leaching tests were conducted on crushed samples of the PV modules, according to the Italian and European standard procedures for hazard assessment of granular waste and sludge. The sample material was shaken for 24 hours in HDPE bottles with a VELP Rotax 6.8 overhead mixer at room temperature, using pure water (18 MΩ·cm resistivity) as the leaching solution. The liquid-to-solid ratio was 10 (L/S = 10, i.e., 10 liters of water per kg of solid).
The ecotoxicological tests were performed within the subsequent 24 hours: a battery of toxicity tests with bacteria (Vibrio fischeri), algae (Pseudokirchneriella subcapitata), and crustaceans (Daphnia magna) was carried out on the PV panel leachates obtained as described above and immediately stored in the dark at 4 °C until testing. To assess the actual pollution load, a comparison with the current European and Italian regulatory limits was performed. The trend of the leachable metal amounts from panels in relation to manufacturing year was then examined in order to assess the environmental sustainability of PV technology over time. The experimental results were very heterogeneous and showed that photovoltaic panels can represent an environmental hazard: the amounts of some hazardous metals (Pb, Cr, Cd, Ni), for both c-Si and TF panels, exceeded the legal limits, a clear indication of the potential environmental risk of photovoltaic panels as waste without proper management.
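The leaching setup implies a simple conversion from measured leachate concentration to releasable metal per kilogram of crushed panel: release (mg/kg) = concentration (mg/L) × L/S (L/kg). A sketch with hypothetical concentrations and limits follows; neither the concentrations nor the limit values are the paper's measured or regulatory figures.

```python
LS_RATIO = 10.0  # L/S = 10 liters of leachant per kg of crushed panel

def release_mg_per_kg(leachate_mg_per_l, ls_ratio=LS_RATIO):
    """Convert a leachate concentration (mg/L) into releasable metal (mg/kg)."""
    return leachate_mg_per_l * ls_ratio

# Hypothetical leachate concentrations (mg/L) and illustrative limits (mg/kg)
samples = {"Pb": 1.2, "Cd": 0.08, "Cr": 0.5, "Ni": 0.3}
limits_mg_per_kg = {"Pb": 10.0, "Cd": 1.0, "Cr": 50.0, "Ni": 10.0}

for metal, conc in samples.items():
    rel = release_mg_per_kg(conc)
    flag = "EXCEEDS limit" if rel > limits_mg_per_kg[metal] else "ok"
    print(f"{metal}: {rel:.1f} mg/kg ({flag})")
```

This is the arithmetic behind comparing 24-hour leachate results against granular-waste benchmark limits, as done in the study.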

Keywords: photovoltaic panel, environment, ecotoxicity, metals emission

Procedia PDF Downloads 257
405 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges that assumption using an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. This finding gives a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" is widely shared among the major schools of macroeconomics: the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and demand or supply shocks can move the economy away from that path only temporarily, yielding the familiar dichotomy between short-run business cycles and the long-run equilibrium path. This view implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, would no longer be independent of the business cycle. This paper attempts to answer the question using the agent-based macroeconomic model developed by Takahashi and Okada (2010), which serves the purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of the long-term business cycle, and (2) to examine the super-long-run behavior of the capital stock in full-employment economies.
(1) The simulated behaviors of key macroeconomic variables such as output, employment, and real wages showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments of the capital stock: in quantity and in relative cost. The first is straightforward and assumed by many business-cycle theorists. For the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles and fluctuations were accompanied by hysteresis in real wages, interest rates, and investment. In particular, a sequence of simulation runs over a super-long horizon generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of the capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept real wages low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock, due to a high interest rate that was consistent with strong investment.
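Path dependence of a capital-stock-like variable can be illustrated with a toy dynamical system that has multiple stable steady states. This is an illustrative sketch only, not the Takahashi-Okada model: the same law of motion settles at different long-run levels depending purely on the starting point, i.e., on history.

```python
def step(k):
    """Toy capital law of motion with an S-shaped investment response:
    investment outpaces depreciation only once k clears a threshold,
    producing two stable steady states (collapse vs. a high level)."""
    investment = 0.3 * k**2 / (1.0 + 0.1 * k**2)  # S-shaped in k
    depreciation = 0.2 * k
    return k + investment - depreciation

def long_run(k0, periods=10_000):
    """Iterate the law of motion from initial capital k0."""
    k = k0
    for _ in range(periods):
        k = step(k)
    return k

low = long_run(0.5)   # starts below the unstable threshold -> collapses
high = long_run(1.5)  # starts above it -> converges to the high steady state
print(low, high)
```

Solving step(k) = k gives an unstable threshold near k ≈ 0.70 separating the two stable outcomes (k = 0 and k ≈ 14.30); two economies with identical structure but different histories end up at permanently different capital stocks, the hysteresis property the abstract emphasizes.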

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 206
404 Effect of Formulated Insect Enriched Sprouted Soybean /Millet Based Food on Gut Health Markers in Albino Wistar Rats

Authors: Gadanya, A.M., Ponfa, S., Jibril, M.M., Abubakar, S. M.

Abstract:

Background: Edible insects such as grasshoppers are important sources of food for humans and have been consumed as traditional foods by many indigenous communities, especially in Africa, Asia, and Latin America. These communities have developed skills and techniques for harvesting, preparing, consuming, and preserving edible insects, contributing widely to the role of insects in human nutrition. Aim/objective: This study aimed to determine the effect of an insect-enriched sprouted soybean/millet-based food on some gut health markers in albino rats. Methods: Four formulations of complementary foods were prepared: Complementary Food B (CFB), sprouted millet (SM); Complementary Food C (CFC), sprouted soybean (SSB); Complementary Food D (CFD), sprouted soybean and millet (SSBM) in a 50:50 ratio; and Complementary Food E (CFE), insect (grasshopper)-enriched sprouted soybean and millet (SSBMI) in a 50:25:25 ratio. Proximate composition and short-chain fatty acid contents were determined. Thirty albino rats were divided into 5 groups of six rats each. Group 1 (CFA) was fed a basal diet and served as the control group, while groups 2, 3, 4, and 5 were fed the corresponding complementary foods CFB, CFC, CFD, and CFE, respectively, daily for four weeks. Concentrations of fecal protein, serum total carotenoids, and nitric oxide were determined. DNA extraction for molecular isolation and characterization was carried out, followed by PCR, with MEGA 11 software used to construct the phylogenetic tree and NCBI BLAST used for organism identification. Results: Significant increases (P<0.05) in percentage ash, fat, protein, and moisture contents, as well as in short-chain fatty acid (acetate, butyrate, and propionate) concentrations, were recorded in the insect-enriched sprouted composite food (CFE) compared with the CFA, CFB, CFC, and CFD composite foods.
Faecal protein, carotenoid, and nitric oxide concentrations were significantly lower (P<0.05) in group 5 than in groups 1 to 4. Ruminococcus bromii and Bacteroidetes were molecularly isolated and characterized by 16S rRNA sequencing from the sprouted millet/sprouted soybean and the insect-enriched sprouted soybean/sprouted millet based foods, respectively. The presence of these bacterial strains in the faeces of the treated rats indicates that their gut is colonized by beneficial bacteria and hence that gut health improved. Conclusion: The insect-enriched sprouted soybean/sprouted millet based complementary diet showed a high content of ash, fat, protein, and fiber, and could thus increase the availability of short-chain fatty acids, whose role in the host organism cannot be overemphasized. It also decreased the levels of faecal protein and of serum carotenoid and nitric oxide, indicating an improvement in immune system function.

Keywords: gut-health, insect, millet, soybean, sprouted

Procedia PDF Downloads 63
403 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature: for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks; MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better sea level forecast using a combination of several instances of Bayesian Model Averaging (BMA); an ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles and compare them with several existing models from the literature to forecast storm surge levels. We then investigate whether developing a complex ensemble model is indeed needed; to achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
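A skill-weighted ensemble of the kind compared above can be sketched as follows. The inverse-RMSE weighting is an illustrative stand-in for the paper's correlation/standard-deviation weighting schemes, and all series are synthetic, not NYHOPS data.

```python
import math

def rmse(forecast, observed):
    """Root mean square error between a forecast series and observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(observed))

# Synthetic observed surge (m) and two member forecasts of different skill
observed = [0.2, 0.5, 1.1, 1.8, 1.2, 0.6]
model_a = [o + 0.1 for o in observed]  # skillful member (small bias)
model_b = [o + 1.0 for o in observed]  # weak member (large bias)

# Skill weights from a training period: inverse RMSE, normalized to sum to 1
inv = [1 / rmse(model_a, observed), 1 / rmse(model_b, observed)]
weights = [w / sum(inv) for w in inv]

# Weighted ensemble vs. the simple-average benchmark
weighted = [weights[0] * a + weights[1] * b for a, b in zip(model_a, model_b)]
simple = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print(f"weighted RMSE = {rmse(weighted, observed):.3f}, "
      f"simple average RMSE = {rmse(simple, observed):.3f}")
```

Because the skillful member receives most of the weight, the weighted ensemble's RMSE falls well below the simple average's, which is the qualitative result the abstract reports for its proposed schemes against the simple-average benchmark.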

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 306
402 Comparison of Gestational Diabetes Influence on the Ultrastructure of Rectus Abdominis Muscle in Women and Rats

Authors: Giovana Vesentini, Fernanda Piculo, Gabriela Marini, Debora Damasceno, Angelica Barbosa, Selma Martheus, Marilza Rudge

Abstract:

Problem statement: Skeletal muscle is highly adaptable; muscle fiber composition and size can respond to a variety of stimuli, both physiological, such as pregnancy, and metabolic abnormalities, such as diabetes mellitus. This study aimed to analyze the effects of pregnancy-associated diabetes on the rectus abdominis muscle (RA) and to compare these changes between rats and women. Methods: Female Wistar rats were maintained under controlled conditions and distributed into Pregnant (P) and Long-term mild pregnant diabetic (LTMd) groups (n=3 rats/group). Diabetes in rats was induced by streptozotocin (100 mg/kg, sc) on the first day of life, producing a hyperglycemic state of 120-300 mg/dL in adult life. Female rats were mated overnight; on day 21 of pregnancy they were anesthetized and killed for harvesting of the maternal RA. Pregnant women who attended the Diabetes Prenatal Care Clinic of Botucatu Medical School were distributed into Pregnant non-diabetic (Pnd) and Gestational Diabetic (GDM) groups (n=3 women/group). The diagnosis of GDM was established according to ADA criteria (2016). The RA was harvested during cesarean section. Transversal cross-sections of the RA of both women and rats were analyzed by transmission electron microscopy. All procedures were approved by the Ethics Committee on Animal Experiments of the Botucatu Medical School (Protocol Number 1003/2013) and by the Botucatu Medical School Ethical Committee for Human Research in Medical Sciences (CAAE: 41570815.0.0000.5411). Results: The photomicrographs of the RA of rats revealed disorganized Z lines, thinning sarcomeres, and a normal quantity of intermyofibrillar mitochondria in the P group. The LTMd group showed swollen sarcoplasmic reticulum, dilated T tubules, and areas with sarcomere disruption. Ultrastructural analysis of the RA in the Pnd group showed well-organized myofibrils forming intact sarcomeres, organized Z lines, and a normal distribution of intermyofibrillar mitochondria.
The GDM group showed an increase in intermyofibrillar mitochondria, areas with sarcomere disruption, and increased lipid droplets. Conclusion: Pregnancy and diabetes induce adaptations in the ultrastructure of the rectus abdominis muscle in both women and rats, changing the architectural design of these tissues. However, these changes are more severe in rats, possibly because, in addition to high blood glucose levels, the quadrupedal posture subjects the muscle to excessive mechanical tension during pregnancy due to gravity. These findings suggest that such alterations may be a risk factor contributing to the development of muscle dysfunction in women with GDM and may motivate treatment strategies for these patients.

Keywords: gestational diabetes, muscle dysfunction, pregnancy, rectus abdominis

Procedia PDF Downloads 288