Search results for: software comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9452

692 Cultural Identity of Mainland Chinese, Hongkonger and Taiwanese: A Glimpse from Hollywood Film Title Translation

Authors: Ling Yu Debbie Tsoi

Abstract:

After China surpassed the USA as the top Hollywood film market in 2018, Hollywood studios have been adapting taste, preference, casting, and even film title translation to resonate with Chinese audiences. Owing to the huge foreign demand, Hollywood film directors are paying closer attention to the translation of their products, as film titles are entry gates to a film and serve advertising, informative, and aesthetic functions. Beyond film directors and studios, comments on the quality of film title translation also appear on online clip-viewing platforms, online media, and magazines. In particular, netizens in mainland China, Hong Kong, and Taiwan seem to defend the film titles of their own region while despising those of the other two. In view of the endless debates and the lack of systematic analysis of film title translation in Greater China, this study investigates the translation of Hollywood film titles (from English to Chinese) across Greater China based on Venuti's (1991; 1995; 1998; 2001) concepts of domestication and foreignization. To offer a comparison over time, a mini-corpus was built comprising the top 70 most popular Hollywood film titles of 1987-1988, 1997-1998, 2007-2008, and 2017-2018 in Greater China. Altogether, 560 source texts and 1680 target texts from mainland China, Hong Kong, and Taiwan were compared against each other. The three regions are found to have distinctive styles and patterns of translation. For instance, a sizable number of film titles are foreignized in mainland China through literal translation and transliteration, whereas Hong Kong and Taiwan prefer domestication. Hong Kong tends to adopt a more vulgar style, using colloquial Cantonese slang and even swear words and associating characters with negative connotations. English has also been used as a form of domestication in Hong Kong from 1987 to 2018, a strategy never found in mainland China or Taiwan. On the contrary, Taiwanese target texts tend to adopt a cute, child-like style with repetitive words and positive connotations; when English was used in Taiwan, it served as foreignization. As film titles are cultural products of popular culture, it is suspected that Hongkongers seek to develop a cultural identity by adopting a style distinct from mainland China through vulgarization and negativity. Hongkongers also identify themselves as international cosmopolitans, leading to their identification with English. It is also suspected that, owing to the former colonial rule of Japan, Taiwan has adopted a popular culture similar to Japan's, with cute and childlike expressions.

Keywords: cultural identification, ethnic identification, Greater China, film title translation

Procedia PDF Downloads 146
691 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling

Authors: Mohammed El Raey, Moustafa Osman Mohammed

Abstract:

Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management to comply with the Earth system. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is used to assess the safety of the industrial complex, and Failure Mode and Effect Analysis (FMEA) is applied to critical components. Plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and downwind chlorine gas concentration is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contour lines of concentration, and the local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate spillover effects, and Monte Carlo simulations and sensitivity analyses were conducted. These unique structures are balanced within "equilibrium patterns", such as the biosphere, and collectively form a composite index of many distributed feedback flows. The dynamic structures have their own physical and chemical properties and enable a gradual, prolonged incremental pattern. While this spatial model structure is argued from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structures for developing urban environments using optimization software, applied to an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
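As a sketch of the Gaussian plume calculation described above (not the authors' exact implementation), the ground-level concentration downwind of a continuous release can be computed as follows. The Briggs open-country dispersion fits for neutral stability are an assumed choice, and the release parameters in the example are hypothetical:

```python
import math

# Briggs open-country dispersion fits for neutral stability (class D);
# these coefficients are an assumption, not the paper's exact setup.
def sigma_y(x):
    return 0.08 * x / math.sqrt(1 + 0.0001 * x)

def sigma_z(x):
    return 0.06 * x / math.sqrt(1 + 0.0015 * x)

def plume_concentration(Q, u, x, y, z, H):
    """Gaussian plume concentration (kg/m^3) at receptor (x, y, z).

    Q: emission rate (kg/s), u: wind speed (m/s),
    x: downwind distance (m), y: crosswind offset (m),
    z: receptor height (m), H: effective release height (m).
    """
    sy, sz = sigma_y(x), sigma_z(x)
    lateral = math.exp(-y**2 / (2 * sy**2))
    # ground reflection modeled as an image source at height -H
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Ground-level centerline concentration 1 km downwind of a 10 m release point
c = plume_concentration(Q=1.0, u=3.0, x=1000.0, y=0.0, z=0.0, H=10.0)
```

Evaluating the function on a grid of (x, y) points would produce the risk contour lines of concentration that the abstract describes plotting with SURFER®.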

Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology

Procedia PDF Downloads 76
690 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in the city neighborhood and quantifies the percentage of VOCs in the city's air samples. One outcome of the project is updated PM₂.₅ and VOC inventory data for the selected neighborhoods. For examining PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling, and chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis. VOC measurement of the air samples was performed using gas chromatography with flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. Sorbent tubes (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) were used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) to prepare samples for measurement by GC-MS/FID. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed markedly between wildfire and ordinary days: 83.0 μg/m³ during wildfire events versus 23.7 μg/m³ in daily samples. Elevated concentrations of iron, nickel, and manganese were found in all samples, and mercury was detected in some; at sufficiently high doses, these elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs

Procedia PDF Downloads 139
689 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance, addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The first section of the study starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project proposes a PCA-driven methodology that isolates the important characteristics influencing asset returns by reducing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals such as regulatory compliance or sustainability standards. This model provides a more comprehensive representation of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market conditions. Compared with portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, greater resilience to market downturns, and better alignment with specified investment objectives. The study also examines the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
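A minimal sketch of the PCA-plus-scalarization idea described above, assuming a long-only portfolio and synthetic return data (the paper's actual model, constraints, and data are not reproduced here). PCA denoises the covariance matrix by keeping the top components; a scalarized objective then trades return against risk:

```python
import numpy as np

def pca_denoised_cov(returns, k):
    """Keep the top-k principal components of the sample covariance
    and return the discarded variance to the diagonal."""
    cov = np.cov(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    top = np.argsort(vals)[::-1][:k]
    approx = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    residual = np.trace(cov - approx) / cov.shape[0]
    return approx + residual * np.eye(cov.shape[0])

def scalarized_weights(mu, cov, risk_aversion=1.0):
    """One point on the return/risk frontier: maximize mu'w - lam*w'Cov*w,
    then clip to long-only and renormalize (a deliberate simplification)."""
    w = np.linalg.solve(2 * risk_aversion * cov, mu)
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full(len(mu), 1 / len(mu))

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 12))   # 500 days, 12 assets
cov_k = pca_denoised_cov(returns, k=3)
w = scalarized_weights(returns.mean(axis=0), cov_k)
```

Sweeping `risk_aversion` over a range of values traces out the multi-objective frontier from which an investor's preferred portfolio can be selected.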

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 42
688 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The examination of both serum lipid fractions and the body's lipid composition is quite informative during the evaluation of obesity stages. Within this context, alterations in lipid parameters are commonly observed, and variations in the fat distribution of the body are also noteworthy. Total cholesterol (TC), triglycerides (TRG), low-density lipoprotein cholesterol (LDL-C), and high-density lipoprotein cholesterol (HDL-C) are considered the basic lipid fractions. Fat deposited in the trunk and extremities may give a considerable amount of information and convey different messages in distinct health states. Ratios can be derived from the distinct fat distribution in these areas; trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are the most recently introduced. In this study, lipid fractions, TLFR, and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbid obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted from a population aged 6 to 18 years; ages and sexes were matched across groups. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University, and written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were obtained and recorded during the physical examination, and body mass index values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis and used to calculate TLFR and TAFR. Systolic (SBP) and diastolic blood pressures (DBP) were measured. Routine biochemical tests including TC, TRG, LDL-C, HDL-C, and insulin were performed. Data were evaluated using SPSS software; a p-value smaller than 0.05 was accepted as statistically significant. There was no difference among the age values and gender ratios of the groups. No statistically significant difference was observed in DBP, TLFR, or serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected in the insulin values of the OB and MO groups relative to the N-BMI group. There were bivariate correlations between LDL-C and TLFR (r=0.396; p=0.037) as well as TAFR (r=0.413; p=0.029) in the MO group. When adjusted for SBP and DBP, the partial correlations were (r=0.421; p=0.032) for LDL-TLFR and (r=0.438; p=0.025) for LDL-TAFR. Even stronger partial correlations were obtained for the same pairs (r=0.475; p=0.019 and r=0.473; p=0.020, respectively) upon controlling for TRG and HDL-C. The stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. These findings suggest that LDL-C may serve as a discriminating parameter between OB and MO children.
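The partial correlations reported above adjust the LDL-to-fat-ratio association for blood pressure (and, separately, for TRG and HDL-C). A minimal sketch of that adjustment using the residual method, on synthetic data rather than the study's data:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing out the control
    variables from both (the adjustment a partial correlation performs)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic illustration: two variables both driven by a shared
# confounder (here labeled "bp") plus independent noise.
rng = np.random.default_rng(1)
bp = rng.normal(size=20000)
ldl = bp + 0.1 * rng.normal(size=20000)     # hypothetical "LDL"
tlfr = bp + 0.1 * rng.normal(size=20000)    # hypothetical "TLFR"
raw = float(np.corrcoef(ldl, tlfr)[0, 1])   # large raw correlation
adjusted = partial_corr(ldl, tlfr, [bp])    # near zero after adjustment
```

In the study the adjusted correlations remained significant, indicating the LDL-fat-ratio association is not explained away by blood pressure alone.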

Keywords: children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio

Procedia PDF Downloads 108
687 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma

Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto

Abstract:

Introduction: Human and canine melanoma have common clinical and histologic characteristics, making the dog a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor-microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, differential gene expression in spontaneously occurring canine melanoma and paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture greater genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), on the MiSeq platform (Illumina). These kits detect the expression profiles of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with CLC Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analyses were carried out both by comparing the biological groups (tumoral vs. healthy tissues) and by comparing each neoplastic tissue with its paired healthy tissue; a fold change greater than two and a p-value less than 0.05 were set as the thresholds for selecting genes of interest. Results and Discussion: Using HIO, 63 down-regulated genes were detected, 13 of which were also down-regulated in the paired comparison of neoplastic vs. healthy tissue. Eighteen genes were up-regulated, 14 of which were also up-regulated in the paired comparison. Using MIO, 35 down-regulated genes were detected; only four of these were also down-regulated in the paired comparison. Twelve genes were up-regulated in both types of analysis. Across the two kits, the greatest fold-change variation was among up-regulated genes. Dogs displayed greater genetic homology with humans than with mice; moreover, the results showed that the two kits detect different genes. Most of these genes have specific cellular functions or belong to particular enzymatic categories; some have already been described as correlated with human melanoma, confirming the validity of the dog as a model for studying the molecular aspects of human melanoma.
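The fold-change and p-value thresholds above can be sketched as a simple filter. The gene names and values below are hypothetical, and treating a fold change below 1/threshold as down-regulation is an assumed (though common) convention:

```python
def classify_genes(results, fc_threshold=2.0, p_threshold=0.05):
    """Split genes into up-/down-regulated using the abstract's thresholds
    (fold change > 2, p < 0.05). `results` maps gene name to
    (fold_change, p_value); fc <= 1/threshold counts as down-regulation."""
    up = [g for g, (fc, p) in results.items()
          if fc >= fc_threshold and p < p_threshold]
    down = [g for g, (fc, p) in results.items()
            if fc <= 1 / fc_threshold and p < p_threshold]
    return sorted(up), sorted(down)

demo = {  # hypothetical values, for illustration only
    "CD274": (3.1, 0.01),   # large fold change, significant -> up
    "IL6":   (0.4, 0.02),   # strong reduction, significant -> down
    "ACTB":  (1.1, 0.80),   # unchanged
    "TGFB1": (2.5, 0.20),   # large fold change but not significant
}
up_genes, down_genes = classify_genes(demo)
```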

Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq

Procedia PDF Downloads 193
686 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties

Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts

Abstract:

Bismuth selenide (Bi₂Se₃) is a narrow-band-gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Its unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied to a wide range of thermoelectric applications, as well as broadband photodetectors and near-infrared sensors. Nanostructuring this material improves its TI properties by suppressing the bulk conductivity and enhances its TE properties through increased phonon scattering at nanoscale grains and interfaces. Regarding TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, plays a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, or of a combination of these with planar nanostructures, show TE properties significantly enhanced in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled growth of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts, and nanowires can be used for investigations of TI properties, while layers of merged planar and/or randomly oriented Bi₂Se₃ nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp) equipped with an inert gas supply and a pressure/temperature control system. Bi₂Se₃ nanostructures and nanostructured layers of the desired type were obtained by adjusting the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selecting the deposition substrate (glass, quartz, mica, indium tin oxide, graphene, and carbon nanotubes). The morphology, structure, and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX, and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during synthesis, together with the choice of deposition substrate, significantly influences the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.

Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition

Procedia PDF Downloads 228
685 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation, and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity, and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveys have shown inconsistencies in distress classification and measurement, which can impact the planning of pavement maintenance, rehabilitation, and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify, and quantify pavement distresses through processes that require no or minimal human intervention, principally through the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment.
Variations in types and severity of distresses, different pavement surface textures and colors and presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and to predict pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
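One plausible form for the crack density index discussed above is a severity-weighted crack length per unit surveyed area. The severity weights below are illustrative assumptions for a sketch, not values from an established protocol:

```python
from dataclasses import dataclass

@dataclass
class Crack:
    length_m: float
    severity: str        # "low" | "medium" | "high"

# Assumed weights; a real protocol would calibrate these against
# observed pavement condition and performance data.
SEVERITY_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 3.0}

def crack_density(cracks, surveyed_area_m2):
    """Severity-weighted crack length per unit surveyed area (m/m^2)."""
    weighted_length = sum(SEVERITY_WEIGHT[c.severity] * c.length_m
                          for c in cracks)
    return weighted_length / surveyed_area_m2

# Example: hypothetical output of an automated survey of a 100 m^2 section
section = [Crack(10.0, "low"), Crack(5.0, "high")]
density = crack_density(section, 100.0)
```

Tracking this index per pavement section over successive surveys would give the consistent condition trend the paper argues automated surveys can provide.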

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 183
684 Radioprotective Efficacy of Costus afer against the Radiation-Induced Hematology and Histopathology Damage in Mice

Authors: Idowu R. Akomolafe, Naven Chetty

Abstract:

Background: The widespread medical application of ionizing radiation has raised public concern about radiation exposure and the associated cancer risk. The production of reactive oxygen species and free radicals as a result of radiation exposure can severely damage the deoxyribonucleic acid (DNA) of cells, leading to biological effects. Radiotherapy is an excellent modality for the treatment of cancerous cells but comes with a few challenges; a significant one is the exposure of healthy cells surrounding the tumour to radiation. The last few decades have witnessed much attention shifting to plants, herbs, and natural products as alternatives to synthetic compounds for radioprotection. Thus, this study investigated the radioprotective efficacy of Costus afer against whole-body radiation-induced haematological and histopathological disorders in mice. Materials and Method: Fifty-four mice were randomly divided into nine groups. Animals were pretreated with the extract of Costus afer by oral gavage for six days before irradiation. Controls: 6 mice received feed and water only; 6 mice received feed, water, and 3 Gy; 6 mice received feed, water, and 6 Gy. Experimental: in addition to feed and water, 6 mice received 250 mg/kg extract; 6 mice received 500 mg/kg extract; 6 mice received 250 mg/kg extract and 3 Gy; 6 mice received 500 mg/kg extract and 3 Gy; 6 mice received 250 mg/kg extract and 6 Gy; and 6 mice received 500 mg/kg extract and 6 Gy. Irradiation was done at the Radiotherapy and Oncology Department of Grey's Hospital using a linear accelerator (LINAC). Thirty-six mice were sacrificed by cervical dislocation 48 hours after irradiation, and blood was collected for haematology tests. The livers and kidneys of the sacrificed mice were surgically removed for histopathology tests. The remaining eighteen mice were used for mortality and survival studies. Data were analysed by one-way ANOVA, followed by Tukey's multiple comparison test. Results: Prior administration of Costus afer extract decreased the symptoms of radiation sickness and caused a significant delay in mortality among the experimental mice. The first mortality was recorded on day 5 post-irradiation, in the group that received 6 Gy but no extract. There was significant protection against hematopoietic and gastrointestinal damage in the experimental mice, as demonstrated by their blood counts compared with the controls; the protection was seen in the increased blood counts of the experimental animals and in the number of survivors. The protection offered by Costus afer may be due to its ability to scavenge free radicals and repair the gastrointestinal and bone marrow damage produced by radiation. Conclusions: The study has demonstrated that exposure of mice to radiation can cause modifications in haematological and histopathological parameters. However, these changes were relieved by the methanol extract of Costus afer, probably through its free radical scavenging and antioxidant properties.

Keywords: costus afer, hematological, mortality, radioprotection, radiotherapy

Procedia PDF Downloads 137
683 Qualitative Characterization of Proteins in Common and Quality Protein Maize Corn by Mass Spectrometry

Authors: Benito Minjarez, Jesse Haramati, Yury Rodriguez-Yanez, Florencio Recendiz-Hurtado, Juan-Pedro Luna-Arias, Salvador Mena-Munguia

Abstract:

During the last decades, the world has experienced rapid industrialization and an expanding economy, favoring a demographic boom. As a consequence, countries around the world have focused on developing new strategies related to the production of different farm products in order to meet future demands, seeking to improve the major food products for both humans and livestock. Corn, after wheat and rice, is the third most important crop globally and is the primary food source for both humans and livestock in many regions around the globe. In addition, maize (Zea mays) is an important source of protein, accounting for up to 60% of the daily human protein supply. Generally, many cereal grains have proteins with relatively low nutritional value when compared with proteins from meat. In the case of corn, much of the protein is found in the endosperm (75 to 85%) and is deficient in two essential amino acids, lysine and tryptophan. This deficiency results in an imbalance of amino acids and low protein content; normal maize varieties have less than half of the recommended amino acids for human nutrition. In addition, studies have shown that this deficiency is associated with symptoms of growth impairment, anemia, hypoproteinemia, and fatty liver. Because most of the presently available maize varieties do not contain the quality and quantity of protein necessary for a balanced diet, different countries have focused research on quality protein maize (QPM). Researchers have characterized QPM, noting that these varieties may contain 70 to 100% more of the amino acid residues essential for animal and human nutrition, lysine and tryptophan, than common corn. Several countries in Africa and Latin America, as well as China, have incorporated QPM in their agricultural development plans, with large parts of these countries choosing a specific QPM variety based on local needs and climate. Reviews have described the breeding methods of maize and have revealed a lack of studies on the genetic and proteomic diversity of proteins in QPM varieties and their genetic relationships with normal maize varieties. Therefore, molecular marker identification using tools such as mass spectrometry may accelerate the selection of plants that carry the desired proteins with high lysine and tryptophan concentrations. To date, QPM lines have played a very important role in alleviating malnutrition, and better characterization of these lines would provide a valuable nutritional enhancement for the resource-poor regions of the world. Thus, the objective of this study was to identify proteins in QPM maize in comparison with a common maize line as a control.

Keywords: corn, mass spectrometry, QPM, tryptophan

Procedia PDF Downloads 284
682 Gender Specific Differences in Clinical Outcomes of Knee Osteoarthritis Treated with Micro-Fragmented Adipose Tissue

Authors: Tiffanie-Marie Borg, Yasmin Zeinolabediny, Nima Heidari, Ali Noorani, Mark Slevin, Angel Cullen, Stefano Olgiati, Alberto Zerbi, Alessandro Danovi, Adrian Wilson

Abstract:

Knee Osteoarthritis (OA) is a critical cause of disability globally. In recent years, there has been growing interest in non-invasive treatments, such as intra-articular injection of micro-fragmented fat (MFAT), showing great potential in treating OA. Mesenchymal stem cells (MSCs), originating from pericytes of micro-vessels in MFAT, can differentiate into mesenchymal lineage cells such as cartilage, osteocytes, adipocytes, and osteoblasts. Secretion of growth factor and cytokines from MSCs have the capability to inhibit T cell growth, reduced pain and inflammation, and create a micro-environment that through paracrine signaling, can promote joint repair and cartilage regeneration. Here we have shown, for the first time, data supporting the hypothesis that women respond better in terms of improvements in pain and function to MFAT injection compared to men. Historically, women have been underrepresented in studies, and studies with both sexes regularly fail to analyse the results by sex. To mitigate this bias and quantify it, we describe a technique using reproducible statistical analysis and replicable results with Open Access statistical software R to calculate the magnitude of this difference. Genetic, hormonal, environmental, and age factors play a role in our observed difference between the sexes. This observational, intention-to-treat study included the complete sample of 456 patients who agreed to be scored for pain (visual analogue scale (VAS)) and function (Oxford knee score (OKS)) at baseline regardless of subsequent changes to adherence or status during follow-up. We report that a significantly larger number of women responded to treatment than men: [90% vs. 60% change in VAS scores with 87% vs. 65% change in OKS scores, respectively]. Women overall had a stronger positive response to treatment with reduced pain and improved mobility and function. 
Pre-injection, our cohort of women were in more pain and had worse joint function, a pattern commonly seen in orthopaedics. However, during the 2-year follow-up, they consistently maintained a lower incidence of discomfort and superior joint function. These data identify a clear need for further studies to determine the cellular, molecular, and other bases of these differences, so that this information can be used for stratification in order to improve outcomes for both women and men.
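The headline result above (90% of women vs. 60% of men responding on the VAS) is the kind of comparison a standard two-proportion z-test quantifies. A minimal stdlib-only Python sketch (the abstract's analysis was done in R); the group sizes below are hypothetical, since the abstract does not report the male/female split of the 456 patients:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split of the 456 patients: 225/250 women vs. 124/206 men responding
z, p = two_proportion_z(x1=225, n1=250, x2=124, n2=206)
```

With a gap this large, even a conservative split yields a vanishingly small p-value, consistent with the abstract's claim of a significant sex difference.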

Keywords: gender differences, micro-fragmented adipose tissue, knee osteoarthritis, stem cells

Procedia PDF Downloads 180
681 Prolactin and Its Abnormalities: Its Implications on the Male Reproductive Tract and Male Factor Infertility

Authors: Rizvi Hasan

Abstract:

Male factor infertility due to abnormalities in prolactin levels is encountered in a significant proportion of patients. This case-control study was carried out to determine the effects of prolactin abnormalities in otherwise normal males with infertility, recruiting 297 infertile male patients with informed written consent. All underwent a basic seminal fluid analysis (BSA) and endocrine profiling of FSH, LH, testosterone, and prolactin (PRL) using the random-access chemiluminescent immunoassay method (normal range 2.5-17 ng/ml). Age-, weight-, and height-matched voluntary controls were recruited for comparison. None of the cases had anatomical, medical, or surgical disorders related to infertility. Among the controls: mean age 33.2 ± 5.2 yrs, BMI 21.04 ± 1.39 kg m⁻², BSA 34×10⁶, number of children fathered 2 ± 1, PRL 6.78 ± 2.92 ng/ml. Of the 297 patients, 28 were hyperprolactinaemic while one was hypoprolactinaemic. All the hyperprolactinaemic patients had oligoasthenospermia, abnormal morphology, and decreased viability. Serum testosterone levels were markedly lowered in 26 (92.86%) of the hyperprolactinaemic subjects. In the other 2 hyperprolactinaemic subjects and the single hypoprolactinaemic subject, serum testosterone levels were normal. FSH and LH were normal in all patients. The 29 male patients with abnormal serum PRL profiles were followed up for 12 months. The 28 patients with hyperprolactinaemia were treated with oral bromocriptine at a dose of 2.5 mg twice daily. The hypoprolactinaemic patient defaulted on treatment. Follow-up showed that 19 (67.86%) of the treated patients responded after 3 months of therapy, while 4 (14.29%) showed improvement after approximately 6 months of bromocriptine therapy. One patient responded after 1 year of therapy, while 2 patients showed improvements, although not up to normal levels, within the same period. Response to treatment was assessed by improvement in BSA parameters. 
Prolactin abnormalities affect the male reproductive system and semen parameters, necessitating further studies to ascertain the exact role of prolactin in the male reproductive tract. A parallel study was carried out on 200 male white rats that were grouped and subjected to variations in their serum PRL levels. At the end of 100 days of treatment, these rats' male reproductive tracts were studied morphologically. Varying morphological changes, depending on the PRL changes induced, were evident. Notable changes were arrest of spermatogenesis at the spermatid stage, reduced testicular cellularity, and a reduction in the microvilli of the pseudostratified epithelial lining of the epididymis, while measurement of the tubular diameter showed a 30% reduction compared to normal tissue. There were no changes in the vas deferens, seminal vesicles, or prostate. It is evident that both hyperprolactinaemia and hypoprolactinaemia have a direct effect on the morphology and function of the male reproductive tract. The morphological studies carried out on the groups of rats subjected to variations in PRL levels could form the basis for understanding infertility in human males.

Keywords: male factor infertility, morphological studies, prolactin, seminal fluid analysis

Procedia PDF Downloads 341
680 Impact of an Educational Intervention on Knowledge, Attitude and Practices of Community Members on Schistosomiasis in Nelson Mandela Bay

Authors: Prince S. Campbell, Janine B. Adams, Melusi Thwala, Opeoluwa Oyedele, Paula E. Melariri

Abstract:

Schistosomiasis, often known as bilharzia, is a parasitic water-borne disease caused by trematode flatworms of the genus Schistosoma. Schistosomiasis infection and prevention have been found to be influenced by a range of socio-cultural risk factors, including human characteristics (e.g., gender, age, education, knowledge, attitudes, and practices), as well as environmental and economic elements. Lack of awareness of the disease may also contribute to an individual's tendency to engage in behaviours or activities that heighten their susceptibility to infection. The current study assessed community knowledge, attitudes, and practices (KAP) on schistosomiasis and implemented an educational intervention following the pre-test interviews. A cross-sectional quasi-experimental research design was used in this quantitative study. Pre- and post-intervention surveys in interview format were conducted using a structured questionnaire, targeting individuals aged 18-65 years residing within 5 km of selected water bodies. The questionnaire contained 54 close-ended questions about schistosomiasis causes, transmission, and clinical symptoms, and the participants were interviewed face-to-face in their homes. Data were captured on QuestionPro and analyzed using Microsoft Office Excel 365 (2019) and R (version 4.3.1). Overall, 380 individuals completed the pre- and post-intervention assessments; 194 (51.1%) were males and 185 (48.7%) females. A notable 91.3% of participants did not know about schistosomiasis in the pre-intervention phase; however, the mean post-intervention knowledge score (9.4 ± 1.4) was higher than the pre-intervention score (2.2 ± 2.1), indicating improved knowledge of schistosomiasis among the participants. Furthermore, the paired-samples t-test demonstrated that the increase in knowledge levels was statistically significant (p<0.001). 
Also, the post-intervention improvement of both practice (p<0.001) and attitude (p<0.001) levels was statistically significant. A positive correlation (r=0.23, p<0.001) was found between knowledge and attitude in the pre-intervention stage. Knowledgeable participants had a more positive attitude towards obtaining medical assistance and disease prevention. Moreover, attitudes and practices correlated negatively (r=-0.13, p=0.013) post-intervention; hence, those with positive attitudes did not engage in risky water-related practices, which was the desired outcome. The educational intervention had a favourable impact on the KAP of the study population as the majority were able to recall the disease aetiology, symptoms, transmission pattern, and preventative measures three months post-intervention. Nevertheless, previous research has suggested that participants were unable to recall information about the disease following the intervention. Consequently, research should prioritize behavioural modification strategies that may result in a more persistent outcome in terms of the participants' knowledge, which could ultimately contribute to the development of long-term positive attitudes and practices.
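The pre/post knowledge comparison above rests on a paired-samples t-test. A minimal stdlib-only Python sketch (the study itself used R 4.3.1); the score lists are illustrative, chosen to echo the reported means (pre ≈ 2.2, post ≈ 9.4), not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: mean within-subject difference
    divided by its standard error. Returns (t, degrees of freedom)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Illustrative 10-point knowledge scores for 8 hypothetical participants
pre = [2, 0, 4, 1, 3, 5, 2, 1]
post = [9, 10, 8, 10, 9, 10, 9, 8]
t, df = paired_t(pre, post)
```

A t value this large on 7 degrees of freedom corresponds to p well below 0.001, the kind of result the abstract reports.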

Keywords: educational intervention, knowledge, attitudes and practices, schistosomiasis

Procedia PDF Downloads 11
679 Urban Park Characteristics Defining Avian Community Structure

Authors: Deepti Kumari, Upamanyu Hore

Abstract:

Cities are an example of a human-modified environment with few fragments of urban green space, which are widely considered important for urban biodiversity. The study aims to assess avifaunal diversity in urban parks in relation to park size and urbanization intensity, and to understand the key factors affecting species composition and structure, as birds are good indicators of a healthy ecosystem and are sensitive to changes in the environment. A 50 m line-transect method was used to survey birds in 39 urban parks in Delhi, India. Habitat variables, including vegetation (percentage of non-native trees, percentage of native trees, top canopy cover, sub-canopy cover, diameter at breast height, ground vegetation cover, shrub height), were measured using the quadrat method along the transects, and disturbance variables (distance from water, distance from road, distance from settlement, park area, visitor rate, and urbanization intensity) were measured using ArcGIS and Google Earth. We analyzed the species data for diversity and richness and explored the relation of diversity and richness to habitat variables using a multi-model inference approach. Diversity and richness differed significantly with park size and urbanization intensity: medium-sized parks supported more diversity, whereas large parks had more richness. However, both diversity and richness declined with increasing urbanization intensity. Canonical correspondence analysis (CCA) revealed that species composition in urban parks was positively associated with tree diameter at breast height and distance from settlement. In the model-selection approach, disturbance variables, especially distance from road, urbanization intensity, and visitor rate, were the best predictors of bird species richness in urban parks. 
In comparison, multiple regression analysis between habitat variables and bird diversity suggested that the native tree species in a park may explain the diversity pattern of birds in urban parks. Feeding guilds such as insectivores, omnivores, carnivores, granivores, and frugivores showed a significant relation with vegetation variables, while carnivorous and scavenging bird species responded mainly to disturbance variables. The study highlights the importance of park size and urbanization intensity in urban areas. It also indicates that distance from settlement, distance from road, urbanization intensity, visitor rate, diameter at breast height, and native tree species can be important determinants of bird richness and diversity in urban parks. The study also concludes that the response of feeding guilds to vegetation and disturbance in urban parks varies. Therefore, we recommend that park size and the surrounding urban matrix be considered in urban design and planning in order to increase bird diversity and richness.

Keywords: diversity, feeding guild, urban park, urbanization intensity

Procedia PDF Downloads 112
677 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the definitive tool for data classification? All current solutions consist of repositioning the variables in a 2D matrix according to their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that instead offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment over a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. 
The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. The comparison still needs to be generalized across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
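The final step described above, mapping a grid of NIC probabilities (one per hyperparameter pair) to grey levels, can be sketched as follows. This is a hypothetical Python illustration only: the abstract does not specify the actual NIC computation, so `toy_nic` below is a stand-in for it:

```python
def toy_nic(h1, h2):
    """Stand-in for a real NIC: any function of the two hyperparameters
    that returns a probability in [0, 1). Purely illustrative."""
    return (h1 * h2) % 1.0

def nic_image(h1_values, h2_values, nic=toy_nic):
    """Evaluate a NIC over a grid of hyperparameter pairs and map each
    probability to an 8-bit grey level (0-255), yielding one image row
    per h1 value."""
    return [[round(nic(h1, h2) * 255) for h2 in h2_values]
            for h1 in h1_values]

# 100x100 grey-level image for one variable, on an arbitrary grid
img = nic_image([0.1 * i for i in range(100)],
                [0.1 * j for j in range(100)])
```

Stacking such per-NIC grids (and their AND/OR/XOR combinations) is what would produce the large multi-channel images the abstract describes.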

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 120
677 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade

Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria

Abstract:

In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on teaching and learning place value through children's literature. Subsequently, the effect of this approach on conceptual understanding of the concept among first graders (6-7 years old) was studied. The current project aims to create a series of children's books to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than a more typical procedural understanding. Since no educational materials or children's books currently exist to achieve these goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in learning and teaching mathematical concepts that can be challenging within the mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse, with the aim of addressing various mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most. 
From these data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions are preliminary to the implementation of our approach, namely (1) which mathematical concepts are considered the most "difficult to teach" by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires built with the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data collected in the fall of 2022 will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Because this ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step toward the two ultimate goals of the research project: improving elementary school students' learning in numeracy, and contributing to the professional development of elementary school teachers.

Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics

Procedia PDF Downloads 86
676 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by the deposition of lipid and fibrotic elements in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. Cardiovascular outcomes frequently recur after acute myocardial infarction; in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate, and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1, which promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after acute myocardial infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label, blinded-endpoint) study (ClinicalTrials.gov identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, as well as ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical, inflammatory; intermediate, phagocytic; and nonclassical, anti-inflammatory) were identified, quantified, and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5, and CX3CR1) was also evaluated in the mononuclear cells. 
Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p<0.0001, Friedman test), with no differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, with no difference for CCR2 (p<0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p=0.003, p<0.0001, and p=0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, with no difference for CX3CR1 (p<0.0001, p=0.009, and p=0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and of chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the recommended treatments for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.
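The repeated measures above (baseline, one month, six months) were compared with the Friedman test, which ranks each subject's three values and tests whether the rank sums differ across time points. A minimal stdlib-only Python sketch of the statistic, ignoring tied ranks for simplicity; the data below are illustrative, not the study's:

```python
def friedman_statistic(blocks):
    """Friedman chi-square for n subjects x k repeated measurements.
    blocks: one list of k values per subject. Ties are not rank-averaged
    in this simplified sketch."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        # rank this subject's k values from smallest (1) to largest (k)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums)
            - 3 * n * (k + 1))

# Six hypothetical subjects whose values rise monotonically over the
# 3 time points, giving maximal separation of the rank sums:
chi2 = friedman_statistic([[1, 2, 3]] * 6)
```

The statistic is compared against a chi-square distribution with k-1 degrees of freedom; perfectly monotone data like this toy example give the maximum possible value for n=6, k=3.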

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 114
675 Investigation of a Technology-Enabled Model of Home Care: The eShift Model of Palliative Care

Authors: L. Donelle, S. Regan, R. Booth, M. Kerr, J. McMurray, D. Fitzsimmons

Abstract:

Palliative home health care provision within the Canadian context is challenged by: (i) a shortage of registered nurses (RNs) and of RNs with palliative care expertise, (ii) an aging population, (iii) reliance on unpaid family caregivers to sustain home care services, with limited support for this 'care work', (iv) a model of healthcare that assumes client self-care, and (v) competing economic priorities. In response, an interprofessional team of service provider organizations, a software/technology provider, and health care providers developed and implemented a technology-enabled model of home care, the eShift model of palliative home care (eShift). The eShift model combines communication and documentation technology with non-traditional utilization of health human resources to meet patients' needs for palliative care in the home. The purpose of this study was to investigate the structure, processes, and outcomes of the eShift model of care. Methodology: Guided by Donabedian's evaluation framework for health care, this qualitative-descriptive study investigated the structure, processes, and outcomes of care in the eShift model of palliative home care. Interviews and focus groups were conducted with health care providers (n=45), decision-makers (n=13), technology providers (n=3), and family caregivers (n=8). Interviews were recorded and transcribed, and a deductive analysis of transcripts was conducted. Study findings: (1) Structure: The eShift model consists of a remotely situated RN who uses technology to direct care provision virtually to patients in their homes. The remote RN is connected virtually to a health technician (an unregulated care provider) in the patient's home using real-time communication. The health technician uses a smartphone modified with the eShift application and communicates with the RN, who uses a computer with the eShift application/dashboard. Documentation and communication about patient observations and care activities occur in the eShift portal. 
The RN is typically accountable for four to six health technicians and patients over an 8-hour shift. The technology provider was identified as an important member of the healthcare team. Other members of the team include family members, care coordinators, nurse practitioners, physicians, and allied health professionals. (2) Processes: Conventionally, the patient's needs are the focus of care; within eShift, however, both the patient and the family caregiver were the focus of care. Enhanced medication administration was seen as one of the most important processes, and family caregivers reported high satisfaction with the care provided. There was perceived enhanced teamwork among health care providers. (3) Outcomes: Patients were able to die at home. The eShift model enabled consistency and continuity of care, effective management of patient symptoms, and caregiver respite. Conclusion: More than a technology solution, the eShift model of care was viewed as transforming home care practice and as an innovative way to address the shortage of palliative care nurses within home care.

Keywords: palliative home care, health information technology, patient-centred care, interprofessional health care team

Procedia PDF Downloads 412
674 Process Safety Management Digitalization via SHEQTool based on Occupational Safety and Health Administration and Center for Chemical Process Safety, a Case Study in Petrochemical Companies

Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili

Abstract:

More than ever, digitization is an imperative for businesses to keep their competitive advantages, foster innovation, and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique full-stack application, called SHEQTool, which is entirely dynamic and based on our extensive expertise, experience, and client feedback, to support business processes, particularly operations safety management. We used our best knowledge and the scientific methodologies published in CCPS and OSHA guidelines to streamline operations and integrate them into task management within petrochemical companies. We digitalized their main process safety management system elements and sub-elements, such as hazard identification and risk management, training and communication, inspection and audit, management of critical changes, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response plan, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needed by users. 
We reviewed feedback from the main actors within a petrochemical plant, which highlights improved business performance and productivity as well as tracking of their functions' key performance indicators (KPIs), because the tool: 1) saves time, resources, and the costs of paperwork (digitalization); 2) reduces errors and improves performance within the management system by covering most daily software needs of the organization, reducing the complexity and associated costs of numerous tools and their required training (one-tool approach); 3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and flowcharting); 4) helps the entire enterprise remain resilient to changes in processes, technologies, and assets with minimum cost (organizational resilience); 5) significantly reduces incidents and errors via world-class safety management programs and elements (simplification); 6) gives companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (proper methodologies); and 7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, and legal regulations through a PDCA approach (compliance).

Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS

Procedia PDF Downloads 58
673 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of transcranial magnetic stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with multiple sclerosis (MS). Method: Clinical crossover study in which six adult individuals diagnosed with MS and spasticity in the lower limbs were randomized to receive one session each of high-frequency (≥5 Hz) and low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between sessions. Spasticity was assessed with the Ashworth scale, and the latency time (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscle were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05. Results: All statistical analyses were performed using SPSS Statistics version 26, with significance set at p<0.05; normality was checked with the Shapiro-Wilk test. Parametric data were represented as mean and standard deviation; non-parametric variables as median and interquartile range; and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed using the Ashworth scale for the 1 Hz (p=0.813) or 5 Hz (p=0.232) protocols for either limb. Motor evoked potential latency time: in the 5 Hz protocol, there was no significant change for the contralateral side from pre- to post-treatment (p>0.05), while for the ipsilateral side there was a decrease in latency time of 0.07 seconds (p<0.05); in the 1 Hz protocol there was an increase of 0.04 seconds in latency time (p<0.05) for the side contralateral to the stimulus, and for the ipsilateral side a decrease in latency time of 0.04 seconds (p<0.05), with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups. 
Central motor conduction time: in the 1 Hz protocol, there was no change for the contralateral (p>0.05) or ipsilateral (p>0.05) side. In the 5 Hz protocol, there was a small decrease in conduction time for the contralateral side (p<0.05) and a decrease of 0.6 seconds for the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 83
672 Gender and Asylum: A Critical Reassessment of the Case Law of the European Court of Human Rights and of United States Courts Concerning Gender-Based Asylum Claims

Authors: Athanasia Petropoulou

Abstract:

While there is a common understanding that a person’s sex, gender, gender identity, and sexual orientation shape every stage of the migration experience, theories of international migration had until recently not been focused on exploring and incorporating a gender perspective in their analysis. In a similar vein, refugee law has long been the object of criticisms for failing to recognize and respond appropriately to women’s and sexual minorities’ experiences of persecution. The present analysis attempts to depict the challenges faced by the European Court of Human Rights (ECtHR) and U.S. courts when adjudicating in cases involving asylum claims with a gendered perspective. By providing a comparison between adjudicating strategies of international and national jurisdictions, the article aims to identify common or distinctive approaches in addressing gendered based claims. The paper argues that, despite the different nature of the judicial bodies and the different legal instruments applied respectively, judges face similar challenges in this context and often fail to qualify and address the gendered dimensions of asylum claims properly. The ECtHR plays a fundamental role in safeguarding human rights protection in Europe not only for European citizens but also for people fleeing violence, war, and dire living conditions. However, this role becomes more difficult to fulfill, not only because of the obvious institutional constraints but also because cases related to claims of asylum seekers concern a domain closely linked to State sovereignty. Amid the current “refugee crisis,” risk assessment performed by national authorities, like in the process of asylum determination, is shaped by wider geopolitical and economic considerations. The failure to recognize and duly address the gendered dimension of non - refoulement claims, one of the many shortcomings of these processes, is reflected in the decisions of the ECtHR. As regards U.S. case law, the study argues that U.S. 
courts either fail to draw any connection between asylum claims and their gendered dimension or tend to approach gender-based claims through the lens of the “political opinion” or “membership of a particular social group” grounds for fear of persecution. This exercise becomes even more difficult given that U.S. asylum law inadequately qualifies gender-based claims. The paper calls for more sociologically informed decision-making practices and for a more contextualized and relational approach in the assessment of the risk of ill-treatment and persecution. Such an approach is essential for unearthing the gendered patterns of persecution and effectively addressing related claims, thus securing the human rights of asylum seekers.

Keywords: asylum, European court of human rights, gender, human rights, U.S. courts

Procedia PDF Downloads 107
671 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means of measuring sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared with each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as the number of runs of four or more consecutive wake epochs between sleep onset and termination. 
Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch-by-epoch to calculate the agreement between the two devices using Cohen’s Kappa. All analyses were performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias relative to the sleep logs was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 showed high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
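The two analysis rules described above, Cohen's Kappa on epoch-by-epoch scorings and the "four or more consecutive wake epochs" definition of an awakening, can be sketched in a few lines. This is an illustrative re-implementation, not the authors' Matlab/SPSS code, and the hypnograms below are invented:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Epoch-by-epoch agreement between two scorings, corrected for chance."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    fa, fb = Counter(a), Counter(b)
    expected = sum(fa[k] * fb.get(k, 0) for k in fa) / n ** 2
    return (observed - expected) / (1 - expected)

def count_awakenings(epochs, min_run=4):
    """Awakening = a run of >= min_run consecutive 'W' epochs
    between sleep onset and sleep termination."""
    asleep = [i for i, e in enumerate(epochs) if e != "W"]
    if not asleep:
        return 0
    count = run = 0
    for e in epochs[asleep[0]:asleep[-1] + 1]:
        run = run + 1 if e == "W" else 0
        if run == min_run:
            count += 1
    return count

# Invented 30-second-epoch hypnograms (W, combined N1/N2, N3, REM)
dreem    = ["W", "N1N2", "N1N2", "N3", "N3", "REM",  "REM", "W"]
zmachine = ["W", "N1N2", "N3",   "N3", "N3", "N1N2", "REM", "W"]
print(round(cohens_kappa(dreem, zmachine), 3))  # → 0.667
```

Kappa discounts the agreement expected from each device's marginal stage frequencies, which is why it is preferred over raw percent agreement for this kind of comparison.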

Keywords: DREEM, EEG, sleep monitoring, Z-machine

Procedia PDF Downloads 105
670 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has risen. One such category is neurological disorders, which are rampant among the elderly population and increasing at an alarming rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder in such patients; it affects the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, but tremor can also occur on its own as pure (essential) tremor. Patients suffering from tremor face enormous difficulty in performing daily activities and always need a caretaker for assistance. In clinics, tremor is assessed through manual clinical rating tasks such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported a challenge in differentiating a Parkinsonian tremor from pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. 
Four tasks were designed in accordance with the motor section of the Unified Parkinson’s Disease Rating Scale, which is used to assess rest, postural, intentional, and action tremor in such patients. Various features, such as time-frequency domain features, wavelet-based features, and fast Fourier transform based cross-correlation, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantifying the severity of tremor. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing Parkinsonian tremor (PT) from essential tremor (ET). An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers.
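As a rough illustration of such a pipeline (feature extraction from an accelerometer signal followed by a nearest-neighbour decision), the sketch below extracts two simple features, RMS amplitude and dominant frequency, from synthetic single-axis signals and classifies with 1-nearest-neighbour. The signals, frequencies, and template labels are invented for illustration and are not the study's actual feature set or data:

```python
import math

def features(signal, fs):
    """Two toy features: RMS amplitude and dominant frequency (naive DFT)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    rms = math.sqrt(sum(v * v for v in x) / n)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(x))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return (rms, best_k * fs / n)

def nearest_label(train, x):
    """1-nearest-neighbour in feature space (Euclidean distance)."""
    return min(train, key=lambda t: math.dist(t[0], x))[1]

fs, n = 50, 100  # 50 Hz sampling, 2-second window
sine = lambda f, a: [a * math.sin(2 * math.pi * f * i / fs) for i in range(n)]

# Illustrative templates: a slower rest tremor vs. a faster action tremor
train = [(features(sine(5.0, 1.0), fs), "parkinsonian"),
         (features(sine(9.0, 0.6), fs), "essential")]
print(nearest_label(train, features(sine(5.5, 0.9), fs)))  # → parkinsonian
```

A real system would replace the toy features with the wavelet and cross-correlation features named in the abstract and train on clinical recordings, but the classify-by-distance step is the same idea behind the K-nearest neighbors classifier reported here.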

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 151
669 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Particular attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored, and changes in permafrost soil and hydrological operating conditions are accounted for. One of the main reasons for emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement a monitoring system for main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient modelling of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. 
The specifics of such systems include: multi-dimensional modelling of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography data, and data from in-line inspection and instrument measurements. The resulting 3D model serves as the basis of an information system providing the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating the consequences thereof should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 187
668 Empowering Learners: From Augmented Reality to Shared Leadership

Authors: Vilma Zydziunaite, Monika Kelpsiene

Abstract:

In early childhood and preschool education, play has an important role in learning and cognitive processes. In the context of a changing world, personal autonomy and the use of technology are becoming increasingly important for the development of a wide range of learner competencies. By integrating technology into learning environments, the educational reality is changed, promoting novel learning experiences for children through play-based activities. Alongside this, teachers are challenged to develop encouragement and motivation strategies that empower children to act independently. The aim of the study was to reveal the changes in the roles and experiences of teachers when applying AR technology to enrich the learning process. A quantitative research approach was used to conduct the study. The data was collected through an electronic questionnaire. Participants: 319 teachers of 5-6-year-old children using AR technology tools in their educational process. Methods of data analysis: Cronbach's alpha, descriptive statistical analysis, normal distribution analysis, correlation analysis, and regression analysis (SPSS software). Results. The results of the study show a significant relationship between children's learning and the educational process modeled by the teacher. The strongest predictor of child learning was found to be the role of the educator. Other predictors, such as pedagogical strategies, the concept of AR technology, and areas of children's education, had no significant relationship with child learning. The role of the educator was found to be a strong determinant of the child's learning process. Conclusions. The greatest potential for integrating AR technology into the teaching-learning process is revealed in collaborative learning. 
Teachers identified that when integrating AR technology into the educational process, they encourage children to learn from each other, develop problem-solving skills, and create inclusive learning contexts. A significant relationship emerged between the changing role of the teacher and the child's learning style, including the aspiration for personal leadership and responsibility for one's own learning. Teachers identified the following key roles: observer of the learning process, proactive moderator, and creator of the educational context. All these roles enable the learner to become an autonomous and active participant in the learning process. This provides a better understanding and explanation of why it is crucial to empower the learner to experiment, explore, discover, actively create, and engage in collaborative learning in the design and implementation of educational content, and for teachers to integrate AR technologies and apply the principles of shared leadership. No statistically significant relationship was found between the understanding of the definition of AR technology and the teacher’s choice of role in the learning process. However, teachers reported that their understanding of the definition of AR technology influences their choice of role, which in turn has an impact on children's learning.

Keywords: teacher, learner, augmented reality, collaboration, shared leadership, preschool education

Procedia PDF Downloads 34
667 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is the generally accepted term for the process of rapid cooling of a solid that is overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching is to find a way to reduce the total time of the transient process. Computational experiments were performed to simulate quenching by a falling liquid nitrogen film of an extremely overheated vertical copper plate with a structured capillary-porous coating. The coating was produced by directed plasma spraying. Because the physics of quenching is complex, spanning chaotic processes and phase transitions, the mechanism of heat transfer during quenching is still not sufficiently understood. To the best of our knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand. Here we have more models and hypotheses than reliably established facts. The peculiarities of quench front dynamics and heat transfer in the transient process are studied. The numerical model developed here determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientation of crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect is due to the fact that the initialization of a quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. 
Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and cause local liquid-solid micro-contacts to appear even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the change in surface temperature over time. Knowledge of the quench front velocity and the total time of the transient process is required for solving practically important nuclear reactor safety problems.
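Temperature fields of the kind this model computes come from the heat conduction equation. As a minimal illustration only (not the authors' model, which is coupled to boiling heat transfer at a moving front), one explicit finite-difference step of 1D conduction with fixed-temperature ends looks like:

```python
def conduction_step(u, r):
    """One explicit FTCS step of u_t = alpha * u_xx with fixed-value ends.
    r = alpha * dt / dx**2 must satisfy r <= 0.5 for stability."""
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    return ([u[0]] +
            [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

# A hot spot between cold ends diffuses outward
u = [0.0, 0.0, 100.0, 0.0, 0.0]
print(conduction_step(u, 0.25))  # → [0.0, 25.0, 50.0, 25.0, 0.0]
```

Tracking a quench front amounts to coupling a scheme like this to a surface heat-flux boundary condition that switches regime (film boiling vs. liquid contact) as the front passes, which is where the modelling difficulty described above lies.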

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 128
666 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated with local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant research has been conducted on ground motion amplification, revealing the importance of the influence of local geology on ground motion. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from the amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites in the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. 
Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis, the lack of accurate shear wave velocity values, and the nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher in the non-linear analysis than in the equivalent linear analysis. Hence, the nonlinear behavior of the soil underscores the urgent need to study the dynamic characteristics of the soft soil deposits so that site-specific design spectra can be developed for the Kathmandu valley, enabling structures resilient to future damaging earthquakes.
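A first-order check that typically precedes such analyses is the fundamental period of a uniform soil deposit, T = 4H/Vs, where H is the deposit thickness and Vs the shear wave velocity. The values below are illustrative placeholders, not measured Kathmandu parameters:

```python
def fundamental_period(thickness_m, vs_mps):
    """Fundamental site period T = 4H / Vs of a uniform soil deposit (seconds)."""
    return 4.0 * thickness_m / vs_mps

# e.g. a 200 m soft deposit with Vs = 250 m/s
print(fundamental_period(200.0, 250.0))  # → 3.2
```

Ground motions with significant energy near this period are the ones a deposit amplifies most strongly, which is why soft, thick deposits like Kathmandu's fluvio-lacustrine sediments produce long-period amplification.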

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 288
665 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty anticipating issues in the clinical workflow until tests are running late or in error; even then, it can be difficult to pinpoint the cause, and more difficult still to predict issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, the current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error. 
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agent technologies, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
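The traffic-light and status-graph logic described above reduces to simple threshold rules. The sketch below (in Python for readability; the simulator itself is planned in JavaScript) uses hypothetical load-ratio thresholds and invented status labels, standing in for whatever capacity model a laboratory adopts:

```python
def stage_status(queued, capacity, slow=0.7, critical=1.0):
    """Colour-code a workflow stage by its load ratio (thresholds illustrative)."""
    ratio = queued / capacity
    if ratio < slow:
        return "green"   # normal flow
    if ratio < critical:
        return "orange"  # slow flow
    return "red"         # critical flow

def on_time_percentage(specimens):
    """Share of specimens whose status label is 'on time'."""
    return 100.0 * sum(1 for s in specimens if s == "on time") / len(specimens)

print(stage_status(8, 10))  # → orange
print(on_time_percentage(["on time", "on time", "late", "on time"]))  # → 75.0
```

In the real simulator these values would be recomputed on every update from the live Oracle feed and drive the flow-diagram colours and graphs.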

Keywords: laboratory process optimisation, pathology, computer simulation, workflow

Procedia PDF Downloads 284
664 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice

Authors: Joy Chadwick

Abstract:

This Study of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the real life of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual teacher candidate semi-structured interviews. The journal entries were completed at the start of the semester and at the end of the semester with the intent of having teacher candidates reflect on their beliefs of what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidate’s abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates to be proactive rather than reactive when responding to student learning challenges. 
Teacher candidates also articulated the importance of reframing their perspective about students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persisting question for teacher educators exists as to what pedagogical knowledge and understanding is most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing personal attributes and mindsets of teacher candidates regarding children as well as considering instructional frameworks when designing coursework. Further, the alignment of an inclusive education course during a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. Research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.

Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education

Procedia PDF Downloads 65
663 Development and Experimental Evaluation of a Semiactive Friction Damper

Authors: Juan S. Mantilla, Peter Thomson

Abstract:

Seismic events may result in discomfort for the occupants of buildings, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal for a certain range of sliding force, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of having the device's efficiency vary with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary over time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force, trying to keep the device in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost Semiactive Variable Friction Damper (SAVFD), in reduced scale, to reduce the vibrations of structures subject to earthquakes. 
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor which is controlled with (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized. A numerical model of the structure and the SAVFD was developed based on this dynamic characterization. Decentralized control algorithms were modeled and then tested experimentally in shaking table tests using earthquake and frequency chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
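One simple decentralized rule of the kind such a device could implement (purely illustrative; the paper's actual control algorithms are not detailed in this abstract) modulates the clamping force with the locally measured interstorey velocity, saturating at the actuator limit so that the damper tends to stay in the slipping phase rather than locking:

```python
def clamping_command(drift_velocity, f_max=10.0, gain=50.0):
    """Variable-friction command: proportional to |interstorey velocity|,
    saturated at the brake's maximum normal force (all values illustrative)."""
    return min(f_max, gain * abs(drift_velocity))

print(clamping_command(0.01))  # small drift: light clamping, keeps slipping
print(clamping_command(1.0))   # large drift: saturated at the actuator limit
```

Because the rule needs only the sensors on the adjacent floors, each damper can run it independently on its own Arduino board, which is what makes the scheme decentralized.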

Keywords: earthquake response, friction damper, semiactive control, shaking table

Procedia PDF Downloads 377