Search results for: specific emitter identification
2059 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a cylindrical linear oscillating generator for hybrid electric vehicle applications. The focus of the study is to propose an optimal model and design rules for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is initially modeled using an equivalent magnetic circuit that accounts for leakage elements. This topology with permanent magnets in the back-iron translator is described by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of single-phase and three-phase systems is compared while the translator moves one pole pitch forward and backward. Based on this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account through finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the winding conductors, eddy-current loss in the permanent magnets, and iron loss in the specific grade of electrical steel. Thermal performance and mechanical robustness must also be considered, because the high temperatures generated by losses in each region of the generator affect the overall efficiency and the machine insulation. Furthermore, an electric machine with linear oscillating movement requires a support system that can withstand dynamic forces and mechanical masses. Accordingly, a fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics in each region are analyzed as a function of operating frequency. The results of this study provide important design rules for linear oscillating machines.
They enable more accurate machine design and more accurate prediction of machine performance.
Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator
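The equivalent-magnetic-circuit step described above can be sketched numerically. The following is a minimal illustration, not the authors' model: all dimensions, the magnet MMF, and the relative permeability are hypothetical values chosen only to show how a leakage branch enters the circuit.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space [H/m]

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a flux path: R = l / (mu0 * mu_r * A) [A-turns/Wb]."""
    return length_m / (MU0 * mu_r * area_m2)

def parallel(r1, r2):
    """Combine two reluctances whose paths carry flux in parallel."""
    return r1 * r2 / (r1 + r2)

# Hypothetical values for illustration only
F_pm   = 800.0                                 # magnet MMF [A-turns]
R_iron = reluctance(0.20, 2e-3, mu_r=4000.0)   # back-iron path: 20 cm, 20 cm^2
R_gap  = reluctance(1e-3, 2e-3)                # working air gap: 1 mm
R_leak = reluctance(5e-3, 1e-3)                # leakage path around the gap

# Leakage acts as a parallel branch across the air gap, in series with the iron
R_total   = R_iron + parallel(R_gap, R_leak)
phi_total = F_pm / R_total                     # total magnet flux [Wb]
# Flux-divider rule: only part of the flux crosses the working gap
phi_gap   = phi_total * R_leak / (R_gap + R_leak)

leakage_coefficient = phi_total / phi_gap      # ~1.1 with these numbers
```

A finite element model then refines this initial estimate by capturing the saturation effects the lumped circuit ignores.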
Procedia PDF Downloads 196
2058 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio
Authors: Tamal Roy, Anuradha Bhat
Abstract:
Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. The zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of native environment in spatial cognitive abilities. Four populations were collected across India from water bodies with contrasting ecological conditions. Habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on the populations under controlled laboratory conditions. A square spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal ends of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre and three sections separated by two windows. A removable coloured window pane (red, yellow, green, or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber; the position of the food reward and plant within that chamber was fixed. A test fish had to explore the maze by taking turns and locate the food inside the left-hand section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. One test fish at a time was released into the arena and allowed 20 minutes to explore and find the food reward. In this way, individual fishes were trained in the maze to locate the food reward for eight consecutive days.
The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day, following the same protocol, for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on the first day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations; fishes could be expected to use an algorithm (a sequence of turns) or associative cues to locate the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore, and no relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning and stronger memory, while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than the algorithm for learning and remembering, while low habitat-complexity populations used both equally. The study therefore helped clarify the role of natural ecology in explaining variations in spatial learning abilities across populations.
Keywords: algorithm, associative cue, habitat complexity, population, spatial learning
Procedia PDF Downloads 291
2057 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions
Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis
Abstract:
Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction, and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms – vortex generators and plasma actuators – to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular-blade wind turbine model has been developed for use in the University of Glasgow’s de Havilland closed-return, low-speed wind tunnel. The model components – which comprise a half-span blade, hub, nacelle, and tower – are scaled using the equivalent full-span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative methods – smoke visualisation, tufts, and china clay flow visualisation – and quantitative methods – including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA) – were conducted over a range of blade pitch angles from 0 to 15 degrees and a range of Reynolds numbers.
This allowed for the identification of shed vortical structures from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade–nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the shed vorticity from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the design of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken on the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions, with a focus on the accurate geometry of a root region representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade region for the potential use of passive and active flow control devices that contrive to produce a more desirable wake quality.
Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine
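The quoted scaling targets (Re of 1.7×10⁵ to 5.1×10⁵ at tunnel speeds up to 55 m/s) can be sanity-checked with a short script. The chord length and air viscosity below are assumptions for illustration; the abstract does not state the model chord.

```python
NU_AIR = 1.46e-5   # kinematic viscosity of air at ~15 degrees C [m^2/s]

def reynolds(velocity_ms, chord_m, nu=NU_AIR):
    """Chord-based Reynolds number: Re = U * c / nu."""
    return velocity_ms * chord_m / nu

def strouhal(shedding_freq_hz, length_m, velocity_ms):
    """Strouhal number St = f * L / U, used to match shedding dynamics."""
    return shedding_freq_hz * length_m / velocity_ms

chord = 0.14  # hypothetical representative model chord [m]

re_low  = reynolds(18.0, chord)   # ~1.7e5 at a low tunnel speed
re_high = reynolds(55.0, chord)   # ~5.3e5 at the 55 m/s maximum
st      = strouhal(100.0, chord, 55.0)  # hypothetical 100 Hz shedding
```

With a chord around 0.14 m, the stated tunnel speed range spans the quoted Reynolds number band, which is consistent with the scaling described.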
Procedia PDF Downloads 236
2056 On-Farm Evaluation of Fast and Slow Growing Genotypes for Organic and Pasture Poultry Production Systems
Authors: Komala Arsi, Terrel Spencer, Casey M. Owens, Dan J. Donoghue, Ann M. Donoghue
Abstract:
Organic poultry production is becoming increasingly popular in the United States, with an approximately 17% increase in sales of organic meat and poultry in 2016. As per the National Organic Program (NOP), an organic poultry production system should operate according to specific standards, including access to the outdoors. In the United States, organic poultry farmers raise both fast-growing and slow-growing genotypes for alternative production systems. Even though heritage-breed birds grow much more slowly than commercial breeds, many free-range producers believe they are better suited for outdoor production systems. We conducted an on-farm trial on a working pasture poultry farm to compare the performance and meat quality characteristics of a slow-growing heritage breed (Freedom Rangers, FR) and two commonly used fast-growing types of chickens (Cornish Cross, CC, and Naked Neck, NN), raised on pasture in side-by-side pens segregated by breed (n=70/breed). CC and NN birds were reared for eight weeks, whereas FR birds were reared for 10 weeks, and all birds were commercially processed. By the end of the rearing period, the final body weight of the FR birds was significantly lower than that of both fast-growing genotypes (CC and NN). Both CC and NN birds showed significantly higher live weight and carcass weight, as well as fillet, tender, and leg yields (P < 0.05). There was no difference in wing and rack yields among the groups. Meat colour was measured using the CIELAB method and expressed as lightness (L*), redness (a*), and yellowness (b*). The breast meat from FR birds was much redder (higher a* values) and less yellow (lower b* values) than that of both fast-growing types of chickens (P < 0.05).
Overall, the fast-growing genotypes produced higher carcass weights and meat yields than the slow-growing genotype and appear to be an economical option for alternative production systems.
Keywords: fast growing chickens, meat quality, pasture, slow growing chickens
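The CIELAB comparison above reduces each sample to coordinates (L*, a*, b*); a common single-number summary is the CIE76 colour difference ΔE*ab. The coordinate values below are hypothetical, not the measured meat colours, and are shown only to illustrate the computation.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical breast-meat colours: FR redder (higher a*), CC yellower (higher b*)
fr_meat = (52.0, 6.5, 8.0)
cc_meat = (55.0, 3.0, 12.0)

diff = delta_e_ab(fr_meat, cc_meat)  # ~6.1, an easily visible difference
```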
Procedia PDF Downloads 389
2055 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, are today widely used in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra, where they would otherwise superimpose within a single-energy peak and could thus compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when radionuclides are identified and their activity concentrations determined, where high precision is a necessity. In measurements of this nature, in order to produce good and trustworthy results, one must first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curves for a given detector–sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially in older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detectors through the implementation of the Geant4 toolkit developed at CERN is described, with the goal of improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector in the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
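The Monte Carlo approach relied on here ultimately estimates FEP efficiency as (counts in the full-energy peak) / (photons emitted). The toy below is not Geant4: it replaces the full photon-transport physics with a single assumed detection probability, just to show how the estimate and its statistical uncertainty emerge from a simulation.

```python
import random

def simulate_fep_efficiency(n_emitted, p_full_energy, seed=1):
    """Toy Monte Carlo: each emitted photon deposits its full energy
    with probability p_full_energy (a stand-in for real transport physics).
    Returns the efficiency estimate and its binomial standard error."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_emitted) if rng.random() < p_full_energy)
    eff = hits / n_emitted
    err = (eff * (1.0 - eff) / n_emitted) ** 0.5
    return eff, err

# Hypothetical: 100,000 emitted photons, 3% true full-energy probability
eff, err = simulate_fep_efficiency(100_000, 0.03)
```

In a real Geant4 model, the detection probability is not a single constant but follows from the simulated geometry and dead layers, which is why the detector parameters discussed above matter so much.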
Procedia PDF Downloads 123
2054 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea
Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi
Abstract:
Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and there are many tools to prepare and analyze these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in world forecasting centers to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major challenge due to complex topography; there are also different types of climate in these areas. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while that of NCEP/NCAR is six-hourly. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, and different types of precipitation (rain and snow) were considered. The results showed that NCEP/NCAR is better able to demonstrate the intensity of the atmospheric systems, while ERA5 is suitable for extracting parameter values at a specific point. ERA5 is also appropriate for analyzing snowfall events over the CS coast (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it; the sea surface temperature of the NCEP/NCAR product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS coast. However, due to their time lag, they are not suitable for forecast centers; their application is in research and in the verification of meteorological models.
Finally, ERA5 has better resolution with respect to the NCEP/NCAR reanalysis, but the NCEP/NCAR data are available from 1948 and are appropriate for long-term research.
Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow
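Verification of reanalysis products against station observations, as done in this kind of study, typically starts from bias and root-mean-square error. A minimal sketch with invented numbers (not the study's data):

```python
def bias(model, obs):
    """Mean error: positive means the model overestimates on average."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error between model and observed series."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

# Hypothetical 6-hourly precipitation totals [mm] at one coastal station
station   = [12.0, 15.5, 9.8, 20.1]
era5_like = [11.5, 15.0, 10.2, 19.0]

b = bias(era5_like, station)   # -0.425 mm (slight underestimate)
r = rmse(era5_like, station)   # ~0.68 mm
```

The same two statistics can be computed for each reanalysis over the same stations and periods, which makes the resolution and intensity comparisons above quantitative.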
Procedia PDF Downloads 125
2053 Effect of N2-cold Plasma Treatment of Carbon Supports on the Activity of Pt3Pd3Sn2/C Towards the Dimethyl Ether Oxidation
Authors: Medhanie Gebremedhin Gebru, Alex Schechter
Abstract:
Dimethyl ether (DME) possesses several advantages over other small organic molecules such as methanol, ethanol, and ammonia: it provides higher energy density, is less toxic, and shows lower Nafion membrane crossover. However, the absence of an active and stable catalyst has been the bottleneck hindering the commercialization of direct DME fuel cells. A Vulcan XC72 carbon-supported ternary metal catalyst, Pt₃Pd₃Sn₂/C, is reported to have yielded the highest specific power density (90 mW mg⁻¹PGM) compared with other catalysts tested for the direct DME fuel cell (DDMEFC). However, the micropores and sulfur groups present in Vulcan XC72 hinder fuel utilization by causing Pt agglomeration and sulfur poisoning, and Vulcan XC72, having a high carbon sp³ hybridization content, is also prone to corrosion. Therefore, carbon supports such as multi-walled carbon nanotubes (MWCNT), Black Pearl 2000 (BP2000), and their cold-N₂-plasma-treated counterparts were tested to further enhance the activity of the catalyst, and the outputs with these carbons were compared with the originally used support. Detailed characterization of the pristine and treated carbon supports was conducted, and electrochemical measurements were performed in three-electrode cells and laboratory prototype fuel cells. Pt₃Pd₃Sn₂/BP2000 exhibited excellent performance in terms of electrochemically active surface area (ECSA), peak current density (jp), and DME oxidation charge (Qoxi). The effect of the plasma activation on activity improvement was observed only in the case of MWCNT, while it had little or no effect on the other carbons. Pt₃Pd₃Sn₂ supported on an optimized mixture of carbons containing 75% plasma-activated MWCNT and 25% BP2000 (Pt₃Pd₃Sn₂/75M25B) provided the highest reported power density of 117 mW mg⁻¹PGM at an anode loading of 1.55 mgPGM cm⁻².
Keywords: DME, DDMEFC, ternary metal catalyst, carbon support, plasma activation
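Of the metrics compared above, ECSA is commonly derived from the hydrogen-adsorption charge in a cyclic voltammogram using the conventional 210 µC cm⁻² value for polycrystalline Pt. The sketch below uses a hypothetical charge and metal loading, and applying the Pt-only constant to a PtPdSn alloy is itself an approximation.

```python
Q_H_REF = 210e-6  # C per cm^2 of Pt surface (conventional polycrystalline value)

def ecsa_m2_per_g(q_h_coulomb, metal_mass_g, q_ref=Q_H_REF):
    """ECSA from the hydrogen-adsorption charge: area = Q_H / q_ref,
    converted from cm^2 to m^2 and normalized by the metal mass."""
    area_cm2 = q_h_coulomb / q_ref
    return area_cm2 * 1e-4 / metal_mass_g

# Hypothetical: 2.1 mC of H-adsorption charge on a 20 microgram metal loading
ecsa = ecsa_m2_per_g(2.1e-3, 20e-6)  # -> 50 m^2 per gram of metal
```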
Procedia PDF Downloads 146
2052 Targeting the EphA2 Receptor Tyrosine Kinases in Melanoma Cancer, both in Humans and Dogs
Authors: Shabnam Abdi, Behzad Toosi
Abstract:
Background: Melanoma is the most lethal type of malignant skin cancer in humans and dogs, since it spreads rapidly throughout the body. Despite significant advances in treatment, cancer at an advanced stage has a poor prognosis; hence, more effective treatments with fewer side effects are needed to improve outcomes. Erythropoietin-producing hepatocellular (Eph) receptors are the largest family of receptor tyrosine kinases and are divided into two subfamilies, EphA and EphB, both of which play a significant role in disease, especially cancer. Because of their association with proliferation and invasion in many aggressive types of cancer, Eph receptor tyrosine kinases (Eph RTKs) are promising targets for cancer therapy. Since these receptors have not been studied in canine melanoma, we investigated how EphA2 influences the survival and tumorigenicity of melanoma cells. Methods: Expression of EphA2 protein in canine melanoma cell lines and a human melanoma cell line was evaluated by Western blot. Melanoma cells were transduced with lentiviral particles encoding Eph-targeting shRNAs or non-silencing shRNAs (control) to silence expression of the EphA2 receptor, and silencing was confirmed by Western blotting and immunofluorescence. The effects of shRNA treatment on cellular proliferation and invasion were analyzed by Resazurin assay and Matrigel invasion assay, respectively, alongside colony formation and tumorsphere assays. Results: Expression of EphA2 was detected in the canine and human melanoma cell lines. Moreover, stable silencing of EphA2 by specific shRNAs significantly and consistently decreased the expression of EphA2 protein in both human and canine melanoma cells. Proliferation, colony formation, tumorsphere formation, and invasion of melanoma cells were significantly decreased in EphA2 shRNA-treated cells compared to controls.
Conclusion: Our data provide the first functional evidence that the EphA2 receptor plays a critical role in the malignant cellular behavior of melanoma in both humans and dogs.
Keywords: ephA2, targeting, melanoma, human, canine
Procedia PDF Downloads 63
2051 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus
Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze
Abstract:
The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent, where it is intruded into the Baikalian quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variability of mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites, and gabbros of different composition – K-feldspar-, pyroxene-hornblende-, and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive of extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, horizontal tectonic layering of the Earth’s crust of the Central Transcaucasian microcontinent took place in the Saurian orogeny, and it is precisely this fact that explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the crust of this microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement (the ‘inversion’ layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, the crystallization products of which are the pyroxenites and gabbro-pyroxenites. Petrochemical data on the magma attest to its formation in the upper mantle and partially in the ‘crustal asthenolayer’. A newly formed, overheated dry magma with phenocrysts of clinopyroxene and basic plagioclase then intruded into the ‘inversion’ layer. In the new medium it was enriched with volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, intensive transformation of pyroxene to hornblende proceeded in the basic magma.
The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer, without significantly transforming it, and then into the upper sialic layer, where they crystallized at a depth of 7–10 km. By petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to an H (hybrid) type. During the final stage of the magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material, forming anorthoclase or orthoclase. Thus, the so-called ‘orthoclase gabbro’ includes rocks of three genetic groups: 1. the protolith of the gabbroic intrusive; 2. a hybrid rock – K-feldspar gabbro; and 3. a leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. To establish the petrogenetic model of the orthoclase gabbro intrusive formation, LA-ICP-MS U-Pb zircon dating was conducted on all three genetic types of gabbro. The zircon ages of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) record the crystallization time of the intrusive, whereas the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10, and 335±11 Ma) in the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.
Keywords: The Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model
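Ages quoted in the "mean 221.4±1.9 Ma" style are conventionally error-weighted means of individual zircon spot analyses. The spot ages below are hypothetical, chosen only to illustrate the computation, not the study's data.

```python
def weighted_mean_age(spots):
    """Error-weighted mean of (age_Ma, one_sigma_Ma) spot analyses.
    Weights are 1/sigma^2; returns (mean, standard error of the mean)."""
    weights = [1.0 / sigma ** 2 for _, sigma in spots]
    total_w = sum(weights)
    mean = sum(w * age for w, (age, _) in zip(weights, spots)) / total_w
    err = (1.0 / total_w) ** 0.5
    return mean, err

# Hypothetical zircon spot ages [Ma] with 1-sigma uncertainties
spots = [(221.0, 2.5), (222.1, 2.0), (221.5, 3.0)]
mean, err = weighted_mean_age(spots)  # ~221.6 +/- 1.4 Ma
```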
Procedia PDF Downloads 346
2050 Towards Intercultural Competence in EFL Textbook: the Case of ‘New Prospects’
Authors: Kamilia Mebarki
Abstract:
The promotion of intercultural competence plays an important role in foreign language education. The outcome of intercultural educationalists' studies was the adoption of intercultural language learning and a modified version of communicative competence that encompasses an intercultural component enabling language learners to communicate successfully across cultures. Intercultural competence has an even more central role in teaching English as a foreign language (EFL), since such efforts are critical to preparing learners for intercultural communication in our global world. In these efforts, EFL learning materials are a crucial stimulus for developing learners' intercultural competence. There has been continuous interest in the analysis of EFL textbooks by researchers all over the world, and one specific area that has received prominent attention in recent years is how the cultural content of EFL materials promotes intercultural competence. In the Algerian context, research on locally produced EFL textbooks tends to focus on investigating linguistic and communicative competence; the cultural content of the materials has not yet been systematically researched. Therefore, this study contributes to filling this gap by evaluating the locally published EFL textbook ‘New Prospects’ used at the high school level, as well as investigating teachers' views and attitudes on the cultural content of ‘New Prospects’ alongside two other locally produced EFL textbooks, ‘Getting Through’ and ‘At the Crossroad’, also used at the high school level. To estimate the textbook's potential for developing intercultural competence, mixed methods – a combination of quantitative and qualitative data collection – were used: a material evaluation analysed via content analysis, and a survey questionnaire and interviews with teachers. Data collection and analysis were supported by the frameworks developed by the researcher for analysing the textbook, questionnaire, and interviews.
Indeed, based on the literature, three frameworks/models were developed in this study to analyse, on the one hand, the cultural contexts and themes discussed in the material, which play an important role in fostering learners' intercultural awareness, and, on the other hand, to evaluate the textbook's promotion of intercultural competence.
Keywords: intercultural communication, intercultural communicative competence, intercultural competence, EFL materials
Procedia PDF Downloads 97
2049 Information Asymmetry and Governing Boards in Higher Education: The Heat Map of Information Asymmetry Across Competencies and the Role of Training in Mitigating Information Asymmetry
Authors: Ana Karaman, Dmitriy Gaevoy
Abstract:
Successful and effective governing boards play an essential role in higher education by providing oversight and helping to steer the direction of an institution while creating and maintaining a thriving culture of stewardship. A well-functioning board can also help mitigate conflicts of interest, ensure responsible use of an organization's assets, and maintain institutional transparency. However, boards' functions in higher education are inhibited by the presence of information asymmetry between the board and management. Board members typically have little specific knowledge about the business side of higher education in general, and of the institution under their oversight in particular. As a result, boards often must rely on the discretion of the institution's upper administration as to what pertinent information is disclosed to the board. The phenomenon of information asymmetry is not unique to higher education and has been studied previously in the context of both corporate and not-for-profit boards. Various board characteristics have been analyzed with respect to mitigating the information asymmetry between an organizational board and management. For example, it has been argued that board characteristics such as greater size, independence, and a higher proportion of female members tend to reduce information asymmetry by raising levels of information disclosure and organizational transparency. This paper explores the phenomenon of information asymmetry between boards and management in the context of higher education. In our analysis, we propose a heat map of information asymmetry based on categories of board competencies in higher education. The proposed heat map is based on an assessment of the potential risks to both the boards and their institutions, and it employs the assumption that the potential risk created by information asymmetry varies in magnitude across the areas of boards' competencies.
We then explore the role of board members' training in mitigating information asymmetry between the boards and management by increasing the level of information disclosure and enhancing transparency in management communication with the boards. The paper seeks to demonstrate how appropriate training can prepare board members to request a sufficient level of information disclosure and transparency by arming them with knowledge of what questions to ask of the management.
Keywords: higher education, governing boards, information asymmetry, board competencies, board training
Procedia PDF Downloads 72
2048 Providing Tailored as a Human Rights Obligation: Feminist Lawyering as an Alternative Practice to Address Gender-Based Violence Against Women Refugees
Authors: Maelle Noir
Abstract:
International Human rights norms prescribe the obligation to protect refugee women against violence which requires, inter alia, state provision of justiciable, accessible, affordable and non-discriminatory access to justice. However, the interpretation and application of the law still lack gender sensitivity, intersectionality and a trauma-informed approach. Consequently, many refugee survivors face important structural obstacles preventing access to justice and often experience secondary traumatisation when navigating the legal system. This paper argues that the unique nature of the experiences of refugees with gender-based violence against women exacerbated throughout the migration journey calls for a tailored practice of the law to ensure adequate access to justice. The argument developed here is that the obligation to provide survivors with justiciable, accessible, affordable and non-discriminatory access to justice implies radically transforming the practice of the law altogether. This paper, therefore, proposes feminist lawyering as an alternative approach to the practice of the law when addressing gender-based violence against women refugees. First, this paper discusses the specific nature of gender-based violence against refugees with a particular focus on two aspects of the power-violence nexus: the analysis of the shift in gender roles and expectations following displacement as one of the causes of gender-based violence against women refugees and the argument that the asylum situation itself constitutes a form of state-sponsored and institutional violence. Second, the re-traumatising and re-victimising nature of the legal system is explored with the objective to demonstrate States’ failure to comply with their legal obligation to provide refugee women with effective access to justice. 
Third, this paper discusses some key practical strategies that have been proposed and implemented to transform the practice of the law when dealing with gender-based violence outside of the refugee context. Lastly, this analysis is applied to the specificities of the experiences of refugee survivors of gender-based violence.
Keywords: feminist lawyering, feminist legal theory, gender-based violence, human rights law, intersectionality, refugee protection
Procedia PDF Downloads 186
2047 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients
Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid
Abstract:
Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM treatment, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697), in the gene SLC47A1 on the clinical efficacy of metformin in Pakistani T2DM patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the genes. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, heterozygous carriers of one mutant 'T' allele (CT) were fewer among metformin responders than non-responders (29.2% vs. 35.5%). Likewise, efficacy was further reduced (7.2% vs. 4.0%) in homozygotes for two copies of the 'T' allele (TT). Remarkably, T2DM cases with two copies of the 'C' allele (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of one mutant 'A' allele (GA) were fewer among metformin responders than non-responders (33.5% vs. 43.0%). Likewise, efficacy was further reduced (8.5% vs. 4.5%) in homozygotes for two copies of the 'A' allele (AA). Remarkably, T2DM cases with two copies of the 'G' allele (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. 
Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.
Keywords: diabetes, T2DM, SLC47A1, Pakistan, polymorphism
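The "2.11 times more likely" figures above read as odds ratios from the genotype-by-response tables. As a hedged sketch of that computation, the 2x2-table odds ratio can be computed as below; the genotype counts used here are invented for illustration and are not the study's data.

```python
# Hedged sketch: odds ratio for carrying a genotype (e.g. CC) among responders
# vs. non-responders. All counts below are hypothetical, not the study's data.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """2x2 table odds ratio: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical counts of the CC genotype among 400 responders and 400 non-responders
cc_responders, non_cc_responders = 254, 146        # invented
cc_nonresponders, non_cc_nonresponders = 242, 158  # invented

or_cc = odds_ratio(cc_responders, non_cc_responders,
                   cc_nonresponders, non_cc_nonresponders)
print(f"Odds ratio for CC vs. T-carriers: {or_cc:.2f}")
```

In practice such estimates are usually reported with a confidence interval and a p-value (e.g. from a chi-square test or logistic regression), as the abstract's SPSS analysis implies.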
Procedia PDF Downloads 162
2046 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection
Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément
Abstract:
The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring the variation of these species fall mainly into two types. One is based on the identification of molecular differences at the single-cell level, for example flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, or rare circulating tumor cells (CTCs) from blood or bone marrow, for example the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including with aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented, based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. 
Going further, the nanosupported cell technology using redox-labeled aptasensors has been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.
Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars
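The limits of detection quoted above are estimated from calibration data. One common (ICH-style) convention, not necessarily the one the authors used, is LOD = 3.3·σ/slope from a linear calibration curve, sketched below with invented calibration points.

```python
import numpy as np

# Hedged sketch: estimating a limit of detection from a linear calibration
# using the common 3.3*sigma/slope rule. All numbers below are invented for
# illustration and are not the aptasensor data from the abstract.

cells = np.array([0, 5, 10, 20, 40, 80])                  # hypothetical cell counts
signal = np.array([0.02, 0.11, 0.19, 0.41, 0.78, 1.62])   # hypothetical EC signal (a.u.)

slope, intercept = np.polyfit(cells, signal, 1)
residuals = signal - (slope * cells + intercept)
sigma = residuals.std(ddof=2)   # std of regression residuals (2 fitted params)

lod = 3.3 * sigma / slope
print(f"slope = {slope:.4f} a.u./cell, LOD ~ {lod:.1f} cells")
```

The choice of σ (residual standard deviation, blank standard deviation, or intercept uncertainty) changes the estimate, which is why published LOD figures should state the convention used.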
Procedia PDF Downloads 121
2045 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications
Authors: Sung-Won Seo, Jang-Young Choi
Abstract:
The rare-earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows for direct drive without the need for a mechanical transmission. Therefore, the PMLSG is well suited to translational applications, such as wave energy converters and free piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher air-gap flux density than the Vertical array, and its advantages in performance and efficiency are widely known. To verify the advantage of the Halbach array, we apply both a finite element method (FEM) and an analytical method. In general, FEM and analytical methods are used in electromagnetic analysis to determine model characteristics, and FEM is the preferred choice for magnetic field analysis. However, FEM is often slow and inflexible; the analytical method, on the other hand, requires little time and produces an accurate analysis of the magnetic field. Accordingly, the air-gap flux density and the back-EMF were obtained by FEM, and the results from the analytical method correspond well with the FEM results. The Halbach array model shows less copper loss than the Vertical array model because of the Halbach array's high output power density. The Vertical array model has lower core loss than the Halbach array model because of its lower air-gap flux density; as a consequence, the current density in the Vertical model is higher for identical power output. 
The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results; a specific comparative analysis will be presented to determine the best model for application in a wave energy conversion system.
Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method
Procedia PDF Downloads 304
2044 Human Resources Development and Management: A Guide to School Owners
Authors: Charita B. Lasala, Lakambini G. Reluya
Abstract:
The human factor composing the organization is an asset that needs to be managed conscientiously and kept in tune with the organization's needs. Human resources add value to the organization by using their talents, skills, and knowledge to transform its other resources into products and services that generate profits or other valued forms of return. Keeping such employees has always been the main goal of every Human Resources Department worldwide, regardless of the work being done. They are the most important resource a company can have, and treating them well makes them priceless assets that can help make a business a success. Larmen de Guia Memorial College (LGMC) and Royal Oaks International School (ROIS) are among the many organizations that seek ways to retain the human factor; both are in the process of formalization, with people management at the top of the list. This study was therefore undertaken because the organization lacked a Human Resources Department and needed one to help retain these valued employees. The study was anchored on the concept that human resources consist of the people who perform the organization's activities and that all decisions affecting the workforce concern the organization's human resources functions. The study used a mixed method combining qualitative and quantitative approaches with focus group discussions. The design has three stages: problem conceptualization, case analysis, and output. The output from the survey and interviews captures the abstracted ideas for the proposed HR program for the said institution. Based on the findings of the study, it can be concluded that personnel in the institution are not managed from the correct perspective; moreover, personnel have no specific job descriptions. 
The hiring procedure is not extensive, nor were personnel given the chance to attend training that would aid them in job development and in enhancing their skills and talents. The compensation package offered by the institution is not commensurate with the services rendered. Lastly, it is concluded that the decisions rendered by the grievance committee are not perceived as fair and that the institution has failed to provide good motivation and initiative for the employees to be more productive.
Keywords: employee benefits, employee relations, human resources and management, people management, recruitment, trainings
Procedia PDF Downloads 319
2043 The Desire for Significance & Memorability in Popular Culture: A Cognitive Psychological Study of Contemporary Literature, Art, and Media
Authors: Israel B. Bitton
Abstract:
“Memory” is associated with various phenomena, from physical to mental, personal to collective, and historical to cultural. As part of a broader exploration of memory studies in philosophy and science (slated for academic publication October 2021), this specific study employs analytical methods of cognitive psychology and philosophy of memory to theorize that A) the primary human will (drive) is a will to significance, in that every human action and expression can be rooted in a most primal desire to be cosmically significant (however that is individually perceived); and B) that the will to significance manifests as the will to memorability, an innate desire to be remembered by others after death. In support of these broad claims, a review of various popular culture “touchpoints”, historic and contemporary records spanning literature, film and television, traditional news media, and social media, is presented to demonstrate how this theory is repeatedly and commonly expressed (and has long been) by many popular public figures as well as “everyday people.” Though the theory was developed before COVID, the crisis only increased its relevance: so many people were forced to die alone, leaving them and their loved ones to face even greater existential angst than what ordinarily accompanies death, since the usual expectations for one's “final moments” were shattered. To underscore this issue of, and response to, what can be considered a sociocultural “memory gap,” this study concludes with a summary of several projects launched by journalists at the height of the pandemic to document the memorable human stories behind COVID's tragic warp-speed death toll. Analyzed through the lens of Viktor E. Frankl's psychoanalytical perspective on “existential meaning,” these projects show how countless individuals were robbed of the last wills and testaments to their self-significance and memorability typically afforded to the dying and the aggrieved. 
The resulting insight ought to inform how government and public health officials determine what is truly “non-essential” to human health, physical and mental, in times of crisis.
Keywords: cognitive psychology, covid, neuroscience, philosophy of memory
Procedia PDF Downloads 187
2042 Phase Optimized Ternary Alloy Material for Gas Turbines
Authors: Mayandi Ramanathan
Abstract:
Gas turbine blades see the most aggressive thermal stress conditions within the engine, due to turbine entry temperatures in the range of 1500 to 1600°C, but, in synchronization with other functional components, they must readily deliver efficient performance while incurring minimal overhaul and repair costs during a service life of up to 5 million flying miles. The blades rotate at very high rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep: during service under high temperatures and loads, the blade will deform, lengthen, and rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades. The proposed advanced Ti alloy material needs a process that provides strategic orientation of metallic ordering, uniformity in composition, and high metallic strength. A 25% Ta/(Al+Ta) ratio ensures TaAl3 phase formation, whereas a 51% Al/(Al+Ti) ratio ensures formation of mixed α-Ti3Al and γ-TiAl phases, and the three-phase combination ensures minimal Al excess (~1.4% Al excess), unlike Ti-47Al-2Cr-2Nb, which has a significant Al excess (~5%) that could affect the service life of turbine blades. This presentation will summarize the additive manufacturing and heat treatment process conditions used to fabricate a turbine blade with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. A summary of thermo-mechanical test results, such as high-temperature tensile strength, creep strain rate, thermal expansion coefficient, and fracture toughness, will be presented. The improvement in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. 
Phase compositions will be quantified, and a summary of their correlation with creep strain rate will be presented.
Keywords: gas turbine, aerospace, specific strength, creep, high temperature materials, alloys, phase optimization
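The two composition ratios quoted above, together with the requirement that atomic fractions sum to one, fully determine the ternary composition. The short sketch below is our own arithmetic, not stated in the abstract; it recovers an Al fraction close to the Ti-43Al matrix the abstract mentions.

```python
# Solve for atomic fractions (Ti, Al, Ta) from the abstract's two ratios:
#   Ta / (Al + Ta) = 0.25  and  Al / (Al + Ti) = 0.51,  with Ti + Al + Ta = 1.
# This derivation is ours; the abstract only states the ratios and "Ti-43Al".

ta_over_al_ta = 0.25
al_over_al_ti = 0.51

# Ta/(Al+Ta) = r  =>  Ta = Al * r / (1 - r)
ta_per_al = ta_over_al_ta / (1.0 - ta_over_al_ta)   # = 1/3
# Al/(Al+Ti) = s  =>  Ti = Al * (1 - s) / s
ti_per_al = (1.0 - al_over_al_ti) / al_over_al_ti

al = 1.0 / (1.0 + ti_per_al + ta_per_al)
ti = al * ti_per_al
ta = al * ta_per_al

# Al comes out near 43.6 at.%, consistent with the stated Ti-43Al matrix.
print(f"Ti {ti:.1%}, Al {al:.1%}, Ta {ta:.1%}")
```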
Procedia PDF Downloads 182
2041 Loss of Function of Only One of Two CPR5 Paralogs Causes Resistance Against Rice Yellow Mottle Virus
Authors: Yugander Arra, Florence Auguy, Melissa Stiebner, Sophie Chéron, Michael M. Wudick, Van Schepler-Luu, Sébastien Cunnac, Wolf B. Frommer, Laurence Albar
Abstract:
Rice yellow mottle virus (RYMV) causes one of the most important diseases affecting rice in Africa. The most promising strategy to reduce yield losses is the use of highly resistant varieties. The resistance gene RYMV2 is a homolog of the Arabidopsis constitutive expression of pathogenesis-related protein-5 (AtCPR5) nucleoporin gene. Resistance alleles originate from the African cultivated rice Oryza glaberrima, which is rarely cultivated, and are characterized by frameshifts or early stop codons, leading to a non-functional or truncated protein. Rice possesses two paralogs of CPR5, and the functions of these genes are unclear. Here, we evaluated the roles of the two candidate rice nucleoporin paralogs OsCPR5.1 (pathogenesis-related gene 5; RYMV2) and OsCPR5.2 by CRISPR/Cas9 genome editing. Despite striking sequence and structural similarity, only loss of function of OsCPR5.1 led to full resistance, while oscpr5.2 loss-of-function mutants remained susceptible. Short N-terminal deletions in OsCPR5.1 also did not lead to resistance. In contrast to Atcpr5 mutants, neither OsCPR5.1 nor OsCPR5.2 knockout mutants showed substantial growth defects. Taken together, the candidate nucleoporin OsCPR5.1, but not its close homolog OsCPR5.2, plays a specific role in susceptibility to RYMV, possibly by mediating the import of viral RNA or protein into the nucleus. Whereas gene introgression from O. glaberrima into high-yielding O. sativa varieties is impaired by strong sterility barriers and the negative impact of linkage drag, genome editing of OsCPR5.1, while maintaining OsCPR5.2 activity, provides a promising strategy to generate O. sativa elite lines that are resistant to RYMV.
Keywords: CRISPR Cas9, genome editing, knock out mutant, recessive resistance, rice yellow mottle virus
Procedia PDF Downloads 122
2040 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management
Authors: A. Giannakopoulos, S. B. Buckley
Abstract:
Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. In South Africa, for example, it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on findings by the author and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is by now little doubt in almost anyone's mind that mathematics is involved, to a greater or lesser extent, in all fields: from symbols to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured problems. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and this diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the author to view problem solving as a product. The authors argue that using mathematical approaches to problem solving in knowledge management, and treating problem solving as a product, will empower employees through further training to tackle analogical and complex problems. 
The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, how they are connected, and how they advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation) and of knowledge creation, sharing, dissemination, and storage, the five pillars of knowledge management. This model also expands on Gray's framework on KM practices and problem solving, and opens the door to a new approach to training employees in general and in domain-specific problems, which can be adapted in any type of organisation.
Keywords: critical thinking, knowledge management, mathematics, problem solving
Procedia PDF Downloads 600
2039 Mitigating Self-Regulation Issues in the Online Instruction of Math
Authors: Robert Vanderburg, Michael Cowling, Nicholas Gibson
Abstract:
Mathematics is one of the core subjects taught in the Australian K-12 education system and is considered an important component of future studies in areas such as engineering and technology. In addition, Australia has been a world leader in distance education due to the vastness of its geographic landscape. Despite this, research is still needed on distance mathematics instruction. Even though delivery of curriculum has given way to online studies, with a resultant push for computer-based (PC, tablet, smartphone) math instruction, much instruction still involves practice problems similar to those in the original curriculum packs, without the ability for students to self-regulate their learning using the full interactive capabilities of these devices. Given this need, this paper addresses issues students have during online instruction. The study consists of 32 students struggling with mathematics enrolled in a math tutorial conducted in an online setting. It used a case study design to understand some of the blockades hindering the students' success. Data were collected by tracking students' practice and quizzes, tracking engagement with the site, recording one-on-one tutorials, and interviewing the students. Results revealed that when students face cognitively straining tasks in an online instructional setting, the first thing to dissipate is their ability to self-regulate. The results also revealed that instructors could ameliorate the situation, and they provided useful data on strategies for designing future online tasks. Specifically, instructors could utilize cognitive dissonance strategies to reduce the cognitive drain of the tasks online. They could segment the instruction process to reduce the cognitive demands of the tasks and provide in-depth self-regulatory training, freeing mental capacity for the mathematics content. 
Finally, instructors could make specific scheduling and assignment-structure changes to reduce the number of student-centered self-regulatory tasks in the class. These findings will be discussed in more detail and summarized in a framework that can be used for future work.
Keywords: digital education, distance education, mathematics education, self-regulation
Procedia PDF Downloads 137
2038 About the State of Students’ Career Guidance in the Conditions of Inclusive Education in the Republic of Kazakhstan
Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip
Abstract:
Over the years of independence, Kazakhstan has not only ratified international documents regulating the rights of children to inclusive education but has also developed its own inclusive educational policy. Along with this, the state pays particular attention to high school students' preparedness for professional self-determination. However, a number of problematic issues in this field have been revealed, such as the lack of systemic mechanisms coordinating stakeholders' actions in preparing schoolchildren for a conscious choice of an in-demand profession that meets their individual capabilities and special educational needs (SEN). Analysis of the current situation indicates that school graduates' adaptation to the labor market does not meet the existing demands of society. According to the Ministry of Labor and Social Protection of the Population of the Republic of Kazakhstan, about 70% of Kazakhstani school graduates find it difficult to choose a profession, 87% of schoolchildren make their career choice under the influence of parents and school teachers, and 90% of schoolchildren and their parents have no idea about the professions most in demand on the market. The results of a study conducted by Korlan Syzdykova in 2016 indicated the urgent need of Kazakhstani school graduates for extensive information about in-demand professions and for professional assistance in choosing a profession in accordance with their individual skills, abilities, and preferences. The results of a survey conducted by the Information and Analytical Center among heads of colleges in 2020 showed that, despite significant steps in creating conditions for students with SEN, such students face challenges in their studies because of the poor career guidance provided to them in schools. The results of a study conducted by the Center for Inclusive Education of the National Academy of Education named after Y. 
Altynsarin in the state's general education schools in 2021 demonstrated the lack of career guidance and of pedagogical and psychological support for children with SEN. To investigate these issues, a further study was conducted to examine the state of students' career guidance and socialization, taking into account their SEN. The hypothesis of this study proposed that, to prepare school graduates for a conscious career choice, school teachers and specialists need to develop their competencies in the early identification of students' interests, inclinations, and SEN, and to ensure the necessary support for them. Five regions of the country, chosen according to geographical location, were involved in the study. A triangulation approach was utilized to ensure the credibility and validity of the research findings, including both theoretical (analysis of existing statistical data, legal documents, and results of previous research) and empirical (school survey for students; interviews with parents, teachers, and representatives of school administration) methods. The data were analyzed independently and compared to each other. The survey included questions related to the provision of pedagogical support for school students in making their career choice. Ethical principles were observed in developing the methodology and in collecting, analyzing, and distributing the results. Based on the results, methodological recommendations on students' career guidance were developed for school teachers and specialists, taking into account students' individual capabilities and SEN.
Keywords: career guidance, children with special educational needs, inclusive education, Kazakhstan
Procedia PDF Downloads 177
2037 Comparison of Zinc Amino Acid Complex and Zinc Sulfate in Diet for Asian Seabass (Lates calcarifer)
Authors: Kanokwan Sansuwan, Orapint Jintasataporn, Srinoy Chumkam
Abstract:
Asian seabass is one of the economically important fish of Thailand and other countries in Southeast Asia. Zinc is an essential trace metal for fish, vital to various biological processes and functions. It is required for normal growth and is indispensable in the diet. Therefore, the artificial diets offered to intensively cultivated fish must supply the zinc required by the animal's metabolism for health maintenance and high weight gain rates. However, essential elements must also be in a form available to the organism. Thus, this study was designed to evaluate the application of different zinc forms, organic zinc (zinc amino acid complex) and inorganic zinc (zinc sulfate), as feed additives in diets for Asian seabass. Three groups with five replicates of fish (mean weight 22.54 ± 0.80 g) were given a basal diet either unsupplemented (control) or supplemented with 50 mg Zn kg⁻¹ as zinc sulfate (ZnSO₄) or zinc amino acid complex (ZnAA). The feeding regimen was initially set at 3% of body weight per day, and the feed amount was then adjusted weekly according to the actual feeding performance. The experiment was conducted for 10 weeks. Fish supplemented with ZnAA had the highest values for all studied growth indicators (weight gain, average daily growth, and specific growth rate), followed by fish fed the ZnSO₄ diet, with the lowest values in fish fed the control diet. Lysozyme and superoxide dismutase (SOD) activities of fish supplemented with ZnAA were significantly higher than in all other groups (P < 0.05). Fish supplemented with ZnSO₄ exhibited significantly higher digestive enzyme activities (protease, pepsin, and trypsin) than the ZnAA and control groups (P < 0.05). However, no significant differences were observed for RNA and protein in muscle (P > 0.05). 
The results of the present work suggest that ZnAA is a better source of trace elements for Asian seabass, based on the growth performance and immunity indices examined in this study.
Keywords: Asian seabass, growth performance, zinc amino acid complex (ZnAA), zinc sulfate (ZnSO₄)
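The growth indicators named above have standard definitions in aquaculture; specific growth rate, in particular, is conventionally SGR = 100 · (ln W_final − ln W_initial) / days. The sketch below uses the abstract's initial weight (22.54 g) and 10-week duration, but the final weight is an invented placeholder, not a study result.

```python
import math

# Standard aquaculture growth indicators. The initial weight and trial length
# come from the abstract; the final weight below is hypothetical.

def weight_gain(w_initial, w_final):
    """Absolute weight gain in grams."""
    return w_final - w_initial

def average_daily_growth(w_initial, w_final, days):
    """Mean daily weight gain, g/day."""
    return (w_final - w_initial) / days

def specific_growth_rate(w_initial, w_final, days):
    """SGR = 100 * (ln W_final - ln W_initial) / days, in %/day."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

w0, days = 22.54, 70   # initial mean weight (g) and 10-week trial, per abstract
w1 = 85.0              # hypothetical final mean weight (g)

print(f"Weight gain: {weight_gain(w0, w1):.2f} g")
print(f"ADG: {average_daily_growth(w0, w1, days):.3f} g/day")
print(f"SGR: {specific_growth_rate(w0, w1, days):.2f} %/day")
```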
Procedia PDF Downloads 183
2036 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters, including specific heat capacity, thermal expansion, and isobaric and isothermal compressibility coefficients, in natural seawater systems. The study employs advanced machine learning frameworks, Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost, with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, Arabian Sea, thermodynamics, machine learning
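A stacked ensemble over the base learners named above can be sketched with scikit-learn. The temperature and salinity ranges below match the abstract, but the heat-capacity data are a synthetic stand-in for the measured Arabian Sea dataset, and the meta-learner (Ridge) is our own choice, not necessarily the authors'.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented stand-in data: temperature (K) and salinity (ppt) -> heat capacity
# (J/(kg*K)). A smooth synthetic relation replaces the actual measurements.
T = rng.uniform(293.15, 318.15, 400)
S = rng.uniform(0.5, 35.0, 400)
Cp = 4217.0 - 3.7 * S - 0.5 * (T - 293.15) + rng.normal(0.0, 1.0, T.size)

X = np.column_stack([T, S])
X_tr, X_te, y_tr, y_te = train_test_split(X, Cp, random_state=0)

# Stacked ensemble over the base learners named in the abstract.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("ada", AdaBoostRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {r2_score(y_te, stack.predict(X_te)):.3f}")
```

On real measurements, the reported R² > 0.99 would of course depend on the actual data and hyperparameter tuning; the point here is only the structure of the stacking pipeline.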
Procedia PDF Downloads 20
2035 Effect of Collection Technique of Blood on Clinical Pathology
Authors: Marwa Elkalla, E. Ali Abdelfadil, Ali. Mohamed. M. Sami, Ali M. Abdel-Monem
Abstract:
To assess the impact of the blood collection technique on clinical pathology markers and to establish reference intervals, a study was performed using normal, healthy C57BL/6 mice. Both sexes were employed, and animals were randomly assigned to groups according to the phlebotomy technique used. Blood was drawn in one of four ways: intracardiac (IC), caudal vena cava (VC), caudal vena cava plus peritoneal collection of any extravasated blood (MC), or retroorbital phlebotomy (RO). Serum biochemistries, including liver function tests, together with a complete blood count with differential and a platelet count, were analysed from the blood and serum samples. Red blood cell count, haemoglobin (p < 0.002), hematocrit, alkaline phosphatase, albumin, total protein, and creatinine were all significantly greater in female mice. Platelet counts, specific white blood cell counts (total, neutrophil, lymphocyte, and eosinophil), globulin, amylase, and the BUN/creatinine ratio were all greater in males. The VC approach seemed marginally superior to the IC approach for the characteristics under consideration and was linked to the least variation in both sexes. Transaminase levels showed the greatest variation between study groups. Aspartate aminotransferase (AST) values showed less fluctuation with the VC approach, while alanine aminotransferase (ALT) values were similar between the IC and VC groups. Transaminase levels varied considerably in the MC and RO groups. We found that the RO approach, the only one tested that allows repeated sample collection, yielded acceptable ALT readings. The findings show that test results are significantly affected by the phlebotomy technique and that the VC or IC techniques provide the most reliable data. 
When organising a study and comparing data to reference ranges, the ranges supplied here by collection method and sex can be used to determine the best approach to data collection. The authors suggest that each researcher establish norms based on the procedures used in his or her own lab.
Keywords: clinical, pathology, blood, effect
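The reference intervals described above are conventionally derived per group (collection method and sex) from the sample mean and standard deviation. A minimal sketch, with entirely hypothetical ALT values, might look like this:

```python
import statistics

def reference_interval(values, k=1.96):
    """Parametric 95% reference interval: mean +/- k sample standard deviations."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return (mean - k * sd, mean + k * sd)

# Hypothetical ALT readings (U/L) from one collection method in one sex.
alt_vc = [28, 31, 25, 30, 27, 33, 29, 26, 32, 30]
low, high = reference_interval(alt_vc)
```

In practice, nonparametric percentile-based intervals are often preferred when the distribution is skewed; the parametric form is shown only for brevity.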
Procedia PDF Downloads 97
2034 A Computerized Tool for Predicting Future Reading Abilities in Pre-Readers Children
Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix
Abstract:
Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding which teaching methods are best to develop. It is estimated that four to six percent of school-age children suffer from specific developmental disorders that impair learning. Findings from people with dyslexia and from typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems. In comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. Recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was therefore to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test specifically measures the time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and error counts were recorded. One year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints). 
Finally, the results of this study support the validity of the DiagLECT test for predicting reading outcomes: the better a child's oculomotor skills, the better his or her reading abilities are likely to be.
Keywords: vision, attention, oculomotor processes, reading, preschoolers
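Screening with normative HT/VT data typically reduces to comparing a child's naming times against age norms. A minimal sketch, with a hypothetical 1.5-SD cutoff and invented norm values (the study's actual normative data are not reproduced here):

```python
def oculomotor_flag(ht, vt, norms):
    """Flag a likely oculomotor impairment when a naming time exceeds the
    age norm by more than 1.5 standard deviations (illustrative cutoff)."""
    flags = {}
    for label, value in (("HT", ht), ("VT", vt)):
        mean, sd = norms[label]
        flags[label] = (value - mean) / sd > 1.5
    return flags

# Hypothetical norms for 5-year-olds: (mean seconds, SD).
norms = {"HT": (35.0, 6.0), "VT": (40.0, 7.0)}
flags = oculomotor_flag(ht=48.0, vt=41.0, norms=norms)
print(flags)
```

A child flagged on either measure would be a candidate for closer follow-up, mirroring the screening use case described in the abstract.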
Procedia PDF Downloads 148
2033 Organ Dose Calculator for Fetus Undergoing Computed Tomography
Authors: Choonsik Lee, Les Folio
Abstract:
Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is therefore critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at the gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses at consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the scan range of interest. We then compared dose conversion coefficients for major fetal organs in the abdominal-pelvic CT scan of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses from the CT technical parameters and the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose, in all fetuses in abdominal-pelvic scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). 
The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates in emergency CT scans, as well as for patient dose monitoring.
Keywords: computed tomography, fetal dose, pregnant women, radiation dose
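The per-slice summation the abstract describes for helical scans can be sketched as follows. The coefficient values and the kidney label are hypothetical, not the study's published library; the only assumption carried over from the abstract is that each 1 cm axial slice has a precomputed organ dose conversion coefficient (mGy/mGy) and that dose scales with CTDIvol:

```python
def fetal_organ_dose(ctdi_vol, slice_coeffs, start, stop):
    """Approximate a helical-scan organ dose as the sum of axial-slice
    contributions: dose = CTDIvol * sum of the 1-cm-slice conversion
    coefficients (mGy/mGy) inside the scan range [start, stop)."""
    return ctdi_vol * sum(slice_coeffs[start:stop])

# Hypothetical kidney coefficients for a few 1 cm slices of one phantom.
kidney_coeffs = [0.02, 0.15, 0.60, 0.85, 0.80, 0.55, 0.10]
dose = fetal_organ_dose(ctdi_vol=10.0, slice_coeffs=kidney_coeffs, start=1, stop=6)
```

A graphical tool like the one described would look up the coefficient table for the gestational age closest to the patient's and apply exactly this kind of summation over the prescribed scan range.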
Procedia PDF Downloads 144
2032 Effect of Anionic Lipid on Zeta Potential Values and Physical Stability of Liposomal Amikacin
Authors: Yulistiani, Muhammad Amin, Fasich
Abstract:
The surface charge of a nanoparticle is a very important consideration in a pulmonary drug delivery system. The zeta potential (ZP) is related to the surface charge and can predict the stability of nanoparticles such as nebules of liposomal amikacin. An anionic lipid such as 1,2-dipalmitoyl-sn-glycero-3-phosphatidylglycerol (DPPG) is expected to contribute to the physical stability of liposomal amikacin and to an optimal ZP value. A suitable ZP can improve drug release profiles at specific sites in the alveoli as well as stability in the dosage form. This study aimed to analyze the effect of DPPG on ZP values and the physical stability of liposomal amikacin. Liposomes were prepared using the reverse-phase evaporation method. Liposomes consisting of DPPG, 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC), cholesterol, and amikacin were formulated in five compositions, 0/150/5/100, 10/150/5/100, 20/150/5/100, 30/150/5/100, and 40/150/5/100 (w/v), respectively. A chloroform/methanol mixture in the ratio of 1:1 (v/v) was used as the solvent to dissolve the lipids. The systems were adjusted in phosphate buffer at pH 7.4. Nebules of liposomal amikacin were produced using a vibrating nebulizer and then characterized by X-ray diffraction, differential scanning calorimetry, particle size and zeta potential analysis, and scanning electron microscopy. The amikacin concentration from liposome leakage was determined by an immunoassay method. The study revealed that the presence of DPPG could increase the ZP value. The addition of 10 mg DPPG to the composition increased the ZP value to 3.70 mV (negatively charged). The optimum ZP value was reached at -28.78 ± 0.70 mV with a nebule particle size of 461.70 ± 21.79 nm. The nebulizing process altered parameters such as particle size, the conformation of lipid components, and the amount of surface charge of the nanoparticles, all of which could influence the ZP value. 
These parameters might have profound effects on the application of nebules in the alveoli; however, negatively charged nanoparticles with too high a ZP magnitude are undesirable in this system because of increased macrophage uptake and pulmonary clearance. Therefore, the liposome ratio of 20/150/5/100 (w/v) produced the most stable colloidal system and may be applicable to a pulmonary drug delivery system.
Keywords: anionic lipid, dipalmitoylphosphatidylglycerol, liposomal amikacin, stability, zeta potential
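Zeta potential analyzers of the kind used here typically measure electrophoretic mobility and convert it to ZP; for aqueous dispersions the Smoluchowski approximation is the usual conversion. A sketch under that assumption (the mobility value is hypothetical, chosen only to land near the ZP magnitude reported above; water at 25 °C is assumed for the defaults):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def zeta_smoluchowski(mobility, viscosity=0.89e-3, rel_permittivity=78.5):
    """Zeta potential (V) from electrophoretic mobility (m^2 V^-1 s^-1)
    via the Smoluchowski approximation: zeta = mu * eta / (eps_r * eps0)."""
    return mobility * viscosity / (rel_permittivity * EPS0)

# A mobility of about -2.25e-8 m^2/(V s) corresponds to roughly -29 mV.
zeta_mV = zeta_smoluchowski(-2.25e-8) * 1e3
```

The Smoluchowski form holds when the particle radius is much larger than the Debye length, which is reasonable for ~460 nm nebules in a physiological-strength buffer.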
Procedia PDF Downloads 341
2031 Effect of Mistranslating tRNA Alanine on Polyglutamine Aggregation
Authors: Sunidhi Syal, Rasangi Tennakoon, Patrick O'Donoghue
Abstract:
Polyglutamine (polyQ) diseases are a group of neurodegenerative diseases caused by repeats of the codon for the amino acid glutamine (Q) in the DNA, which translate into an elongated polyQ tract in the protein. The pathological explanation is that the polyQ tract forms cytotoxic aggregates in neurons, leading to their degeneration. There are currently no cures or preventative treatments for these diseases, although their symptoms can be relieved. This study focuses on Huntington's disease, a polyQ disease in which aggregation is caused by extended cytosine, adenine, guanine (CAG) codon repeats in the huntingtin (HTT) gene, which encodes the huntingtin protein. Using this principle, we attempted to create six models by mutating the wildtype tRNA alanine variant tRNA-AGC-8-1 to carry the glutamine anticodons CUG and UUG, so that alanine is incorporated at glutamine sites in polyQ tracts. In the process, we were successful in obtaining tAla-8-1 CUG mutant clones in HTT exon 1 plasmids with a polyQ tract of 23Q (non-pathogenic model) and 74Q (disease model). These plasmids were transfected into mouse neuroblastoma cells to characterize protein synthesis and aggregation in normal and mistranslating cells and to investigate the effect of replacing glutamines with alanines on the disease phenotype. Notably, we observed no noteworthy differences in mean fluorescence between the CUG mutants for 23Q or 74Q; however, the Triton X-100 assay revealed a significant reduction in insoluble 74Q aggregates. We were unable to create a tAla-8-1 UUG mutant clone, and determining the difference between the effects of the two glutamine anticodons may enrich our understanding of the disease phenotype. In conclusion, by generating structural disruption with the amino acid alanine, it may be possible to find ways to minimize the toxicity of Huntington's disease caused by these polyQ aggregates. 
Further research is needed to advance knowledge in this field by identifying the cellular and biochemical impact of specific tRNA variants found naturally in human genomes.
Keywords: Huntington's disease, polyQ, tRNA, anticodon, clone, overlap PCR
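The pairing between the glutamine codons and the mutant anticodons used above follows directly from base-pairing rules: an anticodon read 5'→3' is the reverse complement of the mRNA codon. A minimal illustration:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon_rna):
    """Return the anticodon (5'->3') as the reverse complement of an mRNA codon."""
    return "".join(COMPLEMENT[base] for base in reversed(codon_rna))

# The two glutamine codons, CAG and CAA, pair with the two anticodons
# engineered into the tRNA-Ala variant.
cag_anticodon = anticodon("CAG")
caa_anticodon = anticodon("CAA")
```

This is why swapping the alanine tRNA's anticodon for CUG or UUG redirects alanine to glutamine positions: the aminoacylation identity of the tRNA stays with alanine while the decoding specificity follows the new anticodon.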
Procedia PDF Downloads 46
2030 An Advanced Automated Brain Tumor Diagnostics Approach
Authors: Berkan Ural, Arif Eser, Sinan Apaydin
Abstract:
Medical image processing has become a challenging task, and the processing of brain MRI images is one of the more difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, used for early prediction of brain tumors by means of advanced image processing and probabilistic neural network methods, respectively. In this approach, advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a colored circle by the computer-aided diagnostics program. Then, the tumor is segmented and morphological operations are applied to increase the visibility of the tumor area. Meanwhile, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and advanced classification steps, the tumor area and the type of the tumor are obtained. A future aim of this study is to detect the severity of lesions across classes of brain tumor through advanced multi-class classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the overall results. 
The preliminary results demonstrate high classification accuracy for the neural network structure. These results also motivate us to extend this framework to detect and localize tumors in other organs.
Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition
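The detection stage described above (threshold, morphological clean-up, then isolate the tumor region) can be sketched in a few lines. This is a toy illustration on a synthetic image, not the authors' MATLAB pipeline; the threshold, structuring element, and image are all invented for the example:

```python
import numpy as np
from scipy import ndimage

def segment_bright_region(img, thresh=0.6):
    """Toy pipeline mirroring the described steps: threshold, morphological
    opening to suppress small noise, then keep the largest connected component."""
    mask = img > thresh * img.max()
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic "MRI slice": a bright 5x5 blob plus one bright pixel of noise.
img = np.zeros((20, 20))
img[5:10, 5:10] = 1.0
img[15, 15] = 1.0
tumor = segment_bright_region(img)
print(tumor.sum())  # area (in pixels) of the detected region
```

Shape-based features such as area, perimeter, and eccentricity can then be computed on the resulting mask and fed to a classifier, which is the role the probabilistic neural network plays in the abstract's system.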
Procedia PDF Downloads 421