Search results for: loop filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1337


167 Effect of Locally Produced Sweetened Pediatric Antibiotics on Streptococcus mutans Isolated from the Oral Cavity of Pediatric Patients in Syria - in Vitro Study

Authors: Omar Nasani, Chaza Kouchaji, Muznah Alkhani, Maisaa Abd-alkareem

Abstract:

Objective: To evaluate the influence of the sweetening agents used in pediatric medications on the growth of Streptococcus mutans colonies and their effect on cariogenic activity in the oral cavity; no previous studies have been registered in Syrian children. Methods: Specimens were isolated from the oral cavity of pediatric patients, and an in-vitro study was then performed on locally manufactured liquid pediatric antibiotic drugs containing natural or synthetic sweeteners. The selected antibiotics were Ampicillin (sucrose), Amoxicillin (sucrose), Amoxicillin + Flucloxacillin (sorbitol), and Amoxicillin + Clavulanic acid (sorbitol or sucrose). These antibiotics have a known inhibitory effect on gram-positive aerobic/anaerobic bacteria, especially Streptococcus mutans strains in children’s oral biofilm. Five colonies were studied with each antibiotic. Filter discs 6 mm in diameter were saturated with each antibiotic. Incubated culture media were compared with each other and with control antibiotic discs sourced from Abtek Biologicals Ltd. Results were evaluated by measuring the diameter of the inhibition zones. Results: The inhibition zones around discs of antibiotics sweetened with sorbitol were larger than those around discs sweetened with sucrose. The effect was most pronounced when comparing Amoxicillin + Clavulanic acid (25 mm with sucrose versus 27 mm with sorbitol). The highest inhibitory effect was observed with Amoxicillin + Flucloxacillin sweetened with sorbitol (38 mm), whereas the lowest inhibitory effect was observed with Amoxicillin and Ampicillin sweetened with sucrose (22 mm and 21 mm, respectively). Conclusion: The results of this study indicate that although all selected antibiotics produced an inhibitory effect on S. mutans, sucrose weakened that inhibitory action to varying degrees, while antibiotic formulations containing sorbitol matched the effects of the control antibiotics. This study calls attention to the effects of the sweeteners included in pediatric drugs on oral hygiene and tooth decay.

Keywords: pediatric, dentistry, antibiotics, streptococcus mutans, biofilm, sucrose, sugar free

Procedia PDF Downloads 44
166 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated, opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
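The three performance measures tracked above can be made concrete with a minimal sketch (not the authors' code; the toy labels and Maxent-style suitability scores below are invented for illustration):

```python
# AUC via the Mann-Whitney U statistic, plus sensitivity/specificity at a
# fixed threshold -- the metrics compared with and without filtering.

def auc(labels, scores):
    """Area under the ROC curve: fraction of positive/negative pairs ranked correctly."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """True-positive and true-negative rates at a fixed threshold."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n

labels = [1, 1, 1, 0, 0, 0]                 # presence (1) vs background (0)
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]     # hypothetical model outputs
print(auc(labels, scores))                  # → 0.888...
print(sensitivity_specificity(labels, scores))
```

In the study itself these quantities would be computed on held-out records before and after each filter, so that the quality gain can be weighed against the sample-size loss.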

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 169
165 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅

Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio

Abstract:

Plutonium is present in varying concentrations in environmental and biological samples as a result of nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols, which is the main reason for determining its concentration in the atmosphere. In addition, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in sample preparation and accurate measurement methods for detecting the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique with detection limits around the femtogram (10⁻¹⁵ g) level. The AMS determinations include the chemical isolation of Pu: separation involves an acidic digestion and a radiochemical purification using an anion exchange resin, and the source is finally prepared by pressing the Pu into the corresponding cathodes. To the authors' knowledge, these aerosols showed deviations of the ²³⁵U/²³⁸U ratio from its natural value, suggesting that an anthropogenic source could be altering it. Determining the concentration of the isotopes of Pu can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean ²³⁹Pu activity concentration of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapons-production source; these results corroborate that an anthropogenic influence is increasing the concentration of radioactive material in PM₂.₅. 
To the authors' knowledge, ²³⁹⁺²⁴⁰Pu activity concentrations of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of 0.17 have been reported for Total Suspended Particles (TSP). The preliminary results in the MZVM show higher activity concentrations of the isotopes of Pu (40 to 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These results are on the order of the activity concentrations of Pu in high-purity weapons-grade material.
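As a rough cross-check of the scale involved (a sketch, not from the paper: the half-life and Avogadro constant are standard values, and the activity is the reported mean), the ²³⁹Pu activity concentration can be converted to a mass concentration, which lands in the femtogram range that AMS is said to resolve:

```python
# Convert the reported 239Pu activity concentration to a mass concentration
# via the specific activity A/m = ln(2) * N_A / (M * T_half).
import math

N_A = 6.022e23           # atoms per mole (Avogadro constant)
T_HALF_239 = 24110.0     # 239Pu half-life, years
YEAR_S = 3.156e7         # seconds per year

lam = math.log(2) / (T_HALF_239 * YEAR_S)      # decay constant, 1/s
specific_activity = lam * N_A / 239.0          # Bq per gram of 239Pu

activity = 280e-9                              # reported mean, Bq per m^3
mass_per_m3 = activity / specific_activity     # grams of 239Pu per m^3

print(f"{specific_activity:.2e} Bq/g")         # ~2.3e9 Bq/g
print(f"{mass_per_m3 * 1e15:.2f} fg/m^3")      # ~0.12 fg/m^3
```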

Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio

Procedia PDF Downloads 139
164 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools

Authors: Kim Norgaard Helmersen, Jan Silberberger

Abstract:

Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint - dominant in architectural discourse - is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy-of-science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the on-going research project. The framework suggests that architecture - as a field of knowledge production - is mainly dominated by three epistemological positions, which will be presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework brings forth the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike the traditional science disciplines, architecture lacks a community-wide, shared pool of codified references: architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to the codified theories referenced in traditional academic writing, they are not used as systematically. 
As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as mere judges of taste and accused of a rationality rooted in culturally relative aesthetic concepts of taste closely linked to questions of style, but arguably their supposedly subjective reasoning in fact forms part of larger systems of thought. Putting architectural ‘styles’ under a loupe and tracing their philosophical roots can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific ‘styles’, and thereby schools of thought, in current architectural discourse, the study could potentially also point to some mutations of the conventional - something actually ‘new’ - of potentially high value for architectural design education.

Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture

Procedia PDF Downloads 109
163 Understanding the Social Movements around the ‘Rohingya Crisis’ within the Political Process Model

Authors: Aklima Jesmin, Ubaidur Rob, M. Ashrafur Rahman

Abstract:

The Rohingya population of Arakan State in Myanmar is one of the most persecuted ethnic minorities of the 21st century. According to the Universal Declaration of Human Rights (UDHR), all human beings are born free and equal in dignity and rights. However, this population is systematically excluded from this universal proclamation of human rights because they are Rohingya, which signifies ‘other’. Based on the accessible and available literature on the Rohingya issue, this study first found a chronological pattern of human rights violations against the ethnic Rohingya that follows the pathology of the Holocaust in this 21st century of human civilization. These violations have been made possible by modern technology and bureaucracy and have been carried out through authorization, routinization, and dehumanization, not only in formal institutions but in society as a whole. This apparently never-ending situation confronts any author with the problem of a scarcity of available scientific articles. The most important sources are therefore international daily newspapers, social media, and the official webpages of non-state actors for day-to-day updates. Although this challenges the validity and objectivity of the information, addressing the critical ongoing human rights violations against the Rohingya population can become a base for further work on this issue. One aspect of this paper is to document all the social movements from August 2017 to date. The paper finds that even though historically only human rights violations against the Rohingya seemed to occur, the process of social movement had simultaneously started, and it can be traced more clearly after the military campaign in 2017. Therefore, the Rohingya crisis can be conceptualized as one ‘campaign’ movement for justice, not as episodic events, especially within the Political Process Model rather than other social movement theories. 
This model identifies the role of international political movements, as well as the role of non-state actors, as more powerful than any episode of violence conducted against the Rohingya in reframing the issue, blaming and shaming the Myanmar government, and creating strategic opportunities for social change. The lack of empowerment of the affected Rohingya population was found to be the gap preventing the utilization of this strategic opportunity; it can also affect their capacity to reframe their rights and to manage the campaign for their justice. Therefore, this should be placed at the heart of the international policy agenda within the broader socio-political movement for the justice of the Rohingya population. Without ensuring the human rights of the Rohingya population, achieving the promise of the United Nations' Sustainable Development Goals - that no one be left behind - will be impossible.

Keywords: civilization, holocaust, human rights violation, military campaign, political process model, Rohingya population, sustainable development goal, social justice, social movement, strategic opportunity

Procedia PDF Downloads 255
162 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
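To illustrate how PLS builds a latent component from the original variables, here is a minimal one-component PLS1 sketch in pure Python; the data are invented, and the study's actual algorithms and penalized (lasso/Cauchy) variants are not reproduced:

```python
# One PLS1 component: the predictor weights w (the quantities whose sparse
# estimation the abstract discusses) maximize the covariance between the
# score t = Xw and the response y.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pls1_one_component(X, y):
    """Return one-component PLS regression coefficients for centered X, y."""
    p = len(X[0])
    # Weight vector proportional to X^T y, normalized to unit length.
    w = [dot([row[j] for row in X], y) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = X w, then regress y on t to get the loading q.
    t = [dot(row, w) for row in X]
    q = dot(t, y) / dot(t, t)
    # One-component coefficients: beta = q * w.
    return [q * v for v in w]

# Tiny centered toy data: 3 observations, 4 correlated predictors.
X = [[1.0, 1.0, 0.0, 0.1],
     [0.0, 0.0, 1.0, -0.2],
     [-1.0, -1.0, -1.0, 0.1]]
y = [1.0, 0.0, -1.0]
beta = pls1_one_component(X, y)
print(beta)
```

Note how the correlated first two predictors receive identical weights while the noise-like fourth predictor is weighted (here, exactly) zero; this grouping behavior is what the sparse, dependency-aware penalties in the abstract aim to strengthen.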

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, highly correlated data

Procedia PDF Downloads 13
161 Perception of Greek Vowels by Arabic-Greek Bilinguals: An Experimental Study

Authors: Georgios P. Georgiou

Abstract:

Infants are able to discriminate a number of sound contrasts in most languages. However, this ability weakens in adults, who may have difficulty accurately discriminating second language sound contrasts because they filter second language speech through the phonological categories of their native language. For example, Spanish speakers often struggle to perceive the difference between English /ε/ and /æ/ because neither vowel exists in their native language; they therefore assimilate these vowels to the closest phonological category of their first language. The present study aims to uncover the perceptual patterns of adult Arabic speakers with regard to the vowels of their second language, Greek. To date, no study has investigated the perception of Greek vowels by Arabic speakers; the present study thus contributes cross-linguistic research on new languages to the literature. For the purposes of the present study, 15 native speakers of Egyptian Arabic who live permanently in Cyprus and have adequate knowledge of Greek as a second language completed vowel assimilation and vowel-contrast discrimination (AXB) tests in their second language. The perceptual stimuli consisted of nonsense words containing vowels in both stressed and unstressed positions. The second language listeners’ patterns were analyzed through the Perceptual Assimilation Model, which makes testable hypotheses about the assimilation of second language sounds to the speakers’ native phonological categories and about discrimination accuracy for second language sound contrasts. The results indicated that the second language listeners assimilated pairs of Greek vowels to a single phonological category of their native language, resulting in a Category Goodness difference assimilation type for the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ vowel contrasts. 
By contrast, the members of the Greek unstressed /i/-/e/ vowel contrast were assimilated to two different categories, resulting in a Two Category assimilation type. Furthermore, the listeners could discriminate the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ contrasts only to a moderate degree, while the Greek unstressed /i/-/e/ contrast was discriminated to an excellent degree. Two main implications emerge from the results. First, there is a strong influence of the listeners’ native language on the perception of second language vowels. In Egyptian Arabic, contiguous vowel categories such as [i]-[e] and [u]-[o] are not phonemically distinct but are subject to allophonic variation; by contrast, the vowel contrasts /i/-/e/ and /o/-/u/ are phonemic in Greek. Second, the role of stress is significant for second language perception, since stressed versus unstressed vowel contrasts were perceived differently by the second language listeners.

Keywords: Arabic, bilingual, Greek, vowel perception

Procedia PDF Downloads 112
160 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe operation becomes increasingly important. This is clearly illustrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space may be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system; the premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite these strengths, AST struggles to find particularly sparse failures and can be inclined to find solutions similar to those found previously. Multi-fidelity learning can help overcome this: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework, an algorithm that uses multi-fidelity KWIK learners in an adversarial context to find failure modes. 
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning over a number of trials.
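The adversarial stress-testing idea can be caricatured with a toy sketch; plain random search stands in for the KWIK/reinforcement learners, and the one-dimensional dynamics, failure region, and horizon below are invented for illustration only:

```python
# Toy adaptive-stress-testing loop: an agent steps toward its goal while an
# adversary searches for disturbance sequences that push it into a failure
# region (position <= -2).
import random

def rollout(pushes):
    """Simulate the agent under a disturbance sequence; return the trajectory."""
    pos = 0
    traj = [pos]
    for push in pushes:
        pos += 1 + push        # agent steps +1, adversary adds its push
        traj.append(pos)
        if pos <= -2:          # failure region reached
            break
    return traj

def find_failure(horizon=5, trials=200, seed=0):
    """Random search over disturbance sequences for a failing trajectory."""
    rng = random.Random(seed)
    for _ in range(trials):
        pushes = [rng.choice([-2, -1, 0]) for _ in range(horizon)]
        traj = rollout(pushes)
        if traj[-1] <= -2:
            return pushes, traj
    return None

result = find_failure()
print(result)   # some disturbance sequence driving the agent to pos <= -2
```

An AST learner would replace the random sampler with a policy that is rewarded for approaching the failure region, and a multi-fidelity variant would run most of these rollouts at a coarse time-step, promoting only promising disturbance sequences to the fine-grained simulator.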

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 129
159 Comfort Evaluation of Summer Knitted Clothes of Tencel and Cotton Fabrics

Authors: Mona Mohamed Shawkt Ragab, Heba Mohamed Darwish

Abstract:

Context: Comfort properties of garments are crucial for the wearer, and with the increasing demand for cotton fabric, there is a need to explore alternative fabrics that can offer similar or superior comfort properties. This study compares the comfort properties of tencel/cotton single jersey fabric and cotton single jersey fabric, with the aim of identifying fabrics that are more suitable for summer clothes. Research Aim: The aim of this study is to evaluate the comfort properties of tencel/cotton single jersey fabric and cotton single jersey fabric, with the goal of identifying fabrics that can serve as alternatives to cotton, considering their comfort properties for summer clothing. Methodology: An experimental, analytical approach was employed. Two circular knitting machines were used to produce the fabrics, one with a gauge of 24 and the other with a gauge of 28 (needles per inch). Both fabrics were knitted at three different loop lengths (3.05 mm, 2.9 mm, and 2.6 mm) to obtain loose, medium, and tight fabrics for evaluation. Various comfort properties, including air permeability, water vapor permeability, wickability, and thermal resistance, were measured for both fabric types. Findings: The study found a significant difference in comfort properties between tencel/cotton single jersey fabric and cotton single jersey fabric. Tencel/cotton fabric exhibited higher air permeability, water vapor permeability, and wickability than cotton fabric. These findings suggest that tencel fabric is more suitable for summer clothes due to its superior ventilation and absorption properties. Theoretical Importance: This study contributes to the exploration of alternative fabrics to cotton by evaluating their comfort properties. By identifying fabrics that offer better comfort properties than cotton, particularly in terms of water usage, the study provides valuable insights into sustainable fabric choices for the fashion industry. 
Data Collection and Analysis Procedures: The comfort properties of the fabrics were measured using appropriate testing methods. Paired comparison t-tests were conducted to determine the significant differences between tencel/cotton fabric and cotton fabric in the measured properties. Correlation coefficients were also calculated to examine the relationships between the factors under study. Question Addressed: The study addresses the question of whether tencel/cotton single jersey fabric can serve as an alternative to cotton fabric for summer clothes, considering their comfort properties. Conclusion: The study concludes that tencel/cotton single jersey fabric offers superior comfort properties compared to cotton single jersey fabric, making it a suitable alternative for summer clothes. The findings also highlight the importance of considering fabric properties, such as air permeability, water vapor permeability, and wickability, when selecting materials for garments to enhance wearer comfort. This research contributes to the search for sustainable alternatives to cotton and provides valuable insights for the fashion industry in making informed fabric choices.
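The paired-comparison t statistic used in the analysis can be sketched as follows; the fabric readings below are invented placeholders, not the study's measurements:

```python
# Paired-samples t statistic: mean of the per-specimen differences divided
# by the standard error of those differences (compare to a t critical value
# with n-1 degrees of freedom).
import math

def paired_t(a, b):
    """t statistic for paired samples a and b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical air-permeability readings for paired fabric specimens.
tencel_cotton = [620.0, 655.0, 601.0, 640.0, 633.0]
cotton        = [540.0, 560.0, 525.0, 548.0, 551.0]
t = paired_t(tencel_cotton, cotton)
print(round(t, 2))
```

Pairing the specimens (same loop length, same gauge) removes between-specimen variation from the comparison, which is why the study uses paired rather than independent-samples tests.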

Keywords: comfort properties, cotton fabric, tencel fabric, single jersey

Procedia PDF Downloads 47
158 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD reactor. Defect-selected etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF’s type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV, respectively, and the exciton gap Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx- lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced by the off-cut and by the TSD on the surface are both suggested to be two C-Si bilayers in height. 
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, the upper two bilayers of a four-bilayer step grow down and block the lower two at one intersection, generating an IGSF. Second, the step-flow growth proceeds over the IGSF successively, forming an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation at the intersection of off-cut- and TSD-introduced steps is proposed.
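The emission energies quoted above can be cross-checked against wavelengths with the standard photon relation E(eV) ≈ 1239.84 / λ(nm); a small sketch using the abstract's values (the conversion factor hc is standard, not from the paper):

```python
# Photon energy <-> wavelength conversion for the reported CL/LTPL peaks.

HC_EV_NM = 1239.84   # h*c in eV*nm

def ev_to_nm(energy_ev):
    """Wavelength in nm corresponding to a photon energy in eV."""
    return HC_EV_NM / energy_ev

# CL maximum, the four LTPL phonon replicas, and the estimated Egx.
for peak_ev in (2.431, 2.468, 2.438, 2.420, 2.410, 2.512):
    print(f"{peak_ev:.3f} eV -> {ev_to_nm(peak_ev):.1f} nm")
```

The 2.431 eV CL maximum, for instance, corresponds to roughly 510 nm, i.e., green luminescence, which is consistent with the bright triangular contrast imaged in CL.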

Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 284
157 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event

Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani

Abstract:

A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses, which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures, making it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design of both operator enclosures resistant to projectile penetration and saw chain resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine’s enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning, and standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data was collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). Saw chains used were 89-drive-link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph. Both rotational and translational kinetic energy are calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links. 
Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination having a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g corresponding to a drive link only to a maximum of 18.91 g corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s and rotational speed of approximately 14000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
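The kinetic-energy bookkeeping behind the reported figures can be sketched as follows; the mass and translational speed are taken from the abstract, while the rotational term is left as a function only, since the projectile's moment of inertia is not given here (the rotational speed is assumed to be in revolutions per second):

```python
# Translational KE = 1/2 m v^2; rotational KE = 1/2 I omega^2 with
# omega = 2*pi*f when f is given in revolutions per second.
import math

def translational_ke(mass_kg, v):
    return 0.5 * mass_kg * v ** 2

def rotational_ke(inertia, rev_per_s):
    omega = 2 * math.pi * rev_per_s     # convert rev/s to rad/s
    return 0.5 * inertia * omega ** 2

m = 18.91e-3        # heaviest observed projectile, kg
v = 270.0           # measured translational speed, m/s
ke_t = translational_ke(m, v)
print(f"translational KE: {ke_t:.0f} J")   # ~689 J for the heaviest projectile
```

For the heaviest five-link projectile the translational term alone is on the order of the ~600 J average the study reports; lighter projectiles make up the difference through the rotational term.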

Keywords: chain shot, timber harvesters, safety, testing

Procedia PDF Downloads 126
156 Immunomodulatory Role of Heat Killed Mycobacterium indicus pranii against Cervical Cancer

Authors: Priyanka Bhowmik, Subrata Majumdar, Debprasad Chattopadhyay

Abstract:

Background: Cervical cancer is the third major cause of cancer in women and the second most frequent cause of cancer-related deaths, causing 300,000 deaths annually worldwide. Evasion of the immune response by Human Papilloma Virus (HPV), the key contributing factor behind cancer and pre-cancerous lesions of the uterine cervix, makes immunotherapy a necessity to treat this disease. Objective: A heat-killed fraction of Mycobacterium indicus pranii (MIP), a non-pathogenic Mycobacterium, has been shown to exhibit cytotoxic effects on different cancer cells, including the human cervical carcinoma cell line HeLa. However, the underlying mechanisms remain unknown. The aim of this study is to decipher the mechanism of MIP-induced HeLa cell death. Methods: The cytotoxicity of Mycobacterium indicus pranii against HeLa cells was evaluated by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Apoptosis was detected by annexin V and propidium iodide (PI) staining. Reactive oxygen species (ROS) generation and cell cycle distribution were measured by flow cytometry. The expression of apoptosis-associated genes was analyzed by real-time PCR. Result: MIP could inhibit the proliferation of HeLa cells in a time- and dose-dependent manner but caused minor damage to normal cells. The induction of apoptosis was confirmed by the cell-surface presentation of phosphatidylserine, DNA fragmentation, and mitochondrial damage. MIP caused very early (as early as 30 minutes) transcriptional activation of p53, followed by a higher activation (32-fold) at 24 hours, suggesting the prime importance of p53 in MIP-induced apoptosis in HeLa cells. The upregulation of the p53-dependent pro-apoptotic genes Bax, Bak, PUMA, and Noxa followed a lag phase that was required for the transcriptional p53 program. MIP also caused transcriptional upregulation of Toll-like receptors 2 and 4 after 30 minutes of MIP treatment, suggesting recognition of MIP by Toll-like receptors.
Moreover, MIP caused the inhibition of expression of the HPV anti-apoptotic gene E6, which is known to interfere with the p53/PUMA/Bax apoptotic cascade. This inhibition might have played a role in the transcriptional upregulation of PUMA and, subsequently, apoptosis. ROS was generated transiently, concomitant with the highest transcriptional activation of p53, suggesting a plausible feedback-loop network of p53 and ROS in the apoptosis of HeLa cells. A ROS scavenger, N-acetyl-L-cysteine, decreased apoptosis, suggesting ROS is an important effector of MIP-induced apoptosis. Conclusion: Taken together, MIP possesses full potential to be a novel therapeutic agent in the clinical treatment of cervical cancer.

Keywords: cancer, mycobacterium, immunity, immunotherapy

Procedia PDF Downloads 229
155 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented from the pressure fluctuations term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, recent research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuations PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than most machine learning approaches.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
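The authors work in R (e.g., with an AIC-guided stepwise procedure); a minimal Python analogue of forward stepwise selection is sketched below. All variable names and the synthetic data are illustrative, not from the paper; the point is only to show how feature selection by a penalized fit criterion filters out uncorrelated inputs.

```python
import numpy as np

def forward_stepwise(X, y):
    """Greedy forward selection: repeatedly add the feature that most
    reduces an AIC-style criterion; stop when no feature improves it."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def aic(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ beta) ** 2))
        return n * np.log(rss / n + 1e-12) + 2 * (len(cols) + 1)

    best = aic(selected)
    while remaining:
        score, col = min((aic(selected + [c]), c) for c in remaining)
        if score >= best:          # no candidate improves the criterion
            break
        best = score
        selected.append(col)
        remaining.remove(col)
    return selected

# Synthetic check: the response depends only on columns 0 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + 3.0 * X[:, 2] + 0.01 * rng.normal(size=200)
chosen = forward_stepwise(X, y)
```

On this toy data, the informative columns are selected first and the redundant ones are (mostly) screened out, which is the behavior exploited in the study.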

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 114
154 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, a NI 4461 data acquisition (DAQ) card, an impedance matching transformer, to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, which is 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card.
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to have repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz and 92 kHz bandwidth, was chosen to take the measurements to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties at both high and low magnetisation regimes. This is because of the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications.
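The two calculations described above are standard: H follows from the primary current (shunt voltage over shunt resistance) and the magnetic path length, while B follows from time-integration of the induced secondary voltage. The sketch below uses the turns counts, shunt value, strip cross-section and sampling rate from the abstract; the magnetic path length is an illustrative assumption, and the secondary waveform is synthetic.

```python
import math

# Parameters from the abstract; the path length L_M is an assumption.
N_P, N_S = 100, 500          # primary / secondary turns
R_SHUNT = 4.7                # shunt resistance, ohm
AREA = 30e-3 * 0.27e-3       # strip cross-section, m^2
L_M = 0.305                  # magnetic path length, m (assumed)
F = 50.0                     # magnetising frequency, Hz

def h_field(v_shunt):
    """Field strength H = N_p * i / l_m from shunt-voltage samples."""
    return [N_P * v / (R_SHUNT * L_M) for v in v_shunt]

def b_field(v_sec, dt):
    """Flux density B = (1/(N_s*A)) * integral of v_sec dt (trapezoidal)."""
    b, acc = [0.0], 0.0
    for k in range(1, len(v_sec)):
        acc += 0.5 * (v_sec[k] + v_sec[k - 1]) * dt
        b.append(acc / (N_S * AREA))
    return b

# Synthetic secondary voltage corresponding to a 1.5 T peak sine flux density.
dt = 1.0 / 204800.0                          # 204.8 kHz sampling
omega = 2 * math.pi * F
v0 = 1.5 * N_S * AREA * omega                # induced-voltage amplitude
v_sec = [v0 * math.cos(omega * k * dt) for k in range(4096)]
b = b_field(v_sec, dt)                       # peak recovers ~1.5 T
```

Integrating one full 50 Hz period of the synthetic waveform recovers the 1.5 T peak used to generate it, which is the same reconstruction the LabVIEW program performs on acquired data.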

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 270
153 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque

Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya

Abstract:

The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads such as gravity and spring loads, using non-zero-free-length springs with child–parent connections and no auxiliary links. Application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages using minimization of potential energy variance. It uses the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in actuator torque requirement with practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where the parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are lower in cost and facilitate the development of low-cost devices. Although the method provides only an approximate balancing, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement.
The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, free-length of the spring and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resultant arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
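The variance-minimization idea can be illustrated on a single gravity-loaded link with one non-zero-free-length spring; this is a simplified sketch, not the authors' multi-link formulation, and all numerical values (link mass, geometry, search grids) are illustrative assumptions. A coarse grid search stands in for the optimization used in the paper.

```python
import math

M, G, R = 2.0, 9.81, 0.3     # link mass (kg), gravity, CoM distance (m)
A_PT = 0.25                  # spring attachment distance along link (m)
THETAS = [math.radians(t) for t in range(0, 181, 5)]   # workspace sweep

def potential_energy(theta, k, l0, h):
    """Gravity PE plus PE of a non-zero-free-length spring running from a
    parent point at height h to a point on the link (theta from horizontal)."""
    grav = M * G * R * math.sin(theta)
    dx = A_PT * math.cos(theta)
    dy = A_PT * math.sin(theta) - h
    stretch = math.hypot(dx, dy) - l0
    return grav + 0.5 * k * stretch ** 2

def pe_variance(k, l0, h):
    """Variance of total PE over the workspace: zero means exact balance."""
    pe = [potential_energy(t, k, l0, h) for t in THETAS]
    mean = sum(pe) / len(pe)
    return sum((p - mean) ** 2 for p in pe) / len(pe)

# Grid search over spring constant, free length and anchor height.
best = min(
    (pe_variance(k, l0, h), k, l0, h)
    for k in [100 * i for i in range(1, 31)]
    for l0 in [0.01 * i for i in range(0, 11)]
    for h in [0.05 * i for i in range(1, 11)]
)
```

The optimized spring flattens the potential energy over the workspace, reducing its variance by orders of magnitude relative to the unsprung (gravity-only) case, which is exactly the effect that lowers the actuator torque requirement.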

Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot

Procedia PDF Downloads 223
152 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and it is possible to show a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water; for the oils tested they were 1.8 GPM, 2.5 GPM, and 3.8 GPM, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, across fluids it diminished as viscosity increased.
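The Grid Convergence Index used to validate the mesh follows the standard Richardson-extrapolation procedure (Roache). The sketch below shows the calculation on three uniformly refined grids; the solution values are illustrative, not the study's data.

```python
import math

def gci_fine(f1, f2, f3, r, fs=1.25):
    """GCI on the finest of three meshes with uniform refinement ratio r.
    f1, f2, f3 are the fine-, medium- and coarse-grid solution values."""
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)   # observed order
    eps = abs((f2 - f1) / f1)                                # relative change
    return fs * eps / (r ** p - 1), p

# Illustrative solution values on fine, medium and coarse meshes (r = 2).
gci, order = gci_fine(1.00, 1.02, 1.06, r=2.0)   # gci = 0.025, i.e. 2.5%
```

A GCI of a few percent, as reported in the abstract, indicates that the fine-grid solution is effectively mesh-independent for the quantity monitored.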

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 105
151 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD: medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood biomarkers dataset of The Alzheimer’s Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), to perform data analysis and develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks, and decision trees to validate biomarkers in SAS Enterprise Miner. This study used data from ADNI containing 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients with Alzheimer’s disease (AD). Participants’ samples were separated into two groups: healthy versus MCI and healthy versus AD, respectively. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUCs for the two groups were 0.991 and 0.709, respectively.
We stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study on the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
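The t-test screening step described above can be sketched as follows. This is an illustrative, dependency-free version that keeps features whose Welch t statistic exceeds a threshold (a stand-in for the p-value cutoff the authors would have used in SAS); all data and names are synthetic, not from ADNI.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def filter_features(group_a, group_b, t_cut=2.0):
    """Keep biomarker columns whose |t| between groups exceeds t_cut."""
    keep = []
    for j in range(len(group_a[0])):
        t = welch_t([row[j] for row in group_a], [row[j] for row in group_b])
        if abs(t) > t_cut:
            keep.append(j)
    return keep

# Toy data: biomarker 0 separates the groups, biomarker 1 does not.
healthy = [[1.0, 5.0], [1.2, 4.9], [0.9, 5.1], [1.1, 5.0]]
patients = [[3.0, 5.0], [3.2, 5.1], [2.9, 4.9], [3.1, 5.0]]
kept = filter_features(healthy, patients)
```

Only the discriminative biomarker survives the filter, mirroring the 41/47 reduction reported before model building.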

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 297
150 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa

Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu

Abstract:

After the Fukushima-Daiichi accident, the development of accident-tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can guarantee the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports fail to provide an in-depth explanation of the failure causes of ZrO₂ coatings. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and corresponding calculations were performed in Materials Studio software. Adsorption energies and dynamics parameters were calculated with the Energy task and the Dynamics task of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating corrosion performance in pure water (namely 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection, with the occurrence of localized corrosion at low LiOH concentrations. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation.
In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually includes 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work can provide some references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promote the in-pile application of ZrO₂ coatings.

Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration

Procedia PDF Downloads 32
149 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-Turbine Generators (MTG) are small-size power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must contain, in addition, a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the full MTG power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components.
The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high frequency component can be filtered out by a low-pass filter leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional power stage can be used. The proposed technology is also applicable to other high rotation generator sets such as aircraft power units.
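The sum/difference mixing described above is just the product-to-sum identity sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)]. The toy check below multiplies two sinusoids and measures the two resulting spectral components with a naive single-bin DFT; the frequencies and sample rate are illustrative, not taken from an actual MTG.

```python
import cmath
import math

FS = 20000.0                  # sample rate, Hz
N = 4000                      # 0.2 s of samples

def bin_amplitude(signal, f):
    """Amplitude of the f-Hz component via a single-bin DFT correlation."""
    s = sum(x * cmath.exp(-2j * math.pi * f * k / FS)
            for k, x in enumerate(signal))
    return 2 * abs(s) / len(signal)

# A 2050 Hz 'stator' sinusoid mixed with a 2000 Hz 'excitation' sinusoid:
# the product contains a 50 Hz difference term and a 4050 Hz sum term.
sig = [math.sin(2 * math.pi * 2050 * k / FS) *
       math.sin(2 * math.pi * 2000 * k / FS) for k in range(N)]
a_diff = bin_amplitude(sig, 50.0)
a_sum = bin_amplitude(sig, 4050.0)
```

Both components appear with equal amplitude; a low-pass filter then removes the high-frequency sum term and leaves only the 50 Hz difference component, as the abstract describes.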

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 212
148 Effect of Plant Growth Promoting Rhizobacteria on the Germination and Early Growth of Onion (Allium cepa)

Authors: Dragana R. Stamenov, Simonida S. Djuric, Timea Hajnal Jafari

Abstract:

Plant growth promoting rhizobacteria (PGPR) are a heterogeneous group of bacteria that can be found in the rhizosphere, at root surfaces and in association with roots, enhancing the growth of the plant directly and/or indirectly. Increased crop productivity associated with the presence of PGPR has been observed in a broad range of plant species, such as raspberry, chickpeas, legumes, cucumber, eggplant, pea, pepper, radish, tobacco, tomato, lettuce, carrot, corn, cotton, millet, bean, cocoa, etc. However, until now there has been little research on the influence of PGPR on the growth and yield of onion. Onion (Allium cepa L.), of the Liliaceae family, is a species of great economic importance, widely cultivated all over the world. The aim of this research was to examine the influence of the plant growth promoting bacteria Pseudomonas sp. Dragana, Pseudomonas sp. Kiš, Bacillus subtilis and Azotobacter sp. on the seed germination and early growth of onion (Allium cepa). PGPR Azotobacter sp., Bacillus subtilis, Pseudomonas sp. Dragana, Pseudomonas sp. Kiš, from the collection of the Faculty of Agriculture, Novi Sad, Serbia, were used as inoculants. The number of cells in 1 ml of the inoculum was 10⁸ CFU/ml. The control variant was not inoculated. The effect of PGPR on seed germination and hypocotyl length of Allium cepa was evaluated in controlled conditions, on filter paper in the dark at 22°C, while the effect on plant length and mass was evaluated in semi-controlled conditions, in 10 l vegetative pots. Seed treated with fungicide and untreated seed were used. After seven days the percentage of germination was determined. After seven and fourteen days hypocotyl length was measured. Fourteen days after germination, length and mass of plants were measured. Application of Pseudomonas sp. Dragana and Kiš and Bacillus subtilis had a negative effect on onion seed germination, while the use of Azotobacter sp. gave positive results.
On average, application of all investigated inoculants had a positive effect on the measured parameters of plant growth. Azotobacter sp. had the greatest effect on hypocotyl length and on the length and mass of the plant. On average, better results were achieved with untreated seeds compared with treated seeds. The results of this study show that PGPR can be used in the production of onion.

Keywords: germination, length, mass, microorganisms, onion

Procedia PDF Downloads 205
147 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationale behind the scenarios is also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, the recent convexification techniques for the non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators for the HIL experiments. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
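The three MPC design elements named above (prediction model, constraints, cost) can be made concrete with a minimal receding-horizon sketch. This is an illustrative unconstrained case on a double-integrator model with assumed weights, so the QP collapses to a linear solve; it is not the constrained spacecraft formulation of the paper.

```python
import numpy as np

# Double-integrator prediction model x = [position, velocity] (illustrative).
DT = 0.1
A = np.array([[1.0, DT], [0.0, 1.0]])
B = np.array([[0.5 * DT ** 2], [DT]])
HORIZON, R_WEIGHT = 10, 0.1

def mpc_step(x):
    """One receding-horizon step: minimise sum ||x_k||^2 + R*u_k^2 over the
    horizon. With no constraints the QP reduces to a linear system."""
    n, m = 2, 1
    # Stacked prediction: X = Phi @ x + Gamma @ U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(HORIZON)])
    Gamma = np.zeros((n * HORIZON, m * HORIZON))
    for i in range(HORIZON):
        for j in range(i + 1):
            Gamma[n * i:n * i + n, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
    # Normal equations of min ||Phi x + Gamma U||^2 + R ||U||^2
    H = Gamma.T @ Gamma + R_WEIGHT * np.eye(m * HORIZON)
    U = np.linalg.solve(H, -Gamma.T @ Phi @ x)
    return U[0]                       # apply only the first input

x = np.array([[1.0], [0.0]])          # start 1 m off target, at rest
for _ in range(80):
    u = mpc_step(x)
    x = A @ x + B * u                 # plant update
```

Re-solving at every step with only the first input applied is the receding-horizon mechanism; the constrained spacecraft versions add the linear-inequality and convexified geometric constraints to this same optimization.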

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 87
146 Intervention Program for Emotional Management in Disruptive Situations Through Self-Compassion and Compassion

Authors: M. Bassas, J. Grané-Morcillo, J. Segura, J.M. Soldevila

Abstract:

Mental health prevention is key in a society where, according to the World Health Organization, suicide is the fourth leading cause of death among young people. Compassion is closely linked to personal growth, which shows once again that therapies based on prevention remain an urgent social need. In this sense, a growing body of research demonstrates how cultivating a compassionate mind can help alleviate and prevent a variety of psychological problems. In the early 21st century, there has been a boom in third-generation compassion-based therapies, although there is a lack of empirical evidence of their efficacy. This study proposes a psychotherapy method (the ‘Being Method’), whose central axis revolves around emotional management through the cultivation of compassion. Therefore, the objective of this research was to analyze the effectiveness of this method with regard to the emotional changes experienced when we focus on what concerns us through the filter of compassion. The Being Method was born from the influence of Buddhist philosophy and contemporary psychology based mainly on Western rationalist currents. A quantitative study was carried out in a sample of women between 18 and 53 years old (n=47; Mage=36.02; SDage=11.86) interested in personal growth, in which the following six measuring instruments were administered: the Peace of Mind Scale (PoM), the Rosenberg Self-Esteem Scale (RSES), the Subjective Happiness Scale (SHS), two scales of the Compassionate Engagement and Action Scales (CAES), the Coping Response Inventory for Adults (CRI-A) and the Cognitive-Behavioral Strategies Evaluation Scale (MOLDES). Following an experimental approach, participants were divided into an experimental and a control group. Longitudinal analysis was also carried out through a pre-post program comparison.
Pre-post comparison outcomes indicated significant differences (p<.05) between before and after the therapy in the variables Peace of Mind, Self-esteem, Happiness, Self-compassion (A-B) and Compassion (A-B), in several mental molds, as well as in several coping strategies. Between-group tests also showed significantly higher means in the experimental group. Thus, these outcomes highlight the effectiveness of the therapy, which improved all the analyzed dimensions. The social, clinical and research implications are discussed.

Keywords: being method, compassion, effectiveness, emotional management, intervention program, personal growth therapy

Procedia PDF Downloads 8
145 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content

Authors: Moses Kolade Ogun, Ina Korner

Abstract:

To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated by the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital intensive due to the energy required for dewatering and the need for a complementary fuel source due to the low calorific value of DS. This could be replaced by a biotechnological approach. This study, therefore, investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. Dewatered DS (solid fraction) samples from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid fraction and the liquid fraction were mixed in proportion to realize DS with different water contents (55-91% fresh mass). Spiked samples of DS using deionized water, cellulose and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0-40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions and ran for 21 days. Specific biogas potential in the range of 133-230 NL/kg organic dry matter was observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL-biogas/fresh mass). By comparing the absolute biogas potential curve and the specific biogas potential curve, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified.
This degree of dewatering is a compromise among factors such as biogas yield, reactor size, energy required for dewatering, and operating cost. No inhibitory influence on the biogas potential of DS was observed due to its reportedly high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to offset the high C/N ratio of DS, could increase biogas yield.
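The specific-versus-absolute trade-off described above can be sketched numerically. The snippet below linearly interpolates the specific biogas potential between the two endpoints reported in the abstract (133 NL/kg oDM at 55% water, 230 NL/kg oDM at 91% water) and converts it to an absolute potential per kilogram of fresh mass; the linear shape and the 60% organic share of dry matter are illustrative assumptions, not the study's fitted model.

```python
def specific_biogas(w):
    """Specific biogas potential (NL/kg oDM) vs. water content w (fresh-mass
    fraction), linearly interpolated between the abstract's endpoints.
    The linear shape is an illustrative assumption, not the study's model."""
    return 133.0 + (230.0 - 133.0) * (w - 0.55) / (0.91 - 0.55)

def absolute_biogas(w, organic_dm_fraction=0.6):
    """Absolute biogas potential (NL/kg fresh mass): specific potential times
    the organic dry-matter content of the fresh sludge. The 60% organic share
    of dry matter is a hypothetical placeholder value."""
    dry_matter = 1.0 - w                      # dry-matter fraction of fresh mass
    return specific_biogas(w) * dry_matter * organic_dm_fraction

# Scan the dewatering range studied (55-91% water content).
for w in [0.55, 0.70, 0.91]:
    print(f"water {w:.0%}: specific {specific_biogas(w):6.1f} NL/kg-oDM, "
          f"absolute {absolute_biogas(w):5.1f} NL/kg-FM")
```

As in the study, the two curves move in opposite directions as water content rises, which is why the reported 70% optimum is a compromise rather than a maximum of either curve alone.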

Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content

Procedia PDF Downloads 135
144 Design Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions

Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek

Abstract:

The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ Scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis, active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict response to the NASA-defined random vibration spectrum and drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow loop testing measurements. 
Dynamic response of the system to valve changeovers is presented and discussed using high bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
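The rotor-position control task described above can be illustrated with a toy model. An active magnetic bearing is open-loop unstable (the magnets exert a destabilising negative stiffness), and a position feedback loop restores stability; the sketch below simulates a single axis with a simple PD law, whereas the real controller is a full five-axis system. All numerical values are hypothetical placeholders, not the flight hardware's parameters.

```python
# Minimal 1-DOF sketch of active magnetic bearing levitation: negative
# stiffness k_s pushes the rotor away from centre, and a PD position loop
# commands a restoring actuator current. Symplectic Euler integration.
m, k_s, k_i = 2.0, 4.0e5, 50.0      # rotor mass [kg], neg. stiffness [N/m], current gain [N/A]
k_p, k_d = 2.0e4, 40.0              # PD gains [A/m], [A/(m/s)] (hypothetical)
x, v, dt = 50e-6, 0.0, 1e-5         # 50 um initial offset (e.g. a changeover transient)

for _ in range(20000):              # 0.2 s of simulated time
    i = -(k_p * x + k_d * v)        # control current commanded by the PD loop
    a = (k_s * x + k_i * i) / m     # destabilising force + actuator force
    v += a * dt
    x += v * dt

print(f"final offset: {abs(x) * 1e6:.6f} um")  # rotor pulled back toward centre
```

Stability requires the loop stiffness k_i·k_p to exceed the negative stiffness k_s (here 1.0e6 vs. 4.0e5 N/m), which mirrors the load-capacity and control-bandwidth margins the paper verifies experimentally.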

Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration

Procedia PDF Downloads 106
143 Walkability with the Use of Mobile Apps

Authors: Dimitra Riza

Abstract:

This paper examines different ways of exploring a city on foot using smartphone applications, and the way this new attitude changes our perception of the urban environment. By referring to various examples of such applications, we consider the options and possibilities that open up with new technologies, their advantages and disadvantages, as well as ways of experiencing and interpreting the urban environment. The widespread use of smartphones gives access to information, maps, knowledge, etc. at all times and places. City tourism marketing takes advantage of this and promotes a city's attractions through technology. Mobile-mediated walking tours provide new possibilities and modify the way we explore cities, for instance by giving directions to destinations, displaying our exact location on the map, or letting us create our own tours by picking points of interest and interconnecting them into a route. These apps are interactive, as they filter the user's interests, movements, etc. Discovering a city on foot and visiting interesting sites and landmarks has become very easy and has been revolutionized by navigational and other applications. In contrast to the re-invention of the city suggested by Baudelaire's flâneur in the 19th century, or to the construction of situations by the Situationists in the 1960s, the new technological means do not allow people to "get lost", as they follow and record our moves. In the case of strolling or drifting around the city, the option of "getting lost" is desired, as the goal is not wayfinding or the destination but the experience of walking itself. Getting lost is not always about dislocation; it is about a feeling of being free in the urban environment while experiencing it. 
So, on the one hand, walking is a physical and embodied experience, as the observer becomes an actor and participates with all his senses in the city's activities. On the other hand, the use of a screen turns out to be a disembodied experience of the urban environment, as we perceive it in a fragmented and distanced way. Our relation to the city resembles that of Alberti's isolated viewer, detached from any urban stage. The smartphone, even when we are present, acts as a mediator: we interact directly with it and only indirectly with the environment. Contrary to the flâneur and the Situationists, who discovered the city with their own bodies, today the body itself is detached from that experience. While contemporary cities are becoming more walkable, new technological applications open up possibilities to explore them by suggesting multiple routes. Exploration becomes easier, but perception changes.

Keywords: body, experience, mobile apps, walking

Procedia PDF Downloads 382
142 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned in dual energy mode on a Siemens Somatom Force at 80 kV/Sn150 kV and 100 kV/Sn150 kV kilovoltage pairings. The same vials were scanned in spectral mode on a Philips iQon at 120 kVp and 140 kVp. Images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the dual energy data and Philips Intellispace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual layer CT scanner. The dual source images showed a greater degree of deviation in measured iodine density for all vials, although the dataset acquired at 80 kV/Sn150 kV had higher accuracy. 
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.
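The intraclass correlation analysis used above can be sketched as follows: ICC(2,1) (two-way random, absolute agreement, single measure) is computed from the ANOVA mean squares, treating the known concentration and one scanner's ROI measurement as two "raters" scoring each vial. The concentration values below are hypothetical illustrations, not the study's data, and ICC(2,1) is an assumed choice of ICC form since the abstract does not specify one.

```python
def icc_2_1(data):
    """Two-way random, absolute-agreement, single-measure ICC(2,1) from the
    ANOVA mean squares. `data` is a list of per-vial rows, one column per
    'rater' (here: true concentration and a measured value)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # rows (vials)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # columns (raters)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: true iodine density (mg/ml) vs. scanner ROI readings.
true_vals = [0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 15.0]
measured  = [0.6, 1.1, 1.9, 4.2, 6.8, 10.3, 14.6]
print(f"ICC(2,1) = {icc_2_1(list(zip(true_vals, measured))):.3f}")
```

An ICC close to 1, as in this toy example, corresponds to the "high degree of accuracy" the study reports for the dual layer scanner; larger measurement deviations pull the ICC down.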

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 95
141 Investigating the Indoor Air Quality of the Respiratory Care Wards

Authors: Yu-Wen Lin, Chin-Sheng Tang, Wan-Yi Chen

Abstract:

Various biological specimens, drugs, and chemicals exist in the hospital. Medical staff and hypersensitive inpatients might be exposed to multiple hazards while they work or stay there. Therefore, the indoor air quality (IAQ) of the hospital deserves attention. Respiratory care wards (RCW) care for patients who cannot breathe spontaneously without ventilators, and these patients are easily infected. Compared to the bacteria concentrations of other hospital units, RCWs showed higher values in previous studies. This research monitored the IAQ of an RCW and checked compliance with the indoor air quality standards of the Taiwan Indoor Air Quality Act. Meanwhile, the factors influencing IAQ and the impacts of the ventilator modules, with humidifier or with filter, were investigated. The IAQ of two five-bed wards and one nurse station of an RCW in a regional hospital was monitored. Monitoring proceeded for 16 or 24 hours on each sampling day, with a sampling frequency of 20 minutes per hour. Monitoring was performed for two days in a row, and the IAQ of the RCW was measured for eight days in total. The concentrations of carbon dioxide (CO₂), carbon monoxide (CO), particulate matter (PM), nitrogen oxides (NOₓ), and total volatile organic compounds (TVOCs), together with relative humidity (RH) and temperature, were measured by direct-reading instruments. Bioaerosol samples were taken hourly. The hourly air change rate (ACH) was calculated by measuring the air ventilation volume. Human activities were recorded during the sampling period. A linear mixed model (LMM) was applied to identify the factors affecting IAQ. The concentrations of CO, CO₂, PM, bacteria, and fungi exceeded the Taiwan IAQ standards. The major factors affecting the concentrations of CO, PM₁ and PM₂.₅ were location and the number of inpatients. 
The significant factors altering the CO₂ and TVOC concentrations were location and the numbers of in-and-out staff and inpatients. The number of in-and-out staff and the level of activity statistically affected the PM₁₀ concentrations. The level of activity and the numbers of in-and-out staff and inpatients were the significant factors in changing the bacteria and fungi concentrations. Different models of the patients' ventilators did not affect the IAQ significantly. The results of the LMM can be used to predict pollutant concentrations under various environmental conditions and would be a valuable reference for the air quality management of RCWs.
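The core idea behind the linear mixed model used above, a random intercept for each sampling location plus residual within-location noise, can be sketched with its simplest building block: a one-way variance-components fit by the ANOVA method of moments. The CO₂ readings below are hypothetical illustrations, not the study's measurements, and a full LMM would add the fixed effects (staff counts, activity level) the study tests.

```python
def variance_components(groups):
    """Between-group and within-group variance estimates for a balanced
    one-way random-effects layout (ANOVA method of moments)."""
    k = len(groups)                        # number of locations
    n = len(groups[0])                     # readings per location (balanced)
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma2_between = max((msb - msw) / n, 0.0)   # truncate negative estimates
    return sigma2_between, msw

co2_ppm = [                                # hypothetical: ward A, ward B, nurse station
    [980, 1010, 995, 1005],
    [1100, 1080, 1120, 1095],
    [850, 840, 865, 855],
]
sb2, sw2 = variance_components(co2_ppm)
print(f"between-location variance: {sb2:.1f}, within-location: {sw2:.1f}")
print(f"share of variance due to location: {sb2 / (sb2 + sw2):.2f}")
```

A large between-location share, as here, is what makes "location" a significant factor in the fitted model, consistent with the study's finding that location drives CO₂ and TVOC levels.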

Keywords: respiratory care ward, indoor air quality, linear mixed model, bioaerosol

Procedia PDF Downloads 86
140 Development of Loop Mediated Isothermal Amplification (Lamp) Assay for the Diagnosis of Ovine Theileriosis

Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali

Abstract:

Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in both developed and developing regions owing to the low economic value of sheep and the low-to-moderate infection levels in small ruminant herds. Across Asia, prevalence surveys have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of existing control plans. Most of the present work is therefore directed at developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field, since timely diagnosis of this disease will reduce the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in District Jhang using the conventional method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from District Jhang, Punjab, Pakistan. We found an overall prevalence of theileriosis of 9.1% (32/350) in sheep by the Giemsa staining technique, 13.7% (48/350) by PCR, 16% (56/350) by qPCR, and 17.1% (60/350) by the LAMP technique. The specificity and sensitivity of LAMP were also calculated relative to PCR. Thus, LAMP detected the most positive animals, there was little difference between the positive results of PCR and qPCR, and the fewest positives were found by the conventional Giemsa staining method. 
Regarding the specificity and sensitivity of LAMP compared with PCR, cross-tabulation showed a sensitivity of 94.4% and a specificity of 78%. Advances in science must rest on reality-based ideas that can close the gaps and remove the hurdles in the way of research, and LAMP is one such technique. It is a valuable biological diagnostic tool that has greatly aided the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, expose personnel to considerable danger, whereas a molecular diagnostic technique like LAMP avoids such pathogen exposure. A prompt presumptive diagnosis can be made using LAMP. Compared with LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in molecular diagnostics, making it accessible to poor and developing countries.
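The apparent prevalences and the cross-tabulation metrics above can be reproduced with a short calculation. The positive counts per method come from the abstract; the 2×2 counts used for sensitivity and specificity are hypothetical placeholders, since the abstract reports only the resulting percentages, not the underlying table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity of an index test (e.g. LAMP) against a
    reference test (e.g. PCR) from a 2x2 cross-tabulation."""
    sensitivity = tp / (tp + fn)   # reference-positives the index test catches
    specificity = tn / (tn + fp)   # reference-negatives it correctly clears
    return sensitivity, specificity

# Apparent prevalence by each diagnostic method (counts from the abstract).
for method, positives in [("Giemsa", 32), ("PCR", 48), ("qPCR", 56), ("LAMP", 60)]:
    print(f"{method}: {positives}/350 = {positives / 350:.1%}")

# Hypothetical 2x2 counts for illustration only.
sens, spec = diagnostic_metrics(tp=45, fp=15, fn=3, tn=287)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

Note that 48/350 rounds to 13.7%, not 13%; the prevalence gradient (Giemsa < PCR < qPCR < LAMP) is exactly the ranking of test sensitivity the abstract describes.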

Keywords: distribution, theileria, LAMP, primer sequences, PCR

Procedia PDF Downloads 82
139 Enhancing Social Well-Being in Older Adults Through Tailored Technology Interventions: A Future Systematic Review

Authors: Rui Lin, Jimmy Xiangji Huang, Gary Spraakman

Abstract:

This forthcoming systematic review will underscore the imperative of leveraging technology to mitigate social isolation in older adults, particularly in the context of unprecedented global challenges such as the COVID-19 pandemic. With the continual evolution of technology, it becomes crucial to scrutinize the efficacy of interventions and discern how they can alleviate social isolation and augment social well-being among the elderly. This review will strive to clarify the best methods for older adults to utilize cost-effective and user-friendly technology and will investigate how the adaptation and execution of such interventions can be fine-tuned to maximize their positive outcomes. The study will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to filter pertinent studies. We foresee analyzing the selected articles and executing a narrative analysis to discover themes and indicators related to quality of life, technology use, and well-being. The review will examine how involving older adults at the community level, applying best practices from community-based participatory research, can establish efficient strategies to implement technology-based interventions designed to diminish social isolation and boost digital use self-efficacy. Applications based on mobile technology and virtual platforms are set to assume a crucial role not only in enhancing connections within families but also in connecting older adults to vital healthcare resources, fostering both physical and mental well-being. The review will investigate how technological devices and platforms can address the cognitive, visual, and auditory requirements of older adults, thus strengthening their confidence and proficiency in digital use, a crucial factor during enforced social distancing or self-isolation periods during pandemics. 
This review will endeavor to provide insights into the multifaceted benefits of technology for older adults, focusing on how tailored technological interventions can be a beacon of social and mental wellness in times of social restrictions. It will contribute to the growing body of knowledge on the intersection of technology and elderly well-being, offering nuanced understandings and practical implications for developing user-centric, effective, and inclusive technological solutions for older populations.

Keywords: older adults, health service delivery, digital health, social isolation, social well-being

Procedia PDF Downloads 33
138 Optimisation of Energy Harvesting for a Composite Aircraft Wing Structure Bonded with Discrete Macro Fibre Composite Sensors

Authors: Ali H. Daraji, Ye Jianqiao

Abstract:

The micro-electrical devices of wireless sensor networks are continuously being developed and have become very small and compact, with low power requirements met by conventional batteries of limited service life. The low power requirement of these devices, along with the cost of conventional batteries and their replacement, has encouraged researchers to find an alternative power supply, represented by energy harvesting systems that can provide electric power over an effectively unlimited lifetime. In the last few years, investigation of energy harvesting for structural health monitoring has increased, aiming to power wireless sensor networks by converting waste mechanical vibration into electricity using piezoelectric sensors. Optimisation of energy harvesting is an important research topic to ensure an efficient flow of electric power from structural vibration. The harvested power mainly depends on the properties of the piezoelectric material, the dimensions of the piezoelectric sensor, its position on the structure, and the value of the external electric load connected between the sensor electrodes. A larger sensor surface area does not guarantee greater harvested power when the sensor area covers positive and negative mechanical strain at the same time, which leads to a reduction or cancellation of the piezoelectric output power. Optimisation of energy harvesting is therefore achieved by locating these sensors precisely and efficiently on the structure. Limited published work has investigated energy harvesting for aircraft wings, and most published studies have simplified the aircraft wing structure to a cantilever flat plate or beam. In these studies, the optimisation of energy harvesting was investigated by determining the optimal value of an external electric load connected between the sensor electrode terminals, by an external electric circuit, or by randomly splitting the piezoelectric sensor into two segments. 
However, aircraft wing structures are more complex than a beam or flat plate, being mostly constructed from flat and curved skins stiffened by stringers and ribs, with more complex mechanical strain induced on the wing surfaces. In this work, an aircraft wing structure bonded with discrete macro fibre composite sensors was modelled using multiphysics finite element analysis to optimise the energy harvesting by determining the optimal number of sensors, their locations, and the output resistance load. The optimal number and locations of macro fibre sensors were determined based on maximization of the open- and closed-loop sensor output voltage using frequency response analysis. Different optimal distributions, locations, and numbers of sensors were found on the top and bottom surfaces of the aircraft wing.
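The "optimal output resistance load" step above can be illustrated with the simplest circuit model of a piezoelectric harvester: a sinusoidal source behind the sensor capacitance Cp feeding a resistive load R, for which the transferred power peaks at R_opt = 1/(ω·Cp). The capacitance, frequency, and voltage values below are illustrative placeholders, not properties of the study's MFC sensors.

```python
import math

def load_power(R, Cp, freq_hz, v_oc=5.0):
    """Average power delivered to load R by an open-circuit voltage amplitude
    v_oc behind source capacitance Cp at the excitation frequency."""
    w = 2 * math.pi * freq_hz
    # Voltage divider between R and the capacitive source impedance 1/(jwC):
    vr2 = v_oc ** 2 * (w * Cp * R) ** 2 / (1 + (w * Cp * R) ** 2)
    return vr2 / (2 * R)                    # average power of a sinusoid

Cp, f = 60e-9, 50.0                         # hypothetical: 60 nF sensor, 50 Hz vibration
r_opt = 1 / (2 * math.pi * f * Cp)          # analytic optimum, ~53 kOhm here
candidates = [r_opt * s for s in (0.25, 0.5, 1.0, 2.0, 4.0)]
best = max(candidates, key=lambda R: load_power(R, Cp, f))
print(f"analytic optimum {r_opt / 1e3:.0f} kOhm, best sampled {best / 1e3:.0f} kOhm")
```

Sweeping the load and picking the power maximum, as done here numerically, is the lumped-circuit counterpart of the load optimisation the study performs within its finite element model.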

Keywords: energy harvesting, optimisation, sensor, wing

Procedia PDF Downloads 282