Search results for: perceptual linear prediction (PLP’s)
233 Photoemission Momentum Microscopy of Graphene on Ir(111)
Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense
Abstract:
Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative implementation of ARPES that uses full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E,kx,ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range of 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands at all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors are investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular. This clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines is visible at energies around 22 eV that is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
Keywords: band structure, graphene, momentum microscopy, LDAD
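Since the abstract describes the LDAD as the intensity difference between the p- and s-polarized measurements, a normalized asymmetry can be formed directly from the two recorded 3D data stacks. The following is a minimal post-processing sketch (not the authors' code); the array names, shapes and the normalized form of the asymmetry are illustrative assumptions.

```python
import numpy as np

def ldad_asymmetry(stack_p: np.ndarray, stack_s: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Normalized intensity difference between p- and s-polarized I(E, kx, ky) stacks.

    Both inputs are assumed to share the same (E, kx, ky) binning, e.g. shape (n_E, n_kx, n_ky).
    """
    total = stack_p + stack_s
    return (stack_p - stack_s) / np.where(total > eps, total, eps)

# Illustrative usage with synthetic stacks (10 energy slices, 256 x 256 k-space pixels)
rng = np.random.default_rng(0)
I_p = rng.random((10, 256, 256))
I_s = rng.random((10, 256, 256))
ldad = ldad_asymmetry(I_p, I_s)   # values in [-1, 1]; the sign encodes the dichroism
```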
Procedia PDF Downloads 340
232 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, and so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advances in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Møller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through the medium on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. In conclusion, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations, and insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are a nice example of a theoretical calculation approach that could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
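As a rough illustration of how such a permeation event can be detected from an MD trajectory, the sketch below tracks the z-offset of the solute's centre of mass from the bilayer midplane. The file names, residue names and the use of the MDAnalysis library are illustrative assumptions, not details taken from the study.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical topology/trajectory files and selections
u = mda.Universe("membrane_system.gro", "trajectory.xtc")
solute = u.select_atoms("resname SYR")      # assumed residue name for a syringolin analogue
lipids = u.select_atoms("resname POPC")     # assumed lipid selection defining the bilayer

z_offsets = []
for ts in u.trajectory:
    # signed distance of the solute's centre of mass from the bilayer midplane (z axis)
    z_offsets.append(solute.center_of_mass()[2] - lipids.center_of_mass()[2])

z_offsets = np.asarray(z_offsets)
# A sign change indicates that the molecule crossed the bilayer midplane,
# i.e. permeated the membrane within the simulated time window.
print("crossed midplane:", np.any(np.sign(z_offsets) != np.sign(z_offsets[0])))
```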
Procedia PDF Downloads 189
231 Examining the Independent Effects of Early Exposure to Game Consoles and Parent-Child Activities on Psychosocial Development
Authors: Rosa S. Wong, Keith T. S. Tung, Frederick K. Ho, Winnie W. Y. Tso, King-wa Fu, Nirmala Rao, Patrick Ip
Abstract:
As technology advances, exposures in early childhood are no longer confined to stimulation in the surrounding physical environment. Children nowadays are also subject to influences from the digital world. In particular, early access to game consoles can pose risks to child development, especially when the games are not developmentally appropriate for young children. Overstimulation is possible and could impair brain development. On the other hand, recreational parent-child activities, including outdoor activities and visits to museums, require child interaction with parents, which is beneficial for developing adaptive emotion regulation and social skills. Given the differences between these two types of exposure, this study investigated and compared the independent effects of early exposure to a game console and early play-based parent-child activities on children's long-term psychosocial outcomes. This study used data from a subset of children (n=304, 142 male and 162 female) in a longitudinal cohort study that examined the long-term impact of family socioeconomic status on child development. In 2012/13, we recruited a group of children at Kindergarten 3 (K3) randomly from local Hong Kong kindergartens and collected data on their duration of exposure to game consoles and recreational parent-child activities at that time. In 2018/19, we re-surveyed the parents of these children, who were by then Form 1 (F1) students (ages ranging from 11 to 13 years) in secondary schools, and asked the parents to rate their children's psychosocial problems in F1. Linear regressions were conducted to examine the associations between early exposures and adolescent psychosocial problems, with and without adjustment for child gender and K3 family socioeconomic status. On average, K3 children spent about 42 minutes on a game console every day and had 2-3 recreational activities with their parents every week. Univariate analyses showed that more time spent on game consoles at K3 was associated with more psychosocial difficulties in F1, particularly more externalizing problems. The effect of early exposure to game consoles on externalizing behavior remained significant (B=0.59, 95%CI: 0.15 to 1.03, p=0.009) after adjusting for recreational parent-child activities and child gender. For recreational parent-child activities at K3, the effect on overall psychosocial difficulties became non-significant after adjusting for early exposure to game consoles and child gender. However, they were found to have a significant protective effect on externalizing problems (B=-0.65, 95%CI: -1.23 to -0.07, p=0.028) even after adjusting for the confounders. Early exposure to game consoles has a negative impact on children's psychosocial health, whereas play-based parent-child activities can foster positive psychosocial outcomes. More effort should be directed to communicating the risks and benefits of these activities and to urging parents and caregivers to replace child-alone screen time with parent-child play time in the daily routine.
Keywords: early childhood, electronic device, parenting, psychosocial wellbeing
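A minimal sketch of the kind of adjusted linear regression reported above is given below; the data file and variable names are illustrative assumptions, not the study's actual coding.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per child, with K3 exposures and F1 outcomes
df = pd.read_csv("cohort_k3_f1.csv")

# Externalizing problems in F1 regressed on K3 console time,
# adjusted for K3 parent-child activities and child gender
model = smf.ols(
    "externalizing_f1 ~ console_minutes_k3 + parent_child_activities_k3 + C(gender)",
    data=df,
).fit()

print(model.params["console_minutes_k3"])            # adjusted coefficient B
print(model.conf_int().loc["console_minutes_k3"])    # its 95% confidence interval
```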
Procedia PDF Downloads 167
230 Recycling Service Strategy by Considering Demand-Supply Interaction
Authors: Hui-Chieh Li
Abstract:
A circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which brings benefits for both the environment and the economy. The concept stands in contrast to a linear economy, which follows a 'take, make, dispose' model of production. A well-designed reverse logistics service strategy can enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings manufacturers considerable advantages, as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycle demand and the time-dependent surplus value of recycled products, and constructs models of the recycle service strategy for the recyclable waste collector. A crucial factor in optimizing a recycle service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value and user behavior, and proposes a recycle service strategy that differs significantly from the conventional uniform service strategy. Periods with considerable demand and large surplus product value suggest frequent and short service cycles. The study explores how to determine a recycle service strategy for the recyclable waste collector, in terms of service cycle frequency and duration and the vehicle type for all service cycles, by considering the surplus value of recycled products, time-dependent demand, transportation economies and demand-supply interaction. The recyclable waste collector is responsible for the collection of waste products for the manufacturer. The study also examines the impacts of the utilization rate on cost and profit in the context of different vehicle sizes. The study applies the binary logit model, analytical models and mathematical programming methods to the problem; the resulting model attempts to maximize the total profit of the distributor while minimizing the total logistics cost of the recycler and maximizing the recycling benefits of the manufacturer during the study period. The study relaxes the constant demand assumption and examines how the service strategy affects consumer demand for waste recycling. Results of the study not only help in understanding how user demand for recycling services and product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance for the government on measures such as reward bonuses and carbon emission regulations.
Keywords: circular economy, consumer demand, product surplus value, recycle service strategy
Procedia PDF Downloads 392
229 Work-Family Conflict and Family and Job Resources among Women: The Role of Negotiation
Authors: Noa Nelson, Meitar Moshe, Dana Cohen
Abstract:
Work-family conflict (WFC) is a significant source of stress for contemporary employees, with research indicating its heightened severity for women. The conservation of resources theory argues that individuals experience stress when their resources fall short of demands, and attempt to reach balance by obtaining resources. Presumably, then, to achieve work-family balance, women would need to negotiate for resources such as spouse support, employer support and work flexibility. The current research tested the hypotheses that competent negotiation at home and at work is associated with increased family and job resources and with decreased WFC, as well as with higher work, marital and life satisfaction. In the first study, 113 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiation with their spouse over the division of housework, and their actual housework load compared to their spouse. They answered a WFC questionnaire measuring how much work interferes with family (WIF) and how much family interferes with work (FIW), and finally completed measures of satisfaction. In the second study, 94 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiation with their boss over balancing work demands with family needs. They reported the levels of three job resources: flexibility, control and a family-friendly organizational culture. Finally, they answered the same WFC and satisfaction measures as in study 1. Statistical analyses (t-tests, correlations, and hierarchical linear regressions) showed that in both studies, women reported higher WIF than FIW. Negotiation was associated with increased resources: support from the spouse, work flexibility and control, and a family-friendly culture; negotiation with the spouse was also associated with the satisfaction measures. However, negotiations and resources (except a family-friendly culture) were not associated with reduced conflict. The studies demonstrate the role of negotiation in obtaining family and job resources. Causation cannot be determined, but the fact is that employed mothers who enjoyed more support (both at home and at work), flexibility and control were more likely to keep up active interactions to increase them. This finding has theoretical and practical implications, especially in view of research on female avoidance of negotiation. It is intriguing that negotiations and resources generally were not associated with reduced WFC. This finding might reflect the severity of the conflict, especially of work interfering with family, which characterizes many contemporary jobs. It might also suggest that employed mothers have high expectations of themselves and, even under supportive circumstances, experience the challenge of balancing two significant and demanding roles. The research contributes to the fields of negotiation, gender, and work-life balance. It calls for further studies to test its model in additional populations and validate the role employees have in actively negotiating for the balance that they need. It also calls for further research to understand the contributions of job and family resources to reducing work-family conflict, and the circumstances under which they contribute.
Keywords: work-family conflict, work-life balance, negotiation, gender, job resources, family resources
Procedia PDF Downloads 226
228 E-Business Role in the Development of the Economy of Sultanate of Oman
Authors: Mairaj Salim, Asma Zaheer
Abstract:
Oman has accomplished as much as or more than its fellow Gulf monarchies, despite starting from scratch considerably later, having less oil income to utilize, dealing with a larger and more rugged geography, and resolving a bitter civil war along the way. Of course, Oman's progress in the past 30-plus years has not been without problems and missteps, but the balance is squarely on the positive side of the ledger. Oil has been the driving force of the Omani economy since Oman began commercial production in 1967. The oil industry supports the country's high standard of living and is primarily responsible for its modern and expansive infrastructure, including electrical utilities, telephone services, roads, public education and medical services. In addition to extensive oil reserves, Oman also has substantial natural gas reserves, which are expected to play a leading role in the Omani economy in the twenty-first century. To reduce the country's dependence on oil revenues, the government is restructuring the economy by directing investment to non-oil activities. Since the start of the 21st century, IT has changed the way tasks are performed. To manage affairs for the benefit of organizations and the economy, the Omani government has adopted e-business technologies for development. E-business is important because it allows:
• Transformation of old economy relationships (vertical/linear relationships) to new economy relationships characterized by end-to-end relationship management solutions (integrated or extended relationships)
• Facilitation and organization of networks, in which small firms depend on 'partner' firms for supplies and product distribution to meet customer demands
• SMEs to outsource back-end processes or cost centers, enabling the SME to focus on its core competence
• ICT to connect, manage and integrate processes internally and externally
• SMEs to join networks and enter new markets, through shortened supply chains, to increase market share, customers and suppliers
• SMEs to take up the benefits of e-business to reduce costs, increase customer satisfaction, improve client referral and attract quality partners
• New business models of collaboration for SMEs to increase their skill base
• SMEs to enter the virtual trading arena and increase their market reach
A national strategy for the advancement of information and communication technology (ICT) has been worked out, mainly to introduce e-government, e-commerce, and a digital society. An information technology complex, KOM (Knowledge Oasis Muscat), has been established, consisting of a section for information technology, incubator services, a shopping center for technology software and hardware, ICT colleges, e-government services and other relevant services. All these efforts play a vital role in the development of the Omani economy.
Keywords: ICT, ITA, CRM, SCM, ERP, KOM, SMEs, e-commerce and e-business
Procedia PDF Downloads 251
227 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting
Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva
Abstract:
The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, which is defined by the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. Frequencies in the 1.8 GHz band are part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a frequency band of mobile telephony, also called second-generation mobile networks (2G); it came to standardize mobile telephony in the world and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), has emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) frequency band; this band is internationally reserved for industrial, scientific and medical development with no need for licensing, and its only restrictions are related to maximum power transfer and bandwidth, which must be kept within certain limits (in Brazil the bandwidth is 2.4-2.4835 GHz). The rectenna presented in this work was designed to present efficiency above 50% for an input power of -15 dBm. It is known that for wireless energy capture systems the signal power is very low and varies greatly; for this reason, this ultra-low input power was chosen. The rectenna was built using the low-cost FR4 (flame-retardant) substrate. The selected antenna is a microstrip antenna consisting of a meandered dipole, which was optimized using the CST Studio software. This antenna has high efficiency, high gain and high directivity. Gain is the quality of an antenna in capturing, more or less efficiently, the signals transmitted by another antenna and/or station. Directivity is the quality an antenna has to better capture energy in a certain direction. The rectifier circuit used has a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, which is a non-linear component. The chosen diode is the SMS7630 Schottky diode, which presents a low barrier voltage (between 135 and 240 mV) and a wider band compared to other types of diodes; these attributes make it well suited for this type of application. The rectifier circuit also uses an inductor and a capacitor, which form part of its input and output filters. The inductor has the function of decreasing the dispersion effect on the efficiency of the rectifier circuit. The capacitor has the function of eliminating the AC component of the rectifier circuit and smoothing the output signal.
Keywords: dipole antenna, double-band, high efficiency, rectenna
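To put the quoted operating point in perspective, the short check below converts the -15 dBm input into watts and applies the 50% efficiency target; the efficiency definition used (eta = P_DC / P_RF) is the usual one for rectennas and is assumed rather than quoted from the text.

```python
def dbm_to_watts(p_dbm: float) -> float:
    """Convert a power level in dBm to watts."""
    return 10 ** (p_dbm / 10) / 1000.0

p_rf = dbm_to_watts(-15)   # ~31.6 microwatts of available RF power
p_dc = 0.5 * p_rf          # ~15.8 microwatts of DC output at 50% conversion efficiency
print(f"P_RF = {p_rf * 1e6:.1f} uW, P_DC = {p_dc * 1e6:.1f} uW")
```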
Procedia PDF Downloads 124
226 Predictors of Sexually Transmitted Infection of Korean Adolescent Females: Analysis of Pooled Data from Korean Nationwide Survey
Authors: Jaeyoung Lee, Minji Je
Abstract:
Objectives: Adolescents are curious about sex, but sexual experience before adulthood carries a high risk of sexually transmitted infection. Therefore, it is very important to prevent sexually transmitted infections so that adolescents can grow up in a healthy way. Adolescent females, especially, have sexual behavior distinct from that of male adolescents. Protecting female adolescents' reproductive health is even more important since it is directly related to the childbirth of the next generation. This study, thus, investigated the predictors of sexually transmitted infection in adolescent females with sexual experience, based on the National Health Statistics in Korea. Methods: This study was conducted based on the National Health Statistics in Korea. The 11th Korea Youth Behavior Web-based Survey in 2016 was conducted as an anonymous self-reported survey to investigate the health behavior of adolescents. The target recruitment group was middle and high school students nationwide as of April 2016, and 65,528 students from a total of 800 middle and high schools participated. The present analysis was conducted on 537 female high school students (Grades 10–12) among them. The collected data were analyzed under a complex sampling design using SPSS Statistics 22. The strata, cluster, weight, and finite population correction provided by the Korea Centers for Disease Control & Prevention (KCDC) were used to construct complex sample design files, which were used in the statistical analysis. The analysis methods included the Rao-Scott chi-square test, the complex samples general linear model, and complex samples multiple logistic regression analysis. Results: Out of 537 female adolescents, 11.9% (53 adolescents) had experienced a venereal infection. The predictors of venereal infection were 'age at first intercourse' and 'sexual intercourse after drinking'. The odds of sexually transmitted infection were 0.31 times as high (p=.006, 95%CI=0.13-0.71) for subjects whose first intercourse occurred in middle school and 0.13 times as high (p<.001, 95%CI=0.05-0.32) for those whose first intercourse occurred in high school, compared with those whose first sexual experience occurred at elementary school age or earlier. In addition, the odds of sexually transmitted infection were 3.54 times higher (p<.001, 95%CI=1.76-7.14) for subjects with experience of sexual relations after drinking alcohol, compared to those without such experience. Conclusions: Female adolescents had a higher probability of sexually transmitted infection if their first sexual experience occurred at a younger age. Therefore, female adolescents who begin sexual experience earlier should receive practical sex education appropriate for their developmental stage. In addition, since the risk of sexually transmitted infection increases when sexual relations occur after drinking alcohol, interventions for the prevention of alcohol use and sex education are required. Future health education interventions for the health promotion of female adolescents should reflect the results of this study.
Keywords: adolescent, coitus, female, sexually transmitted diseases
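A rough sketch of a design-weighted logistic regression in the spirit of the complex-samples analysis is shown below; the data file, column names and the use of statsmodels are assumptions. Note that a plain weighted fit only approximates the point estimates; the full strata/cluster/finite-population corrections used in the study require dedicated survey-analysis tooling.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per respondent, with a survey weight column
df = pd.read_csv("kybs_2016_females.csv")

model = smf.glm(
    "sti ~ C(first_intercourse_stage) + drunk_sex",   # assumed variable coding
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["survey_weight"]),
).fit()

print(np.exp(model.params))       # odds ratios
print(np.exp(model.conf_int()))   # 95% confidence intervals
```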
Procedia PDF Downloads 192
225 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) are the most efficient, preserve the type-I error rate, are robust to model misspecification, or are the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
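For concreteness, the sketch below illustrates two of the candidate estimators on simulated data: the modified-Poisson GLM with robust (sandwich) standard errors and a simple marginal standardisation based on a logistic model. The data-generating step and variable names are illustrative assumptions, not the study's simulation design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({"treat": rng.binomial(1, 0.5, n), "x": rng.normal(size=n)})
# Simple simulated binary outcome with a treatment effect on the log-risk scale
risk = np.clip(0.2 * np.exp(0.4 * df["treat"] + 0.3 * df["x"]), 0, 1)
df["y"] = rng.binomial(1, risk)

# Modified Poisson: Poisson family on a binary outcome, robust (HC) standard errors
mp = smf.glm("y ~ treat + x", data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print("log RR (modified Poisson):", mp.params["treat"])

# Marginal standardisation: fit a logistic model, predict risk for everyone under
# treat=1 and treat=0, then take the ratio of the average predicted risks
logit = smf.glm("y ~ treat + x", data=df, family=sm.families.Binomial()).fit()
rr = logit.predict(df.assign(treat=1)).mean() / logit.predict(df.assign(treat=0)).mean()
print("marginal RR:", rr)
```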
Procedia PDF Downloads 115
224 Non-Mammalian Pattern Recognition Receptor from Rock Bream (Oplegnathus fasciatus): Genomic Characterization and Transcriptional Profile upon Bacterial and Viral Inductions
Authors: Thanthrige Thiunuwan Priyathilaka, Don Anushka Sandaruwan Elvitigala, Bong-Soo Lim, Hyung-Bok Jeong, Jehee Lee
Abstract:
Toll-like receptors (TLRs) are a phylogenetically conserved family of pattern recognition receptors that participate in host immune responses against various pathogens and pathogen-derived mitogens. TLR21, a non-mammalian type, is almost restricted to fish species, although it can occasionally be identified in birds and amphibians. Herein, this study was carried out to identify and characterize TLR21 from rock bream (Oplegnathus fasciatus), designated RbTLR21, at the transcriptional and genomic levels. In this study, the full-length cDNA and genomic sequences of RbTLR21 were identified using a previously constructed cDNA sequence database and BAC library, respectively. The identified RbTLR21 sequence was characterized using several bioinformatics tools. A quantitative real-time PCR (qPCR) experiment was conducted to determine the tissue-specific expression distribution of RbTLR21. Further, transcriptional modulation of RbTLR21 upon stimulation with Streptococcus iniae (S. iniae), rock bream iridovirus (RBIV) and Edwardsiella tarda (E. tarda) was analyzed in spleen tissues. The complete coding sequence of RbTLR21 was 2919 bp in length, encoding a protein of 973 amino acid residues with a molecular mass of 112 kDa and a theoretical isoelectric point of 8.6. The predicted protein sequence resembled a typical TLR domain architecture, including a C-terminal ectodomain with 16 leucine-rich repeats, a transmembrane domain, a cytoplasmic TIR domain and a signal peptide of 23 amino acid residues. Moreover, protein folding pattern prediction of RbTLR21 exhibited a well-structured and folded ectodomain, transmembrane domain and cytoplasmic TIR domain. According to the pairwise sequence analysis data, RbTLR21 showed the closest homology with orange-spotted grouper (Epinephelus coioides) TLR21, with 76.9% amino acid identity. Furthermore, our phylogenetic analysis revealed that RbTLR21 shows a close evolutionary relationship with its ortholog from Danio rerio. The genomic structure of RbTLR21 consisted of a single exon, similar to its zebrafish ortholog. Several putative transcription factor binding sites were also identified in the 5ʹ flanking region of RbTLR21. RbTLR21 was ubiquitously expressed in all the tissues we tested, with relatively high expression levels in spleen, liver and blood tissues. Upon induction with rock bream iridovirus, RbTLR21 expression was upregulated in the early phase of the post-induction period, even though the expression level fluctuated in the later phase. After Edwardsiella tarda injection, RbTLR21 transcripts were upregulated throughout the experiment. Similarly, Streptococcus iniae induction resulted in significant upregulation of RbTLR21 mRNA expression in the spleen tissues. Collectively, our findings suggest that RbTLR21 is indeed a homolog of TLR21 family members and that it may be involved in host immune responses against bacterial and DNA viral infections.
Keywords: rock bream, toll like receptor 21 (TLR21), pattern recognition receptor, genomic characterization
Procedia PDF Downloads 542
223 Impact of Experiential Learning on Executive Function, Language Development, and Quality of Life for Adults with Intellectual and Developmental Disabilities (IDD)
Authors: Mary Deyo, Zmara Harrison
Abstract:
This study reports the outcomes of an 8-week experiential learning program for 6 adults with Intellectual and Developmental Disabilities (IDD) at a day habilitation program. The intervention foci for this program include executive function, language learning in the domains of expressive, receptive, and pragmatic language, and quality of life. Interprofessional collaboration aimed at supporting adults with IDD to reach person-centered, functional goals across skill domains is critical. This study is a significant addition to the speech-language pathology literature in that it examines a therapy method that potentially meets this need while targeting domains within the speech-language pathology scope of practice. Communication therapy was provided during highly valued and meaningful hands-on learning experiences, referred to as the Garden Club, which incorporated all aspects of planting and caring for a garden as well as related journaling, sensory, cooking, art, and technology-based activities. Direct care staff and an undergraduate research assistant were trained by the SLP to be impactful language guides during their interactions with participants in the Garden Club. The SLP also provided direct therapy and modeling during the Garden Club. Research methods used in this study included a mixed-methods analysis comprising a literature review, a quasi-experimental implementation of communication therapy in the context of experiential learning activities, quality-of-life participant surveys, quantitative pre- and post-intervention data collection with linear mixed model analysis, and qualitative data collection with qualitative content analysis and coding for themes. Outcomes indicated overall positive changes in expressive vocabulary, following multi-step directions, sequencing, problem-solving, planning, skills for building and maintaining meaningful social relationships, and participant perception of the Garden Project's impact on their own quality of life. Implementation of this project also highlighted supports and barriers that must be taken into consideration when planning similar projects. Overall, the findings support the use of experiential learning projects in day habilitation programs for adults with IDD, as well as additional research to deepen understanding of best practices, supports, and barriers for implementation of experiential learning with this population. This research provides an important contribution to the fields of speech-language pathology and other professions serving adults with IDD by describing an interprofessional experiential learning program with positive outcomes for executive function, language learning, and quality of life.
Keywords: experiential learning, adults, intellectual and developmental disabilities, expressive language, receptive language, pragmatic language, executive function, communication therapy, day habilitation, interprofessionalism, quality of life
Procedia PDF Downloads 127
222 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have great impacts on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution which can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to emulate the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rise/fall time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out. The stress was briefly interrupted to measure the linear and saturation IDS-VGS characteristics. When VGS switches from -5 V to 0 V and VDS = 0 V, the devices are under the negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with the following stress time. However, a different phenomenon is observed when VDS is reduced to -5 V. VTH shifts negatively during the stress, and the variation in VTH increases with time, which is different from the case when VDS is 0 V. Two mechanisms exist under this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so that the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation with time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region makes a major contribution to the high density of interface traps, which greatly influences the reliability of the devices. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
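A minimal sketch of the stated stress waveform (a -5 V to 0 V square wave at 100 kHz with 100 ns rise/fall times and a 50% duty ratio) is given below, e.g. for driving a measurement script or a circuit-simulator source; the piecewise-linear edge model is an assumption.

```python
import numpy as np

def vgs_waveform(t, period=10e-6, v_low=-5.0, v_high=0.0, t_edge=100e-9, duty=0.5):
    """Piecewise-linear square wave with finite rise/fall times (values in volts)."""
    tau = np.mod(t, period)
    t_high_end = duty * period
    v = np.full_like(t, v_low, dtype=float)
    rising = tau < t_edge
    high = (tau >= t_edge) & (tau < t_high_end)
    falling = (tau >= t_high_end) & (tau < t_high_end + t_edge)
    v[rising] = v_low + (v_high - v_low) * tau[rising] / t_edge
    v[high] = v_high
    v[falling] = v_high - (v_high - v_low) * (tau[falling] - t_high_end) / t_edge
    return v

t = np.linspace(0.0, 2e-5, 2001)   # two periods at 100 kHz
vgs = vgs_waveform(t)              # the stress runs for 1000 s, i.e. ~1e8 such periods
```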
Procedia PDF Downloads 93
221 Gender Gap in Returns to Social Entrepreneurship
Authors: Saul Estrin, Ute Stephan, Suncica Vujic
Abstract:
Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, there is little or no gender pay gap. If the gender gap in pay persists also at the top of social enterprises, which factors might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the 'explained' part, due to differences in labour market characteristics (education, work experience, tenure, etc.), and the 'unexplained' part, due to differences in the returns to those characteristics. The latter part is often interpreted as 'discrimination'. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains 'unexplained'. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient's 'robustness' or to account for the 'effects' on this coefficient of adding covariates, might be problematic due to sequence sensitivity when the added covariates are correlated. Gelbach's decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition, thus accounting for sequence sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors' capital) between men and women leading social enterprises. Database: Our empirical work is made possible by our collection of a unique dataset using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries that we focus on are the United Kingdom, Spain, Romania and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
Keywords: Gelbach's decomposition, gender gap, returns to social entrepreneurship, values and preferences
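For readers unfamiliar with the baseline method, a minimal two-fold Oaxaca-Blinder decomposition is sketched below; the column names are assumptions, and Gelbach's conditional decomposition, which the study actually applies, requires additional machinery not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm

def twofold_oaxaca(df: pd.DataFrame, outcome: str, covariates: list, female: str):
    """Decompose mean(outcome | men) - mean(outcome | women) into an 'explained' part
    (characteristics) and an 'unexplained' part (returns), with male coefficients as
    the reference structure."""
    men, women = df[df[female] == 0], df[df[female] == 1]
    Xm = sm.add_constant(men[covariates])
    Xw = sm.add_constant(women[covariates])
    bm = sm.OLS(men[outcome], Xm).fit().params
    bw = sm.OLS(women[outcome], Xw).fit().params
    explained = (Xm.mean() - Xw.mean()) @ bm       # differences in characteristics
    unexplained = Xw.mean() @ (bm - bw)            # differences in returns
    return explained, unexplained

# Hypothetical usage: columns log_pay, experience, education, hours, female (0/1)
# explained, unexplained = twofold_oaxaca(df, "log_pay",
#                                         ["experience", "education", "hours"], "female")
```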
Procedia PDF Downloads 244
220 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis
Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia
Abstract:
Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All the mentioned approaches were based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve either a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists of extending the achievement of typical risk-based approaches' goals to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as the dispersion of returns around the mean; both preserve adherence to a symmetric framework and consideration of the entire returns distribution. They differ from each other in that the former captures the 'normal'/'ordinary' dispersion of returns, while the latter is able to capture the extreme dispersion. Therefore, a combined risk metric that uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity allows the asset manager, by varying the 'relevance coefficient' associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process while serving investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof for the 'equalization effect' concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions when referring to the MK strategy and the KRP strategy, the development of a theoretical demonstration is still pending. This paper fills this gap.
Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation
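Below is a minimal numerical sketch of the combined risk measure described above, a convex combination of portfolio volatility and the fourth root of the portfolio fourth moment, minimised over long-only, fully invested weights. The sample co-kurtosis estimator and the optimiser settings are assumptions about one possible implementation, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def sample_cokurtosis(returns: np.ndarray) -> np.ndarray:
    """Plain sample co-kurtosis tensor of shape (n, n, n, n) from a (T, n) return matrix."""
    x = returns - returns.mean(axis=0)
    return np.einsum("ti,tj,tk,tl->ijkl", x, x, x, x) / len(x)

def combined_risk(w, cov, cokurt, lam):
    vol = np.sqrt(w @ cov @ w)                               # portfolio volatility
    m4 = np.einsum("i,j,k,l,ijkl->", w, w, w, w, cokurt)     # portfolio fourth moment
    return lam * vol + (1.0 - lam) * m4 ** 0.25              # convex combination of the two

def minimum_combined_risk_weights(returns: np.ndarray, lam: float = 0.5) -> np.ndarray:
    n = returns.shape[1]
    cov = np.cov(returns, rowvar=False)
    cokurt = sample_cokurtosis(returns)
    res = minimize(combined_risk, np.full(n, 1.0 / n), args=(cov, cokurt, lam),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Illustrative usage on synthetic returns for 5 asset classes
rets = np.random.default_rng(0).normal(0.0, 0.01, size=(500, 5))
print(minimum_combined_risk_weights(rets, lam=0.7))
```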
Procedia PDF Downloads 65
219 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the delivery link of the whole industrial chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, which leads to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users' experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm by using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
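To make the structure of such a formulation concrete, the sketch below sets up a deliberately tiny facility-location core (open facilities, assign customers, minimise opening plus transport costs) in PuLP. Routing decisions, the two demand classes and vehicle selection from the full LRP-PFHD model are omitted, and all data are illustrative assumptions.

```python
import pulp

facilities = ["locker_A", "locker_B", "pickup_C"]
customers = ["c1", "c2", "c3", "c4"]
open_cost = {"locker_A": 120, "locker_B": 90, "pickup_C": 60}
trans_cost = {
    ("locker_A", "c1"): 4, ("locker_A", "c2"): 6, ("locker_A", "c3"): 9, ("locker_A", "c4"): 5,
    ("locker_B", "c1"): 7, ("locker_B", "c2"): 3, ("locker_B", "c3"): 4, ("locker_B", "c4"): 8,
    ("pickup_C", "c1"): 9, ("pickup_C", "c2"): 7, ("pickup_C", "c3"): 2, ("pickup_C", "c4"): 3,
}

prob = pulp.LpProblem("mini_location_assignment", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", facilities, cat="Binary")
x = pulp.LpVariable.dicts("assign", [(i, j) for i in facilities for j in customers], cat="Binary")

# Objective: facility opening cost plus assignment (transport) cost
prob += (pulp.lpSum(open_cost[i] * y[i] for i in facilities)
         + pulp.lpSum(trans_cost[i, j] * x[i, j] for i in facilities for j in customers))
for j in customers:                       # each customer is served exactly once
    prob += pulp.lpSum(x[i, j] for i in facilities) == 1
for i in facilities:
    for j in customers:                   # customers can only be assigned to open facilities
        prob += x[i, j] <= y[i]

prob.solve()
print("open facilities:", [i for i in facilities if y[i].value() == 1])
```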
Procedia PDF Downloads 78
218 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, the Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was produced by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure allows the geomorphological quality of the rivers to be monitored and any alterations to be detected. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
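A minimal sketch of a discriminant classification built on the two variables named above (specific stream power and mean grain size) is given below; the file, column names, units and the use of scikit-learn are assumptions about one possible implementation.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical table: one row per control point with the two predictors and the type label
sites = pd.read_csv("control_points.csv")
X = sites[["specific_stream_power", "mean_grain_size"]]
y = sites["geomorph_type"]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_, lda.intercept_)   # one linear discriminant function per geomorphological type

# Re-classify a monitored site to check whether it still falls within its original type
new_site = pd.DataFrame({"specific_stream_power": [55.0], "mean_grain_size": [0.02]})
print(lda.predict(new_site))       # illustrative values (W/m² and m)
```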
Procedia PDF Downloads 367
217 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach
Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere
Abstract:
Considering environmental issues, interest in applying sustainable materials in industry is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced polylactic acid (PLA) is gaining popularity. However, these thermoplastic-based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase. The research applies injection-molded plate structures. A first important parameter concerns the fiber volume fraction, which directly affects the adhesion characteristics of the surface. This parameter is varied between 0 (pure PLA) and 30%. Next to the fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection-molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, and thus adhesion. Therefore, this research work considers three different roughness conditions. Different mechanical treatments yield roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy which is cured at 23 °C for 48 hours. In order to assure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single-lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double-lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog-bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions). Numerical modelling (non-linear FEA) relates the effects of the considered parameters to the stiffness and strength of the different joints, obtained through the abovementioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different considered adhesion parameters. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already apparent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), it is noticed that the increase in surface energy is, to some extent, proportional to the surface roughness. This becomes more pronounced when the fiber volume fraction increases. Secondly, the ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.
Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint
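As a small worked example of how such single-lap results are typically reduced, the apparent shear strength follows from the failure load and the stated bond area (25 mm width, 10 mm overlap); the failure load used below is an illustrative assumption, not a measured value.

```python
# Apparent (average) shear strength of a single-lap joint: tau = F / (w * L)
width_mm = 25.0          # substrate width, from the test geometry above
overlap_mm = 10.0        # overlap length, from the test geometry above
failure_load_n = 1500.0  # hypothetical failure load in newtons

bond_area_mm2 = width_mm * overlap_mm                  # 250 mm^2
tau_mpa = failure_load_n / bond_area_mm2               # N/mm^2 == MPa
print(f"apparent shear strength = {tau_mpa:.1f} MPa")  # 6.0 MPa for this assumed load
```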
Procedia PDF Downloads 128
216 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries
Authors: Behzad Babaei, B. Gangadhara Prusty
Abstract:
The objective of this study is to assess the hypotheses that a tooth restored with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as by selecting a composite restoration with optimized elastic moduli, when there is a sharp de-bonded edge at the interface of the tooth and the restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied to two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element numbers did not influence the simulation results. According to a criterion of 5% error in the stress, we found that a total of over 14,000 elements resulted in convergence of the stress. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the direction of the enamel rods and 63 GPa in the perpendicular direction). Linear, homogeneous, and elastic material models were considered for the dentine, enamel, and composite restorations. In total, 108 FEA simulations were successively conducted. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and the restoration. The structures most resistant to the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. The composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, a sturdier enamel-restoration interface against the mechanical loads was observed.
Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress
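The abstract mentions a Python script that assigns the composite moduli automatically across the cavity geometries. A minimal stand-in for such a parametric sweep is sketched below; the set of internal angles, the run_simulation placeholder and the resulting job count are illustrative assumptions and need not reproduce the 108 runs reported above.

```python
composite_moduli_gpa = list(range(2, 23, 4))     # 2, 6, 10, 14, 18, 22 GPa
cavity_depths_mm = (1.5, 2.0, 2.5)               # occlusal depths from the study
internal_angles_deg = (90, 95, 100, 105, 110)    # assumed angle set giving 15 geometries

def run_simulation(depth_mm: float, angle_deg: float, e_composite_gpa: float) -> None:
    """Placeholder: write the FE input deck and submit the solver run here."""
    print(f"depth={depth_mm} mm, angle={angle_deg} deg, E_composite={e_composite_gpa} GPa")

jobs = [(d, a, e) for d in cavity_depths_mm
                  for a in internal_angles_deg
                  for e in composite_moduli_gpa]
print(len(jobs), "runs in this illustrative sweep")   # 3 x 5 x 6 = 90 combinations here
for depth, angle, e_mod in jobs:
    run_simulation(depth, angle, e_mod)
```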
Procedia PDF Downloads 102215 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials
Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar
Abstract:
Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention of the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure or both are locally varied so that a certain variation of the local material properties is achieved. This gradual change in the composition and microstructure of the material is suitable for obtaining a gradient of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more than one direction. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved by additionally including holes, inclusions and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. Exponential gradation in property is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that minor cracks have the least effect on the domain’s failure crack length, soft inclusions have a moderate effect, and holes have the greatest effect. It is observed that, in each case, crack growth before failure is greater when hard inclusions are present in place of soft inclusions.Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)
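A minimal sketch of the exponential property gradation typically used for such FGM plates, here over the 100 mm gradation direction, together with a Ramberg-Osgood strain evaluation; the Cu-Ni and alumina moduli and the Ramberg-Osgood parameters are assumed illustrative values, not the ones used in the study.

```python
# Minimal sketch of exponential property gradation in an FGM plate:
# P(x) = P0 * exp(beta * x) with beta = ln(P_W / P0) / W, plus a
# Ramberg-Osgood strain evaluation.  Moduli and Ramberg-Osgood
# parameters below are assumed illustrative values.
import math

W_MM = 100.0            # plate length in the gradation (x) direction
E_CUNI_GPA = 150.0      # assumed modulus of the 100% Cu-Ni alloy side
E_ALUMINA_GPA = 380.0   # assumed modulus of the 100% alumina side

def graded_modulus(x_mm: float) -> float:
    """Exponentially graded Young's modulus E(x) across the plate."""
    beta = math.log(E_ALUMINA_GPA / E_CUNI_GPA) / W_MM
    return E_CUNI_GPA * math.exp(beta * x_mm)

def ramberg_osgood_strain(stress_mpa: float, e_gpa: float,
                          alpha: float = 0.5, n: float = 5.0,
                          sigma0_mpa: float = 300.0) -> float:
    """Total strain from a Ramberg-Osgood law (alpha, n, sigma0 are assumed)."""
    e_mpa = e_gpa * 1000.0
    return (stress_mpa / e_mpa
            + alpha * (sigma0_mpa / e_mpa) * (stress_mpa / sigma0_mpa) ** n)

if __name__ == "__main__":
    for x in (0.0, 25.0, 50.0, 75.0, 100.0):
        print(f"x = {x:5.1f} mm  E = {graded_modulus(x):6.1f} GPa")
    print("strain at 100 MPa, x = 50 mm:",
          ramberg_osgood_strain(100.0, graded_modulus(50.0)))
```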
Procedia PDF Downloads 389214 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements
Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang
Abstract:
Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements. In this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams. Also, the notion of local plasticity is no longer valid. Moreover, the deformed shape of the lattice structure does not correspond to that obtained using 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerical hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. This approach consists of the utilization of solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with the ones from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures. However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size in the hybrid models on the results is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This is also valid for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can take into account geometric defects. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (ANSYS) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than the computational cost of the 3D model.Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure
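A minimal sketch of one way to reduce a microbeam cross-section point cloud to the three ellipse parameters mentioned above (semi-major axis, semi-minor axis, rotation angle); this moment/PCA-based estimate is an assumption for illustration and is not the LATANA platform's actual fitting routine.

```python
# Minimal sketch: estimate (semi-major, semi-minor, rotation angle) of an
# ellipse-like cross-section point cloud from its second moments (PCA).
# This is an illustrative stand-in, not the LATANA fitting routine.
import numpy as np

def fit_ellipse_pca(points_xy: np.ndarray):
    """Return (semi_major, semi_minor, angle_rad) for a 2D point cloud."""
    centred = points_xy - points_xy.mean(axis=0)
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    # sqrt(2)*sigma recovers the radii for points sampled on the outline
    semi_minor, semi_major = np.sqrt(2.0 * eigvals)
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
    return semi_major, semi_minor, angle

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 500)
    a, b, theta = 0.60, 0.45, np.deg2rad(20)          # synthetic "rough" strut
    x = a * np.cos(t) + rng.normal(0, 0.02, t.size)
    y = b * np.sin(t) + rng.normal(0, 0.02, t.size)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    cloud = (rot @ np.vstack([x, y])).T
    print(fit_ellipse_pca(cloud))                     # ~ (0.60, 0.45, 0.35 rad)
```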
Procedia PDF Downloads 112213 Urban Planning Patterns after (COVID-19): An Assessment toward Resiliency
Authors: Mohammed AL-Hasani
Abstract:
The COVID-19 pandemic altered daily habits and affected the functional performance of cities, leaving remarkable impacts on many metropolises worldwide. It has become obvious that greater densification in the city leads to greater threats, challenging the very approach that has been advocated for achieving sustainable development. The main goal in achieving resiliency in cities, especially in the face of risks, is to have a planning system that is able to resist, absorb, accommodate and recover from the impacts it is subjected to. Cities such as London, Wuhan, New York, and others worldwide carried out different planning approaches and varied in their reactions to safeguard against the impacts of the pandemic. Globally, cities range from the radiant pattern envisioned by Le Corbusier, to multiple urban centers more like the approach of Frank Lloyd Wright’s Broadacre City, to the linear growth or gridiron expansion made common by Doxiadis, to compact patterns and many other hygienic patterns. These urban patterns shape the spatial distribution and identify both open and natural spaces as well as gentrified and gentrifying areas. This crisis has drawn attention to the need to reassess many planning approaches and examine existing urban patterns, focusing more on continuity and resiliency in managing crises amid rapid transformation and the power of market forces. Accordingly, this paper hypothesizes that urban planning patterns determine the method of reaction in assuring quarantine for the inhabitants and the performance of public services, and that they need to be updated by carrying out an innovative urban management system and adopting further resilience patterns in prospective urban planning approaches. This paper investigates the adaptivity and resiliency of various urban planning patterns in selected cities worldwide that were affected by COVID-19, and their role in applying certain management strategies to control the pandemic’s spread, identifying the main potentials that should be included in prospective planning approaches. The examination encompasses the spatial arrangement, block definition, plot arrangement, and urban space typologies. The paper also aims to deliberate on the debate between densification, as one of the more sustainable planning approaches, and the disaggregation tendency that followed the pandemic, by restructuring and managing its application according to the assessment of spatial distribution and urban patterns. The biggest long-term threat to dense cities underlines the need to shift to online working and telecommuting, creating a mixture of cyber and urban spaces to remobilize the city. Reassessing spatial design and growth, open spaces, urban population density, and public awareness are the main measures that should be carried out to face the outbreak in our current cities; these should be managed from global to tertiary levels and could yield criteria for designing prospective cities.Keywords: COVID-19, densification, resiliency, urban patterns
Procedia PDF Downloads 130212 Recycling the Lanthanides from Permanent Magnets by Electrochemistry in Ionic Liquid
Authors: Celine Bonnaud, Isabelle Billard, Nicolas Papaiconomou, Eric Chainet
Abstract:
Thanks to their high magnetization and low mass, permanent magnets (NdFeB and SmCo) have quickly become essential for new energies (wind turbines, electric vehicles…). They contain large quantities of neodymium, samarium and dysprosium, which have recently been classified as critical elements and therefore need to be recycled. Electrochemical processes, including electrodissolution followed by electrodeposition, are an elegant and environmentally friendly solution for the recycling of such lanthanides contained in permanent magnets. However, the electrochemistry of the lanthanides is a real challenge, as their standard potentials are highly negative (around -2.5 V vs NHE). Consequently, non-aqueous solvents are required. Ionic liquids (IL) are novel electrolytes exhibiting physico-chemical properties that fulfill many requirements of the sustainable chemistry principles, such as extremely low volatility and non-flammability. Furthermore, their chemical and electrochemical properties (solvation of metallic ions, large electrochemical windows, etc.) render them very attractive media for implementing alternative and sustainable processes in view of integrated processes. All experiments that will be presented were carried out using butyl-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide. Linear sweep, cyclic voltammetry and potentiostatic electrochemical techniques were used. The reliability of electrochemical experiments, performed without a glove box, for the classic three-electrode cell used in this study has been assessed. Deposits were obtained by chronoamperometry and were characterized by scanning electron microscopy and energy-dispersive X-ray spectroscopy. The IL cathodic behavior under different constraints (argon, nitrogen, or oxygen atmosphere, or water content) and using several electrode materials (Pt, Au, GC) shows that with argon gas flow and gold as the working electrode, the cathodic potential can reach a maximum value of -3 V vs Fc+/Fc, thus allowing a possible reduction of the lanthanides. On a gold working electrode, the reduction potential of samarium and neodymium was found to be -1.8 V vs Fc+/Fc, while that of dysprosium was -2.1 V vs Fc+/Fc. The individual deposits obtained were found to be porous and contained significant amounts of C, N, F, S and O atoms. Selective deposition of neodymium in the presence of dysprosium was also studied and will be discussed. Next, metallic Sm, Nd and Dy electrodes were used in place of Au, which induced changes in the reduction potential values and the deposit structures of the lanthanides. The individual corrosion potentials were also measured in order to determine the parameters influencing the electrodissolution of these metals. Finally, a full recycling process was investigated. The electrodissolution of a real permanent magnet sample was monitored kinetically. Then, the sequential electrodeposition of all lanthanides contained in the IL was investigated. Yields, quality of the deposits and consumption of chemicals will be discussed in depth, in view of the industrial feasibility of this process for real permanent magnet recycling.Keywords: electrodeposition, electrodissolution, ionic liquids, lanthanides, recycling
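A minimal sketch of the potential-window reasoning behind the selective deposition described above, using the quoted potentials (all vs Fc+/Fc); the applied potentials in the example are hypothetical.

```python
# Minimal sketch: a metal is expected to electrodeposit only if the applied
# cathodic potential is at or below its reduction potential, while staying
# above the cathodic limit of the ionic liquid (values quoted in the abstract,
# all vs Fc+/Fc).  The applied potentials in the example are hypothetical.
REDUCTION_POTENTIALS_V = {"Sm": -1.8, "Nd": -1.8, "Dy": -2.1}
IL_CATHODIC_LIMIT_V = -3.0   # on a gold electrode under argon

def deposited_at(applied_v: float):
    """Return the metals expected to deposit at a given applied potential."""
    if applied_v < IL_CATHODIC_LIMIT_V:
        raise ValueError("applied potential exceeds the solvent window")
    return [m for m, e_red in REDUCTION_POTENTIALS_V.items() if applied_v <= e_red]

if __name__ == "__main__":
    for e in (-1.9, -2.2, -2.8):   # hypothetical deposition potentials
        print(f"E = {e:5.2f} V vs Fc+/Fc -> {deposited_at(e)}")
```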
Procedia PDF Downloads 274211 Perceived Procedural Justice and Organizational Citizenship Behavior: Evidence from a Security Organization
Authors: Noa Nelson, Orit Appel, Rachel Ben-ari
Abstract:
Organizational Citizenship Behavior (OCB) is voluntary employee behavior that contributes to the organization beyond formal job requirements. It can take different forms, such as helping teammates (OCB toward individuals; hence, OCB-I), or staying after hours to attend a task force (OCB toward the organization; hence, OCB-O). Generally, OCB contributes substantially to organizational climate, goals, productivity, and resilience, so organizations need to understand what encourages it. This is particularly challenging in security organizations. Security work is characterized by high levels of stress and burnout, which is detrimental to OCB, and security organizational design emphasizes formal rules and clear hierarchies, leaving employees with less freedom for voluntary behavior. The current research explored the role of Perceived Procedural Justice (PPJ) in enhancing OCB in a security organization. PPJ refers to how fair decision-making processes are perceived to be. It involves the sense that decision makers are objective, attentive to everyone's interests, respectful in their communications and participatory - allowing individuals a voice in decision processes. Justice perceptions affect motivation, and it was specifically suggested that PPJ creates an attachment to one's organization and personal interest in its success. Accordingly, PPJ has been associated with OCB, but hardly any research has tested this association in security organizations. The current research was conducted among prison guards in the Israel Prison Service, to test a correlational and a causal association between PPJ and OCB. It differentiated between perceptions of direct commander procedural justice (CPJ), and perceptions of organization procedural justice (OPJ), hypothesizing that CPJ would relate to OCB-I, while OPJ would relate to OCB-O. In the first study, 336 prison guards (305 male) from 10 different prisons responded to questionnaires measuring their own CPJ, OPJ, OCB-I, and OCB-O. Hierarchical linear regression analyses indicated the significance of commander procedural justice (CPJ): it was associated with OCB-I and also with OPJ, which, in turn, was associated with OCB-O. The second study tested CPJ's causal effects on prison guards' OCB-I and OCB-O; 311 prison guards (275 male) from 14 different prisons read scenarios that described either high or low CPJ, and then evaluated the likelihood of that commander's prison guards performing OCB-I and OCB-O. In this study, CPJ enhanced OCB-O directly. It also contributed to OCB-I, indirectly: CPJ enhanced the motivation for collaboration with the commander, which respondents also evaluated after reading the scenarios. Collaboration, in turn, was associated with OCB-I. The studies demonstrate that procedural justice, especially the commander's PJ, promotes OCB in security work environments. This is important because extraordinary teamwork and motivation are needed to deal with emergency situations and with delicate security challenges. Following the studies, the Israel Prison Service implemented personal procedural justice training for commanders and unit-level programs for procedurally just decision processes. From a theoretical perspective, the studies extend the knowledge on PPJ and OCB to security work environments and contribute evidence on PPJ's causal effects.
They also call for further research, to understand the mechanisms through which different types of PPJ affect different types of OCB.Keywords: organizational citizenship behavior, perceived procedural justice, prison guards, security organizations
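A minimal sketch, on simulated data, of the kind of regression logic reported above (CPJ related to OPJ, which in turn relates to OCB-O as an indirect path); the variable names and coefficients are illustrative assumptions, not the study's data.

```python
# Minimal sketch of a two-step regression check of an indirect path
# (CPJ -> OPJ -> OCB-O) on simulated data; names and effect sizes are
# illustrative assumptions, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 336
cpj = rng.normal(0, 1, n)                       # commander procedural justice
opj = 0.5 * cpj + rng.normal(0, 1, n)           # organization procedural justice
ocb_o = 0.4 * opj + rng.normal(0, 1, n)         # OCB toward the organization

df = pd.DataFrame({"cpj": cpj, "opj": opj, "ocb_o": ocb_o})

def ols(y, x_cols, data):
    X = sm.add_constant(data[x_cols])
    return sm.OLS(data[y], X).fit()

step1 = ols("opj", ["cpj"], df)                 # CPJ predicts OPJ
step2 = ols("ocb_o", ["cpj", "opj"], df)        # OPJ predicts OCB-O, controlling for CPJ
print(step1.params["cpj"], step2.params["opj"])
```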
Procedia PDF Downloads 221210 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined from the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We have measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we could determine the frequency of the correct and incorrect responses, and with this knowledge, we could later adjust and remove the items of questionable reliability. The reliability was tested using Cronbach’s α, and it can be safely said that all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two genders (p > 0.05). Likewise, we did not find that age had any effect on the results using one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were observed using the nonparametric Spearman’s rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05), signifying a monotonic relationship between the measured semantic functions. A significance level of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised therapeutic design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in the differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
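A minimal sketch of the nonparametric analysis pipeline described above (normality check, Cronbach's α, Mann-Whitney U, Spearman's rho), run on simulated subtest scores; the data and group labels are placeholders.

```python
# Minimal sketch of the analysis pipeline described above, on simulated
# subtest scores; score distributions and group labels are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.integers(0, 20, size=(480, 5)).astype(float)  # 5 subtests
gender = rng.integers(0, 2, size=480)                      # two groups

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha over the columns (items) of a score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

total = scores.sum(axis=1)
w, p_norm = stats.shapiro(total)                            # normality check
print("normality p:", p_norm)
print("alpha:", cronbach_alpha(scores))                     # internal consistency
u, p_gender = stats.mannwhitneyu(total[gender == 0], total[gender == 1])
print("gender difference p:", p_gender)
rho, p_rho = stats.spearmanr(scores[:, 0], scores[:, 1])    # subtest correlation
print("spearman rho, p:", rho, p_rho)
```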
Procedia PDF Downloads 103209 Good Governance Complementary to Corruption Abatement: A Cross-Country Analysis
Authors: Kamal Ray, Tapati Bhattacharya
Abstract:
Private use of public office for private gain could be a tentative definition of corruption, and the most distasteful aspect of corruption is not that it exists, nor that it is pervasive, but that it is socially acknowledged in the global economy, especially in the developing nations. We attempted to assess the interrelationship between the Corruption Perception Index (CPI) and the principal components of the World Bank governance indicators, namely Control of Corruption (CC), Rule of Law (RL), Regulatory Quality (RQ) and Government Effectiveness (GE). Our empirical investigation concentrates on the degree to which the governance indicators are reflected in the CPI, in order to single out the most powerful corruption-related indicator in the selected countries. We have collected time series data on the above governance indicators (CC, RL, RQ and GE) for the eleven selected countries from 1996 to 2012 from the World Bank data set. The countries are the USA, UK, France, Germany, Greece, China, India, Japan, Thailand, Brazil, and South Africa. The Corruption Perception Index (CPI) of the countries mentioned above for the period 1996 to 2012 is also collected. A simple line diagram of the CPI time series is used for a quick view of the relative positions of the different nations’ trend lines. The correlation coefficient is sufficient for a first assessment of the degree and direction of association between the variables, given the numerical data on the governance indicators of the selected countries. The Granger causality test (1969) is used for investigating causal relationships between the variables, that is, cause and effect. We do not carry out stationarity tests, as the time series are short. Linear regression is used to quantify the change in the explained variable due to a change in the explanatory variable, with respect to governance vis-à-vis corruption. A bilateral positive causal link between CPI and CC is noticed in the UK: the index value of CC increases by 1.59 units as CPI increases by one unit, and CPI rises by 0.39 units as CC rises by one unit; hence, there is a multiplier effect so far as the reduction in corruption in the UK is concerned. GE contributes strongly to the reduction of corruption in the UK. In France, RQ is observed to be the most powerful indicator in reducing corruption, whereas it is the second most powerful indicator after GE in reducing corruption in Japan. The governance indicator GE plays an important role in pushing down corruption in Japan. In China and India, GE is a proactive as well as influential indicator for curbing corruption. The inverse relationship between RL and CPI in Thailand indicates that the ongoing machinery related to RL is not complementary to the reduction of corruption. The state machinery of CC in South Africa is highly relevant to reducing the volume of corruption. In Greece, variations in CPI positively influence variations in CC, and the GE indicator is effective in controlling corruption as reflected by the CPI. In the USA, Germany and Brazil, the selected governance indicators have failed to arrest state-level corruption.Keywords: corruption perception index, governance indicators, granger causality test, regression
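A minimal sketch of the bivariate Granger-causality check described above, run on simulated annual CPI and Control of Corruption (CC) series for 1996-2012; the series are placeholders, not World Bank data.

```python
# Minimal sketch of a bivariate Granger-causality check on simulated annual
# CPI and Control-of-Corruption (CC) series for 1996-2012; the numbers are
# placeholders, not World Bank data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
years = np.arange(1996, 2013)
cc = np.cumsum(rng.normal(0.05, 0.2, years.size))        # governance indicator
cpi = 0.6 * np.roll(cc, 1) + rng.normal(0, 0.1, years.size)
cpi[0] = cpi[1]                                          # patch the rolled edge

df = pd.DataFrame({"cpi": cpi, "cc": cc}, index=years)

# Does CC Granger-cause CPI?  (the second column is tested as the cause)
res = grangercausalitytests(df[["cpi", "cc"]], maxlag=1)
print(res[1][0]["ssr_ftest"])                            # (F, p, df_denom, df_num)
```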
Procedia PDF Downloads 304208 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical
Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani
Abstract:
Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary. With the elimination of these sites the monitoring network can be optimized, and it can operate more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and on the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the level of significance (α=0.05) the given sampling sites don’t form a homogeneous group. Due to the fact that the sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself, however, can be reduced to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the location of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality
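A minimal sketch of the CCDA core idea described above: the ratio of correctly classified cases for the preconceived grouping is compared with the ratios obtained for random groupings. The water-quality data and site labels here are simulated placeholders, and scikit-learn's linear discriminant analysis stands in for the original implementation.

```python
# Minimal sketch of the CCDA idea: compare the LDA correct-classification
# ratio of a preconceived grouping of sampling sites with the ratios obtained
# for random groupings.  Data and labels below are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def correct_ratio(X, labels):
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    return (lda.predict(X) == labels).mean()

def ccda_homogeneity_test(X, labels, n_random=999, alpha=0.05, seed=0):
    """Return (not_homogeneous, observed ratio, random 95th-percentile ratio)."""
    rng = np.random.default_rng(seed)
    observed = correct_ratio(X, labels)
    random_ratios = [correct_ratio(X, rng.permutation(labels))
                     for _ in range(n_random)]
    threshold = np.quantile(random_ratios, 1 - alpha)
    return observed > threshold, observed, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # two "sampling sites", 100 samples each, 4 water-quality parameters
    site_a = rng.normal(0.0, 1.0, (100, 4))
    site_b = rng.normal(0.8, 1.0, (100, 4))
    X = np.vstack([site_a, site_b])
    labels = np.array([0] * 100 + [1] * 100)
    print(ccda_homogeneity_test(X, labels))
```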
Procedia PDF Downloads 349207 Exploring the Neural Correlates of Different Interaction Types: A Hyperscanning Investigation Using the Pattern Game
Authors: Beata Spilakova, Daniel J. Shaw, Radek Marecek, Milan Brazdil
Abstract:
Hyperscanning affords a unique insight into the brain dynamics underlying human interaction by simultaneously scanning two or more individuals’ brain responses while they engage in dyadic exchange. This provides an opportunity to observe dynamic brain activations in all individuals participating in the interaction, and possible interbrain effects among them. The present research aims to provide an experimental paradigm for hyperscanning research capable of distinguishing among different forms of interaction. Specifically, the goal was to distinguish between two dimensions: (1) interaction structure (concurrent vs. turn-based) and (2) goal structure (competition vs. cooperation). Dual-fMRI was used to scan 22 pairs of participants - each pair matched on gender, age, education and handedness - as they played the Pattern Game. In this simple interactive task, one player attempts to recreate a pattern of tokens while the second player must either help (cooperation) or prevent the first from achieving the pattern (competition). Each pair played the game iteratively, alternating their roles every round. The game was played in two consecutive sessions: first the players took sequential turns (turn-based), but in the second session they placed their tokens concurrently (concurrent). Conventional general linear model (GLM) analyses revealed activations throughout a diffuse collection of brain regions: the cooperative condition engaged the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC); in the competitive condition, significant activations were observed in frontal and prefrontal areas, the insular cortices and the thalamus. Comparisons between the turn-based and concurrent conditions revealed greater precuneus engagement in the former. Interestingly, the mPFC, PCC and insulae are repeatedly linked to social cognitive processes. Similarly, the thalamus is often associated with cognitive empathy; thus its activation may reflect the need to predict the opponent’s upcoming moves. Frontal and prefrontal activation most likely represents the higher attentional and executive demands of the concurrent condition, whereby subjects must simultaneously observe their co-player and place their own tokens accordingly. The activation of the precuneus in the turn-based condition may be linked to self-other distinction processes. Finally, by performing intra-pair correlations of brain responses, we demonstrate condition-specific patterns of brain-to-brain coupling in the mPFC and PCC. Moreover, the degree of synchronicity in these neural signals was related to performance on the game. The present results, then, show that different types of interaction recruit different brain systems implicated in social cognition, and that the degree of inter-player synchrony within these brain systems is related to the nature of the social interaction.Keywords: brain-to-brain coupling, hyperscanning, pattern game, social interaction
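A minimal sketch of the intra-pair coupling measure described above, i.e. the correlation between the two players' ROI time courses computed per dyad; the time series are simulated placeholders.

```python
# Minimal sketch of intra-pair (brain-to-brain) coupling: the correlation
# between the two players' ROI time courses of one dyad.  Time series here
# are simulated placeholders, not fMRI data.
import numpy as np

def pair_coupling(ts_player1: np.ndarray, ts_player2: np.ndarray) -> float:
    """Pearson correlation between two ROI time courses of one dyad."""
    return float(np.corrcoef(ts_player1, ts_player2)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n_vols = 300                                  # assumed fMRI volumes per session
    shared = rng.normal(0, 1, n_vols)             # condition-driven component
    couplings = []
    for _ in range(22):                           # 22 dyads, as in the study
        p1 = 0.5 * shared + rng.normal(0, 1, n_vols)
        p2 = 0.5 * shared + rng.normal(0, 1, n_vols)
        couplings.append(pair_coupling(p1, p2))
    print(np.mean(couplings))                     # mean coupling across dyads
```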
Procedia PDF Downloads 340206 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy
Authors: Ye Zhang, Chuanjun Liu
Abstract:
In-service aircraft are exposed to different types of threats, e.g. bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, either metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surfaces of composite structures. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation to establish the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of impact threat and the realistic impact energy cut-off levels are essential prerequisites for the certification of aircraft composite structures. A new survey of the impact threat to in-service civil aircraft has recently been carried out based on field records concerning around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damage cases caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of impact threat. With the impact damage events from the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-calculate the impact energy based on the visible damage characteristics. The relationship between the visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to follow a log-linear relationship with the impact energy. For the product of the two aforementioned probabilities, Po and Pe, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cutoff level of realistic impact energy was estimated as 34 J. In summary, a new survey was recently conducted on field records of civil aircraft to investigate the probabilistic distribution of impact threat. Based on the data, the two probabilities Po and Pe were obtained. Considering a conservative assumption for Pa, the cutoff level for the realistic impact energy has been determined, which provides potential applicability in the damage tolerance certification of future civil aircraft.Keywords: composite structure, damage tolerance, impact threat, probabilistic
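A minimal sketch of the probability arithmetic described above: with Pa = Po × Pe and the surveyed Po, the required exceedance probability Pe follows, and an assumed log-linear exceedance model can be inverted for a cutoff energy. The model coefficients a and b are hypothetical; the survey's own fit gave a 34 J cutoff.

```python
# Minimal sketch of the cutoff-energy arithmetic: Pa = Po * Pe gives the
# required Pe, and an assumed log-linear model log10(Pe) = a - b * E is
# inverted for E.  The coefficients a and b below are hypothetical.
import math

PO_PER_FH = 2.1e-4       # probability of impact damage occurrence per flight hour
PA_TARGET = 1.0e-5       # assumed overall probability (Limit Load-like level)

def required_pe(pa: float = PA_TARGET, po: float = PO_PER_FH) -> float:
    return pa / po

def cutoff_energy(pe: float, a: float = 0.0, b: float = 0.04) -> float:
    """Invert the assumed log-linear model log10(Pe) = a - b * E for E [J]."""
    return (a - math.log10(pe)) / b

if __name__ == "__main__":
    pe = required_pe()
    print(f"required Pe = {pe:.3f}")
    print(f"cutoff energy = {cutoff_energy(pe):.1f} J (with assumed a, b)")
```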
Procedia PDF Downloads 308205 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant
Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro
Abstract:
The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR), in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion system applications, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only come through extensive test campaigns. Following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion, characterized by adequate sizes and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximate method based on experimental data by Baumann and Coney, included in “Noise and Vibration Control Engineering” edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustic medium, analyzes the jet noise in detail and assumes all sources acting at the same time. It considers as the principal radiation source the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium. Empirical models, allowing a direct calculation of the Sound Pressure Level, are commonly used for rocket noise simulation. The model named after K. Eldred is probably one of the most exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustic impact of the Hyprob facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of source correlation effects and of their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering, and a commercial code for industrial environmental noise impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factors. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.Keywords: acoustic impact, industrial noise, mitigation, rocket noise
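A minimal sketch of the first-cut estimate underlying Eldred-type rocket-noise models: a small acoustic efficiency times the jet mechanical power gives the radiated sound power, and spherical spreading gives an overall SPL at a receiver. The efficiency, mass flow and exhaust velocity are hypothetical, and directivity and spectral content (central to the actual model) are ignored.

```python
# Minimal sketch of a first-cut rocket-noise estimate: acoustic power as a
# fraction (efficiency) of jet mechanical power, then free-field spherical
# spreading.  Inputs are hypothetical; directivity and spectrum are ignored.
import math

W_REF = 1e-12            # reference acoustic power [W]

def acoustic_power(mdot_kg_s: float, u_e_m_s: float, eta: float = 0.005) -> float:
    """Radiated acoustic power as a fraction eta of the jet mechanical power."""
    return eta * 0.5 * mdot_kg_s * u_e_m_s ** 2

def oaspl_free_field(w_acoustic: float, r_m: float) -> float:
    """Overall SPL [dB] at distance r assuming spherical spreading."""
    lw = 10.0 * math.log10(w_acoustic / W_REF)     # sound power level
    return lw - 20.0 * math.log10(r_m) - 11.0      # minus 10*log10(4*pi*r^2)

if __name__ == "__main__":
    w = acoustic_power(mdot_kg_s=2.0, u_e_m_s=2500.0)   # hypothetical thruster
    for r in (10.0, 50.0, 200.0):
        print(f"r = {r:5.0f} m  OASPL = {oaspl_free_field(w, r):5.1f} dB")
```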
Procedia PDF Downloads 146204 Deciphering Tumor Stroma Interactions in Retinoblastoma
Authors: Rajeswari Raguraman, Sowmya Parameswaran, Krishnakumar Subramanian, Jagat Kanwar, Rupinder Kanwar
Abstract:
Background: The tumor microenvironment has been implicated in several cancers in regulating cell growth, invasion and metastasis, ultimately influencing the outcome of therapy. The tumor stroma consists of multiple cell types that are in constant cross-talk with the tumor cells to favour a pro-tumorigenic environment. Not much is known about the existence of a tumor microenvironment in the pediatric intraocular malignancy retinoblastoma (RB). In the present study, we aim to understand the multiple stromal cellular subtypes and the tumor-stroma interactions present in RB tumors. Materials and Methods: Immunohistochemistry for the stromal cell markers CD31, CD68, alpha-smooth muscle actin (α-SMA), vimentin and glial fibrillary acidic protein (GFAP) was performed on formalin-fixed paraffin-embedded tissue sections of RB (n=12). The differential expression of the stromal target molecules fibroblast activation protein (FAP), tenascin-C (TNC), osteopontin (SPP1), bone marrow stromal antigen 2 (BST2), and stromal derived factor 2 and 4 (SDF2 and SDF4) in primary RB tumors (n=20) and normal retina (n=5) was studied by quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) and Western blotting. The differential expression was correlated with the histopathological features of RB. The interaction between RB cell lines (Weri-Rb-1, NCC-RbC-51) and bone marrow stromal cells (BMSC) was also studied using direct and indirect co-culture methods. The functional effect of the co-culture methods on the RB cells was evaluated by invasion and proliferation assays. Global gene expression was studied using the Affymetrix 3’ IVT microarray. Pathway prediction was performed using KEGG, and the key molecules were validated using qRT-PCR. Results: The immunohistochemistry revealed the presence of several stromal cell types, such as endothelial cells (CD31+; Vim+/-), macrophages (CD68+; Vim+/-), fibroblasts (Vim+; CD31-; CD68-), myofibroblasts (α-SMA+/Vim+) and invading retinal astrocytes/differentiated retinal glia (GFAP+; Vim+). A characteristic distribution of these stromal cell types was observed in the tumor microenvironment, with endothelial cells predominantly seen in blood vessels and macrophages near actively proliferating tumor or necrotic areas. Retinal astrocytes and glia were predominant near the optic nerve regions in invasive tumors, with a sparse distribution in tumor foci. Fibroblasts were widely distributed, with rare evidence of myofibroblasts in the tumor. Both gene and protein expression revealed statistically significant (P<0.05) up-regulation of FAP, TNC and BST2 in primary RB tumors compared to the normal retina. Co-culture of BMSC with RB cells promoted invasion and proliferation of RB cells in the direct and indirect contact methods, respectively. Direct co-culture of RB cell lines with BMSC resulted in gene expression changes in ECM-receptor interaction, focal adhesion, IL-8 and TGF-β signaling pathways associated with cancer. In contrast, various metabolic pathways such as glucose, fructose and amino acid metabolism were significantly altered under the indirect co-culture condition. Conclusion: The study suggests that the close interaction between RB cells and the stroma might be involved in RB tumor invasion and progression, which is likely to be mediated by ECM-receptor interactions and secretory factors. Targeting the tumor stroma would be an attractive option for redesigning treatment strategies for RB.Keywords: gene expression profiles, retinoblastoma, stromal cells, tumor microenvironment
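A minimal sketch of the 2^(-ΔΔCt) fold-change calculation commonly used for the kind of qRT-PCR comparison reported above (target gene in tumor vs. normal retina, normalized to a housekeeping gene); all Ct values are hypothetical placeholders.

```python
# Minimal sketch of the 2^(-ddCt) fold-change calculation for qRT-PCR:
# the target gene is normalized to a housekeeping (reference) gene in each
# sample, then tumor is compared with normal retina.  Ct values below are
# hypothetical placeholders.
def fold_change(ct_target_tumor: float, ct_ref_tumor: float,
                ct_target_normal: float, ct_ref_normal: float) -> float:
    d_ct_tumor = ct_target_tumor - ct_ref_tumor
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # e.g. a hypothetical FAP measurement: lower normalized Ct in tumor
    # than in normal retina indicates up-regulation (fold change > 1)
    print(fold_change(ct_target_tumor=24.0, ct_ref_tumor=18.0,
                      ct_target_normal=27.5, ct_ref_normal=18.2))
```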
Procedia PDF Downloads 385