Search results for: single packaged
1159 The Determinants of Co-Production for Value Co-Creation: Quadratic Effects
Authors: Li-Wei Wu, Chung-Yu Wang
Abstract:
Recently, interest has been generated in the search for a new reference framework for value creation that is centered on the co-creation process. Co-creation implies cooperative value creation between service firms and customers and requires the building of experiences as well as the resolution of problems through the combined effort of the parties in the relationship. For customers, values are always co-created through their participation in services. Customers can ultimately determine the value of the service in use. This new approach emphasizes that a customer's participation in the service process is indispensable to value co-creation. An important feature of service in the context of exchange is co-production, which implies that a certain amount of participation is needed from customers to co-produce a service and hence co-create value. Co-production no doubt helps customers better understand and take charge of their own roles in the service process. This study therefore proposes encouraging co-production, thereby facilitating value co-creation that is reflected in both customers and service firms. Four determinants of co-production are identified in this study, namely, commitment, trust, asset specificity, and decision-making uncertainty. Commitment is an essential dimension that directly results in successful cooperative behaviors. Trust helps establish a relational environment that is fundamental to cross-border cooperation. Asset specificity motivates co-production because this determinant may enhance return on asset investment. Decision-making uncertainty prompts customers to collaborate with service firms in making decisions. In other words, customers adjust their roles and are increasingly engaged in co-production when commitment, trust, asset specificity, and decision-making uncertainty are enhanced. 
Although studies have examined the preceding effects, to the best of our knowledge, none has empirically examined the simultaneous effects of all these curvilinear relationships in a single study. When these determinants are excessive, however, customers will not engage in the co-production process. In brief, we suggest that the relationships of commitment, trust, asset specificity, and decision-making uncertainty with co-production are curvilinear, that is, inverted U-shaped. These new forms of curvilinear relationships have not been identified in the existing literature on co-production; therefore, they complement extant linear approaches. Most importantly, we aim to consider both the bright and the dark sides of the determinants of co-production.
Keywords: co-production, commitment, trust, asset specificity, decision-making uncertainty
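The inverted-U hypothesis above is usually tested by adding a quadratic term to an ordinary least-squares fit and checking for a negative coefficient on the squared predictor. The sketch below is our illustration, not the authors' analysis; the variable names and data are hypothetical.

```python
def fit_quadratic(x, y):
    """Fit y = b0 + b1*x + b2*x^2 by solving the normal equations."""
    n = len(x)
    sx = lambda p: sum(v ** p for v in x)
    # X'X for the design matrix [1, x, x^2]
    a = [[n,     sx(1), sx(2)],
         [sx(1), sx(2), sx(3)],
         [sx(2), sx(3), sx(4)]]
    # X'y
    b = [sum(y),
         sum(xi * yi for xi, yi in zip(x, y)),
         sum(xi * xi * yi for xi, yi in zip(x, y))]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            a[r] = [arj - f * aij for arj, aij in zip(a[r], a[i])]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(a[i][j] * coef[j]
                              for j in range(i + 1, 3))) / a[i][i]
    return coef  # b0, b1, b2

# Synthetic inverted-U: co-production peaks at a moderate trust level.
trust = [float(t) for t in range(11)]
coprod = [10 - (t - 5.0) ** 2 for t in trust]
b0, b1, b2 = fit_quadratic(trust, coprod)
turning_point = -b1 / (2 * b2)  # trust level where co-production peaks
```

A significantly negative b2 is the usual statistical evidence for a "too much of a good thing" effect; the peak of the fitted curve sits at -b1/(2*b2).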
Procedia PDF Downloads 187
1158 Knowledge and Utilization of Mammography among Undergraduate Female Students in a Nigerian University
Authors: Ali Arazeem Abdullahi, Mariam Seedat-Khan, Bamidele S. Akanni
Abstract:
Background: Like the rest of the world, cancer of the breast is a life-threatening disease for Nigerian women. The utilization of mammography is, however, very poor among the general population, even though there are strong indications that women who undergo regular breast cancer screening using mammography have a lower risk of developing and dying from advanced breast cancer compared to unscreened women. This study examined knowledge of breast cancer and utilization of mammography among undergraduate female students at the University of Ilorin, Nigeria. The Health Belief Model (HBM) was deployed to guide the conduct of the study. Method: A self-administered questionnaire was used to collect data from 292 undergraduate female students from the faculties of Social and Management Sciences of the University. A simple random sampling technique was used to select the respondents. Data were analyzed using both descriptive and inferential statistics. Results: The study found that, apart from high knowledge of breast cancer and mammography, perceived threat, perceived susceptibility and perceived seriousness of breast cancer were equally high. However, the uptake of mammography was very poor, largely due to perceived barriers including being single and young and a poor history of breast cancer in families (cues to action). The test of hypotheses showed a weak relationship of about 6.8% between knowledge of breast cancer and utilization of mammography (p-value = 0.244) at the 0.05 level of significance. However, 64.4% of the respondents were willing to utilize mammography in the future if the opportunity arises. While the study found a statistically significant relationship between the perceived benefits of mammography and its utilization among the respondents, no significant statistical association was found between the socio-demographic characteristics of the respondents and the uptake of mammography. 
Recommendations: Findings highlight the need for health education interventions to promote breast cancer screening and the utilization of mammography, while addressing barriers to the uptake of mammography among female undergraduate students of the University of Ilorin and Nigeria in general.
Keywords: cancer of the breast, mammography, female undergraduate students, health belief model, University of Ilorin
Procedia PDF Downloads 241
1157 Clinicians’ Experiences with IT Systems in a UK District General Hospital: A Qualitative Analysis
Authors: Sunny Deo, Eve Barnes, Peter Arnold-Smith
Abstract:
Introduction: Healthcare technology is a rapidly expanding field, with enthusiasts suggesting a revolution in the quality and efficiency of healthcare delivery based on the utilisation of better e-healthcare, including the move to paperless healthcare. The role and use of computers and programmes for healthcare have been increasing over the past 50 years. Despite this, there is no standardised method of assessing the quality of hardware and software utilised by frontline healthcare workers. Methods and subjects: Based on standard Patient-Reported Outcome Measures, a questionnaire was devised with the aim of providing quantitative and qualitative data on clinicians’ perspectives of their hospital’s Information Technology (IT). The survey was distributed via the institution’s intranet to all contracted doctors, and the survey's qualitative results were analysed. Qualitative opinions were grouped as positive, neutral, or negative and further sub-grouped into speed/usability, software/hardware, integration, IT staffing, clinical risk, and wellbeing. Analysis was undertaken by doctor seniority and by specialty. Results: There were 196 responses, with 51% from senior doctors (consultant grades) and the rest from junior grades, and with the largest group of respondents (52%) coming from medical specialties. Differences in the proportions of principal and sub-groups were noted by seniority and specialty. Negative themes were by far the commonest opinion type, occurring in almost two-thirds of responses (63%), while positive comments occurred in fewer than 1 in 10 (8%). Conclusions: This survey confirms strongly negative attitudes to the current state of electronic documentation and IT in a large single-centre cohort of hospital-based frontline physicians after two decades of so-called progress towards a paperless healthcare system. 
Wider use of the survey would provide further insights and potentially optimise the focus of development and delivery to improve the quality and effectiveness of IT for clinicians and their patients.
Keywords: information technology, electronic patient records, digitisation, paperless healthcare
Procedia PDF Downloads 90
1156 Decoding the Construction of Identity and Struggle for Self-Assertion in Toni Morrison and Selected Indian Authors
Authors: Madhuri Goswami
Abstract:
The matrix of power establishes the hegemonic dominance and supremacy of one group through the exercise of repression and relegation upon the other. However, the injustice done to any race, ethnicity, or caste has instigated protest and resistance through various modes -social campaigns, political movements, literary expression and so on. Consequently, the search for identity, the means of claiming it, and the striving for recognition have evolved as persistent phenomena throughout the world. Within protest and minority literature, two such discourses -African American and Indian Dalit- surprisingly share wrath and anger, hope and aspiration, a quest for identity and a struggle for self-assertion. African Americans and Indian Dalits are two geographically and culturally distant communities that stand together on a single platform. This paper seeks to comprehend the form and investigate the formation of identity in general and in the literary work of Toni Morrison and Indian Dalit writing in particular, i.e., Black identity and Dalit identity. The study considers two types of identity, namely, individual or self-identity and social or collective identity, in the literary province of these marginalized literatures. Morrison’s work shows that self-identity is not merely a reflection of an inner essence; it is constructed through social circumstances and relations. Likewise, Dalit writings too have a fair record of the discovery of selfhood and the formation of identity, which connects to the realization of self-assertion and the worthiness of their culture among Dalit writers. Bama, Pawar, Limbale, Pawde, and Kamble investigate their true selves concealed amid societal alienation. The study has found that the struggle for recognition is, in fact, the striving to become the definer, instead of just being defined; and this striving eventually leads to introspection among them. 
To conclude, Morrison as well as the Indian marginalized authors, despite being set quite distant from one another, communicate the relation between individual and community in the context of self-consciousness, self-identification and (self) introspection. This research opens scope for further work to find similar phenomena and trace analogies in other world literatures.
Keywords: identity, introspection, self-access, struggle for recognition
Procedia PDF Downloads 151
1155 Evaluation of Complications Observed in Porcelain Fused to Metal Crowns Placed at a Teaching Institution
Authors: Shizrah Jamal, Robia Ghafoor, Farhan Raza
Abstract:
Porcelain-fused-to-metal (PFM) crowns are the most versatile variety of crown and are commonly placed worldwide. Various complications have been reported in PFM crowns with use over time. These include chipping of the porcelain, recurrent caries, loss of retention, open contacts, and tooth fracture. The objective of the present study was to determine the frequency of these complications in crowns cemented over a period of five years in a tertiary care hospital and also to report the survival of these crowns. A retrospective study was conducted in the dental clinics of Aga Khan University Hospital, in which 150 PFM crowns cemented over a period of five years were evaluated. Patient demographics, oral hygiene habits, para-functional habits, and crown insertion and follow-up dates were recorded in a specially designed proforma. All PFM crowns fulfilling the inclusion criteria were assessed both clinically and radiographically for the presence of any complication. SPSS version 22.0 was used for statistical analysis. Frequency distributions and proportions of complications were determined. The chi-square test was used to determine the association of PFM crown complications with multiple variables including tooth wear, opposing dentition and betel nut chewing. Kaplan-Meier survival analysis was used to determine the survival of PFM crowns over the period of five years. The level of significance was kept at 0.05. A total of 107 patients, with a mean age of 43.51 ± 12.4 years and 150 PFM crowns, were evaluated. The most common complication observed was open proximal contacts (8.7%), followed by porcelain chipping (6%), decementation (5.3%), and abutment fracture (1.3%). The chi-square test showed no statistically significant association of PFM crown complications with tooth wear, betel nut chewing or opposing dentition (p-value > 0.05). The overall success and survival rates of PFM crowns were 78.7% and 84.7%, respectively. 
Within the limitations of the study, it can be concluded that PFM crowns are an effective treatment modality with high success and survival rates. Since this was a single-center study, the results should be generalized with caution.
Keywords: chipping, complication, crown, survival rate
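The survival figures above come from SPSS's Kaplan-Meier routine; as an illustration of the estimator itself (our sketch, not the study's code), the product-limit calculation is short enough to write out directly. The follow-up data below are hypothetical.

```python
def kaplan_meier(observations):
    """Product-limit estimator.

    observations: list of (time, event) pairs, event=1 for a failure
    (e.g. decementation), event=0 for a crown censored at last follow-up.
    Returns [(time, survival_probability)] at each distinct failure time.
    """
    obs = sorted(observations)
    s = 1.0
    curve = []
    for t in sorted({t for t, e in obs if e == 1}):
        at_risk = sum(1 for ti, _ in obs if ti >= t)   # still under observation
        failures = sum(1 for ti, e in obs if ti == t and e == 1)
        s *= 1.0 - failures / at_risk                  # multiply step survival
        curve.append((t, s))
    return curve

# Hypothetical follow-up data in years: 1 = failed, 0 = censored.
crowns = [(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)]
curve = kaplan_meier(crowns)
```

Censored crowns (still in service at last follow-up) contribute to the at-risk count without forcing a drop in the survival curve, which is why the method suits retrospective cohorts with uneven follow-up.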
Procedia PDF Downloads 207
1154 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language
Authors: Leo Laine, Morgan Johansson
Abstract:
To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a specified design-level explosion detonating at a certain distance from the structure. In addition, the complex civil defence shelter needs functioning air filter systems to protect against toxic gases and provide clean air; clean water, heat, and electricity must also be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated living or office area. In this paper, the authors use the multidomain modelling language Modelica to model a concrete wall as a single degree of freedom (SDOF) system with elastoplastic properties and an implemented option of plastic hardening. The elastoplastic model was developed and implemented in the open source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall. The concrete wall is approximated as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica results were compared with an Excel implementation of an SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other in terms of plastic displacement magnitude, elastic oscillation displacement, and response times.
Keywords: airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure
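The Excel reference model described above can be sketched in a few lines. This is our illustration of the general scheme (an undamped SDOF with an elastic-perfectly-plastic spring, integrated by the fixed-step central difference method), not the paper's implementation; all parameter values are illustrative, not the wall's properties.

```python
def elastoplastic_sdof(load, m, k, r_max, dt):
    """Central-difference response history of an undamped SDOF system
    with an elastic-perfectly-plastic spring (no hardening)."""
    up = 0.0                                  # accumulated plastic offset
    u = 0.0                                   # displacement at current step
    u_prev = (load[0] / m) * dt ** 2 / 2.0    # fictitious u at t = -dt (zero ICs)
    hist = [u]
    for p in load[:-1]:
        r = k * (u - up)                      # trial spring resistance
        if r > r_max:                         # yielding in tension
            r = r_max
            up = u - r / k
        elif r < -r_max:                      # yielding in compression
            r = -r_max
            up = u - r / k
        # Central difference: m*(u+ - 2u + u-)/dt^2 + R(u) = p
        u_next = 2.0 * u - u_prev + dt ** 2 * (p - r) / m
        u_prev, u = u, u_next
        hist.append(u)
    return hist

# Step load on a system that stays elastic: the classic dynamic
# amplification result says the peak displacement approaches 2*p0/k.
p0, m, k = 1.0, 1.0, 100.0
hist = elastoplastic_sdof([p0] * 101, m, k, r_max=10.0, dt=0.01)
peak = max(hist)   # should be close to 2*p0/k = 0.02
```

Lowering r_max below 2*p0 (but above p0) makes the spring yield on the first excursion, after which the system oscillates elastically about a shifted equilibrium, which is exactly the plastic-displacement behavior the paper compares between OpenModelica and Excel.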
Procedia PDF Downloads 129
1153 Light Weight Fly Ash Based Composite Material for Thermal Insulation Applications
Authors: Bharath Kenchappa, Kunigal Shivakumar
Abstract:
Lightweight, low-thermal-conductivity, high-temperature-resistant materials, or material systems with moderate mechanical properties capable of withstanding high heating rates, are needed in both commercial and military applications. A single material with all these attributes is very difficult to find, and one needs to come up with innovative ideas to build such a material system from what is available. To bring down the cost of the system, one has to be conscious of the cost of the basic materials. Such a material system can be called a thermal barrier system. This paper focuses on developing, testing and characterizing a material system for thermal barrier applications. The material developed is porous, with low density, a low thermal conductivity of 0.1062 W/m·°C and a glass transition temperature of about 310 °C. The thermal properties of the developed material were measured in both the longitudinal and thickness directions to highlight the fact that the material shows isotropic behavior. The material is called modified Eco-Core and uses less than 9% by weight of high-char resin in the composite. The filler (reinforcing material) is a component of fly ash called cenospheres: hollow micro-bubbles made of ceramic materials. A special mixing technique is used to surface-coat the fillers with a thin layer of resin to develop point-to-point contact between particles. One could use commercial ceramic micro-bubbles instead of cenospheres, but they are expensive. The bulk density of cenospheres is about 0.35 g/cc, and a composite density of about 0.4 g/cc was accomplished. One percent by filler weight of 3 mm standard drywall-grade fibers was used to add toughness. Both thermal and mechanical characterization were performed and the properties documented. For higher temperature applications (up to 1,000 °C), a hybrid system was developed using an aerogel mat. Properties of the combined material were characterized and documented. 
Thermal tests were conducted on both the bare modified Eco-Core and the hybrid material to assess the suitability of the material for a thermal barrier application. The hybrid material system was found to meet the requirements of the application.
Keywords: aerogel, fly ash, porous material, thermal barrier
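As a rough consistency check (ours, not the authors' calculation), an inverse rule of mixtures relates the quoted constituent densities to the reported composite density of about 0.4 g/cc. The resin density below is an assumed typical value for a high-char resin, and packing voids are ignored, so only approximate agreement should be expected.

```python
def composite_density(weight_fractions, densities):
    """Inverse rule of mixtures: 1/rho_c = sum(w_i / rho_i)."""
    return 1.0 / sum(w / rho for w, rho in zip(weight_fractions, densities))

rho_c = composite_density(
    [0.91, 0.09],   # ~91 wt% cenospheres, <9 wt% high-char resin (from text)
    [0.35, 1.25],   # g/cc: quoted cenosphere bulk density / assumed resin density
)
# rho_c comes out near 0.37 g/cc, in the neighborhood of the reported 0.4 g/cc;
# the gap is plausibly due to porosity and the bulk-vs-true density of the filler.
```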
Procedia PDF Downloads 109
1152 Climate Change Adaptation: Methodologies and Tools to Define Resilience Scenarios for Existing Buildings in Mediterranean Urban Areas
Authors: Francesca Nicolosi, Teresa Cosola
Abstract:
Climate changes in Mediterranean areas, such as the increase in average seasonal temperatures, the urban heat island phenomenon, the intensification of solar radiation and extreme weather threats, cause disruptive events, so that climate adaptation has become a pressing issue. Due to the strategic role that the built heritage holds in terms of environmental impact and energy waste, and due to its potential, it is necessary to assess the vulnerability and adaptive capacity of existing buildings to climate change in order to define different mitigation scenarios. The aim of this research work is to define an optimized and integrated methodology for the assessment of resilience levels and adaptation scenarios for existing buildings in Mediterranean urban areas. Moreover, the study of resilience indicators allows us to define building environmental and energy performance in order to identify the design and technological solutions for the improvement of the building and the potential of its urban area. The methodology identifies different phases step by step, starting from the detailed study of the characteristic elements of the urban system: climatic, natural, human, typological and functional components are analyzed in their critical factors and their potential. Through the identification of the main perturbing factors and the degree of vulnerability of the system to the risks linked to climate change, it is possible to define mitigation and adaptation scenarios. These can differ according to the typological, functional and constructive features of the analyzed system, can be divided into categories of intervention, and are characterized by different analysis levels (from the single building to the urban area). 
The use of software simulations makes it possible to obtain information on the overall behavior of the building and the urban system, to generate predictive models for medium- and long-term environmental and energy retrofits, and to make a comparative study of the mitigation scenarios identified. The proposed methodology is validated on a case study.
Keywords: climate impact mitigation, energy efficiency, existing building heritage, resilience
Procedia PDF Downloads 238
1151 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS) or a Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies to select and scale ground motions with the objective of obtaining robust estimates of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single degree of freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, results are compared with those obtained by a CMS-based scaling methodology. 
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
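The scaling step described above minimizes an error between a scaled spectrum and the target. A common closed form for this kind of problem, sketched below as our assumed illustration (not necessarily the paper's exact objective function), works in log-spectral space: the least-squares optimal scale factor is the exponential of the mean log ratio between target and record ordinates over the period interval.

```python
import math

def optimal_scale_factor(record_sa, target_sa):
    """Scale factor f minimizing sum over periods of
    (ln(f * Sa_record(T)) - ln(Sa_target(T)))^2.
    Setting the derivative to zero gives ln f = mean(ln(target/record))."""
    log_ratios = [math.log(t / r) for r, t in zip(record_sa, target_sa)]
    return math.exp(sum(log_ratios) / len(log_ratios))

# Hypothetical spectral accelerations (g) at a few periods in the
# matching interval, for one record versus the target spectrum.
record = [0.50, 0.40, 0.25]
target = [1.00, 0.80, 0.50]
f = optimal_scale_factor(record, target)
```

Because amplitude scaling shifts the whole log spectrum uniformly, it changes the median of an ensemble without altering the record-to-record dispersion, which is the property the proposed procedure relies on to preserve variability along the period interval.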
Procedia PDF Downloads 582
1150 Development of Knitted Seersucker Fabric for Improved Comfort Properties
Authors: Waqas Ashraf, Yasir Nawab, Haritham Khan, Habib Awais, Shahbaz Ahmad
Abstract:
Seersucker is a popular lightweight fabric widely used in men’s and women’s suiting, casual wear, children’s clothing, house robes, bedspreads and spring and summer wear. The puckered effect generates air spaces between the body and the fabric, keeping the wearer cool in hot conditions. The aim of this work was to develop a knitted seersucker fabric on a single-cylinder weft knitting machine using a plain jersey structure. Core-spun cotton yarn and cotton spun yarn of the same linear density were used. The core-spun cotton yarn contains cotton fiber in the sheath and an elastane filament in the core. The two yarns were fed at regular intervals to the feeders on the machine. The loop length and yarn tension were kept constant at each feeder. The samples were then scoured and bleached. After wet processing, the fabric samples were washed and tumble dried. Parameters like loop length, stitch density and areal density were measured after conditioning the samples for 24 hours under standard atmospheric conditions. The produced sample has regular puckering stripes of uniform height along the width of the fabric. The stitch densities of the flat and puckered areas of the relaxed fabric were found to differ. Air permeability and moisture management tests were performed. The results indicated that the knitted seersucker fabric has better wicking and moisture management properties, as the flat areas contact the skin whereas the puckered areas are held away from it. The seersucker effect in the knitted fabric was achieved through the difference in contraction between the two sets of courses produced from the different types of yarn. Seersucker fabric produced by the knitting technique is less expensive than woven seersucker fabric, as there is no need for yarn preparation. The knitted seersucker fabric is also practical for summer dresses, skirts, blouses, shirts, trousers and shorts.
Keywords: air permeability, knitted structure, moisture management, seersucker
Procedia PDF Downloads 324
1149 Generalized Correlation Coefficient in Genome-Wide Association Analysis of Cognitive Ability in Twins
Authors: Afsaneh Mohammadnejad, Marianne Nygaard, Jan Baumbach, Shuxia Li, Weilong Li, Jesper Lund, Jacob v. B. Hjelmborg, Lene Christensen, Qihua Tan
Abstract:
Cognitive impairment in the elderly is a key issue affecting the quality of life. Despite a strong genetic background in cognition, only a limited number of single nucleotide polymorphisms (SNPs) have been found. These explain a small proportion of the genetic component of cognitive function, thus leaving a large proportion unaccounted for. We hypothesize that one reason for this missing heritability is the misspecified modeling in data analysis concerning phenotype distribution as well as the relationship between SNP dosage and the phenotype of interest. In an attempt to overcome these issues, we introduced a model-free method based on the generalized correlation coefficient (GCC) in a genome-wide association study (GWAS) of cognitive function in twin samples and compared its performance with two popular linear regression models. The GCC-based GWAS identified two genome-wide significant (P-value < 5e-8) SNPs; rs2904650 near ZDHHC2 on chromosome 8 and rs111256489 near CD6 on chromosome 11. The kinship model also detected two genome-wide significant SNPs, rs112169253 on chromosome 4 and rs17417920 on chromosome 7, whereas no genome-wide significant SNPs were found by the linear mixed model (LME). Compared to the linear models, more meaningful biological pathways like GABA receptor activation, ion channel transport, neuroactive ligand-receptor interaction, and the renin-angiotensin system were found to be enriched by SNPs from GCC. The GCC model outperformed the linear regression models by identifying more genome-wide significant genetic variants and more meaningful biological pathways related to cognitive function. Moreover, GCC-based GWAS was robust in handling genetically related twin samples, which is an important feature in handling genetic confounding in association studies.
Keywords: cognition, generalized correlation coefficient, GWAS, twins
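The GCC itself is not specified in this abstract, so the sketch below is only a loose stand-in for the model-free idea, not the paper's statistic: a rank correlation (Spearman) captures a monotone but nonlinear SNP-dosage effect that a Pearson/linear fit underestimates. The dosage scores and phenotype values are hypothetical.

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Rank correlation: Pearson applied to ranks (ties not handled here)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

dosage = [0, 1, 2, 3, 4, 5]           # hypothetical genotype scores
phenotype = [d ** 3 for d in dosage]  # monotone but strongly nonlinear effect
r_linear = pearson(dosage, phenotype)   # < 1: the linear model loses power
r_rank = spearman(dosage, phenotype)    # 1.0: the monotone link is fully captured
```

This is the sense in which a model-free association measure can recover signal that a misspecified linear dose-response model dilutes, which is the paper's motivating argument.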
Procedia PDF Downloads 121
1148 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. The idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed by an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved in robustness and accuracy. Details of the methodology employed and challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
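The Horn-Schunck idea can be shown in one dimension (our toy illustration, not the authors' adapted algorithm): the flow u minimizes the brightness-constancy residual (Ix*u + It)^2 plus alpha^2 times a smoothness penalty, which leads to the fixed-point iteration u = u_avg - Ix*(Ix*u_avg + It)/(alpha^2 + Ix^2).

```python
def horn_schunck_1d(frame1, frame2, alpha=0.1, iters=50):
    """1D Horn-Schunck optical flow between two intensity profiles."""
    n = len(frame1)
    # Spatial gradient Ix: central differences, one-sided at the ends.
    ix = [frame1[min(i + 1, n - 1)] - frame1[max(i - 1, 0)] for i in range(n)]
    ix = [g / 2.0 if 0 < i < n - 1 else g for i, g in enumerate(ix)]
    it = [b - a for a, b in zip(frame1, frame2)]       # temporal gradient It
    u = [0.0] * n
    for _ in range(iters):
        # Neighbor average (clamped at the boundaries) enforces smoothness.
        ubar = [(u[max(i - 1, 0)] + u[min(i + 1, n - 1)]) / 2.0
                for i in range(n)]
        u = [ub - ixi * (ixi * ub + iti) / (alpha ** 2 + ixi ** 2)
             for ub, ixi, iti in zip(ubar, ix, it)]
    return u

# A linear intensity ramp shifted right by one pixel between frames:
# the true displacement is 1.0 everywhere.
f1 = [float(x) for x in range(8)]
f2 = [x - 1.0 for x in f1]          # f2(x) = f1(x - 1)
u = horn_schunck_1d(f1, f2)
```

The same structure carries over to 2D Schlieren image pairs, with the caveat noted in the abstract: Schlieren intensity tracks density gradients rather than passive tracers, so the recovered field needs careful validation against synthetic images before it can be read as velocity.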
Procedia PDF Downloads 311
1147 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil
Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz
Abstract:
Introduction: This study revisits the potential of two oblique wing configurations first investigated by German aerodynamicists during WWII. Because of the end of the war, that project was never completed, and this research targets the revival of the German oblique biplane configuration. The research draws upon the use of two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing and the left half like a backward-swept wing; the reverse applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential for improved aerodynamic performance and flight efficiency over a wide range of sweep angles. This will help identify the sweep angle at which the aircraft possesses both stability and the best aerodynamics. Explaining the methods used: The aircraft configuration is designed in SolidWorks, after which a series of aerodynamic predictions are conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in ANSYS Fluent. The results are then compared to theoretical and flight data of supersonic aircraft of the same category (AD-1) and with a wind tunnel model tested at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient, almost double that found for fighter jets. To reach supersonic speed, the sweep angle is increased up to a maximum of 60 degrees, depending on the mission profile. 
General findings: The oblique biplane could be a future fighter aircraft configuration because of its high performance in terms of aerodynamics, cost, structural design and weight.
Keywords: biplane, oblique wing, sweep angle, supercritical airfoil
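The aerodynamic rationale for sweeping to 60° at Mach 1.5 can be sketched with simple sweep theory (our illustration, not the paper's analysis): wave drag is governed largely by the component of the freestream Mach number normal to the leading edge, M_n = M·cos(sweep), so enough sweep keeps the leading edge effectively subsonic even in supersonic flight.

```python
import math

def normal_mach(mach, sweep_deg):
    """Freestream Mach component normal to a leading edge swept by sweep_deg."""
    return mach * math.cos(math.radians(sweep_deg))

m_takeoff = normal_mach(1.5, 0)    # unswept: the full 1.5 is felt by the wing
m_cruise = normal_mach(1.5, 60)    # swept 60 deg: normal component is 0.75
```

This is why the configuration can use a supercritical airfoil sized for transonic normal-component flow, and why the sweep schedule runs from 0° (maximum lift at takeoff) to 60° (drag reduction at supersonic cruise).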
Procedia PDF Downloads 276
1146 Developing Women Entrepreneurial Leadership: 'From Vision to Practice
Authors: Saira Maqbool, Qaisara Parveen, Muhammad Arshad Dahar
Abstract:
Improving women's involvement in management and enterprise in Pakistan requires the development of female entrepreneurs as leaders. Entrepreneurial education aims to provide students with the knowledge, aptitudes and motivation to encourage entrepreneurial achievement in various settings. Varieties of enterprise education are offered at all stages of schooling, from primary and secondary institutes through graduate programs. Entrepreneurship is here considered the process by which a prospective entrepreneur pursues opportunities without regard to the resources they currently control; this entails the entrepreneur's ability to marshal resources beyond those they own. This study explores the relationship between developing women's leadership skills and entrepreneurship education. The essential purpose of this study was to analyze the role of Entrepreneurship Education (EE) in women's leadership and in developing entrepreneurial intentions among students. The major goal was to foster entrepreneurial attitudes among PMAS Arid Agriculture University undergraduate students concerning their choice to work for themselves. The study focuses on the motivation and interest of female students in the social sciences in building entrepreneurial leadership skills. The quantitative analysis used a true-experimental, pretest-posttest control group research design. Female undergraduate students from PMAS Arid Agriculture University made up the study population. A training module was created for entrepreneurial activity. The students underwent a three-week training program at PMAS Arid Agriculture University, where they learned about entrepreneurial leadership abilities. The quantitative data were analyzed using descriptive statistics and t-tests. The findings indicated that students acquired entrepreneurial leadership skills and intentions after training. 
They have decided to launch their businesses as leaders. It is recommended that other PMAS Arid Agriculture University departments adopt the training module and course outline, as their use in this research yielded important results.

Keywords: business, entrepreneurial, intentions, leadership, women
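The pretest-posttest comparison described above rests on a two-sample t-test. A minimal sketch of that computation using Welch's t statistic, with hypothetical gain scores standing in for the study's actual data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical gain scores (post-test minus pre-test) on a leadership-skills scale.
trained = [12, 15, 9, 14, 11, 13, 16, 10]
untrained = [3, 5, 2, 6, 4, 1, 5, 3]
t_stat, dof = welch_t(trained, untrained)  # a large positive t favours the trained group
```

The t statistic would then be compared against the t distribution with `dof` degrees of freedom to obtain a p-value.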
Procedia PDF Downloads 65
1145 Magnetic Nanoparticles Coated with Modified Polysaccharides for the Immobilization of Glycoproteins
Authors: Kinga Mylkie, Pawel Nowak, Marta Z. Borowska
Abstract:
The most important proteins in human serum responsible for drug binding are human serum albumin (HSA) and α1-acid glycoprotein (AGP). The AGP molecule is a glycoconjugate containing a single polypeptide chain composed of 183 amino acids (the core of the protein) and five branched glycan chains (the sugar part) covalently linked by N-glycosidic bonds to aspartyl residues (Asp(N)-15, -38, -54, -75, -85) of the polypeptide chain. This protein plays an important role in binding basic drugs (a large group of drugs used in psychiatry), some acidic drugs (e.g., coumarin anticoagulants), and neutral drugs (steroid hormones). The main goal of the research was to obtain magnetic nanoparticles coated with biopolymers in a chemically modified form, which would have highly reactive functional groups able to effectively immobilize the glycoprotein (α1-acid glycoprotein) without losing the ability to bind active substances. The first phase of the project involved the chemical modification of the biopolymer starch. Modification of starch was carried out by methods of organic synthesis, leading to the preparation of a polymer enriched on its surface with aldehyde groups, which in the next step was coupled with 3-aminophenylboronic acid. Magnetite nanoparticles coated with starch were prepared by in situ co-precipitation and then oxidized with a 1 M sodium periodate solution to form a dialdehyde starch coating. Afterward, the reaction between the magnetite nanoparticles coated with dialdehyde starch and 3-aminophenylboronic acid was carried out. The obtained materials consist of a magnetite core surrounded by a layer of modified polymer, which contains on its surface the dihydroxyboryl groups of boronic acid, which are capable of binding glycoproteins. The magnetic nanoparticles obtained as carriers for plasma protein immobilization were fully characterized by ATR-FTIR, TEM, SEM, and DLS. The glycoprotein was immobilized on the obtained nanoparticles.
The amount of immobilized protein was determined by the Bradford method.

Keywords: glycoproteins, immobilization, magnetic nanoparticles, polysaccharides
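The Bradford method quantifies protein against a standard curve of absorbance versus known concentrations. A minimal sketch of that calibration with a linear least-squares fit; the BSA standard concentrations and absorbance readings below are hypothetical:

```python
# Hypothetical BSA standards: concentration (ug/mL) vs absorbance at 595 nm.
concs = [0.0, 2.5, 5.0, 10.0, 15.0, 20.0]
abs595 = [0.00, 0.11, 0.22, 0.45, 0.66, 0.90]

# Ordinary least-squares fit: absorbance = slope * concentration + intercept.
n = len(concs)
mean_x = sum(concs) / n
mean_y = sum(abs595) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, abs595)) \
        / sum((x - mean_x) ** 2 for x in concs)
intercept = mean_y - slope * mean_x

def protein_conc(absorbance):
    """Invert the standard curve to estimate the protein concentration of a sample."""
    return (absorbance - intercept) / slope

unknown = protein_conc(0.50)  # concentration of a sample reading A595 = 0.50
```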
Procedia PDF Downloads 128
1144 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation
Abstract:
Introduction: In mediastinal radiotherapy, and to a lesser extent in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four non-treated adult Wistar rats were selected as the control group (group A). In group B, 4 adult Wistar rats were irradiated with a single 20 Gy dose of a 150 MeV proton beam, locally to the heart only. In the heart plus lung group (group C), 4 adult rats received the same heart irradiation plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed, and the left ventricle was dropped in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat no. 28025-013). qPCR was performed on a Bio-Rad iQ5 real-time PCR machine using the relative standard curve method. Results: We found that fibronectin gene expression in group C significantly increased compared to the control group, but no significant change was seen in group B compared to group A. The mRNA expression levels of ColaA1 and ColaA2 did not show any significant changes between the normal and radiation groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression was significantly enhanced compared to group A, more so in group C than in group B.
Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.

Keywords: gene expression, heart damage, proton irradiation, radiotherapy
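The relative standard curve method named above fits measured Ct values against the log of a dilution series, then interpolates sample quantities from their Ct. A minimal sketch with hypothetical Ct values (the dilutions below assume ~100% amplification efficiency, i.e. ~3.3 cycles per 10-fold dilution):

```python
import math

# Hypothetical standard curve: serial dilutions (relative quantity) and measured Ct.
quantities = [1.0, 0.1, 0.01, 0.001]
ct_values = [18.0, 21.3, 24.6, 27.9]

# Linear fit of Ct against log10(quantity): Ct = slope * log10(q) + intercept.
logs = [math.log10(q) for q in quantities]
n = len(logs)
mx, my = sum(logs) / n, sum(ct_values) / n
slope = sum((x - mx) * (y - my) for x, y in zip(logs, ct_values)) \
        / sum((x - mx) ** 2 for x in logs)
intercept = my - slope * mx

def rel_quantity(ct):
    """Interpolate a sample's relative quantity from its Ct via the standard curve."""
    return 10 ** ((ct - intercept) / slope)

# Fold change of a target gene normalized to a reference gene,
# irradiated vs control (all four Ct values here are hypothetical).
fold_change = (rel_quantity(20.0) / rel_quantity(19.0)) \
            / (rel_quantity(22.0) / rel_quantity(19.5))
```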
Procedia PDF Downloads 488
1143 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum information or by sampling from the posterior, and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset, which is then fed to a neural network that automates the process of ability estimation, and at comparing this to traditional CAT models designed using IRT. This study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although performing poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model has to re-calculate the ability every time it gets a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set, using a neural network's capacity to learn unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions and express models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview as to how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.

Keywords: computer adaptive tests, item response theory, machine learning, neural networks
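The IRT side of the comparison rests on estimating ability from a response pattern. A minimal sketch of maximum-likelihood ability estimation under a 2PL model; the item parameters and response pattern are hypothetical, and a simple grid search stands in for the study's PyMC modelling:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of answering correctly
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items):
    """Maximum-likelihood ability estimate by grid search over theta in [-4, 4]."""
    grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Hypothetical items (discrimination a, difficulty b) and one examinee's responses.
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 0.5), (1.5, 1.0), (1.1, 1.5)]
responses = [1, 1, 1, 0, 0]  # correct on the easier items, wrong on the harder ones
theta_hat = ml_ability(responses, items)
```

A trained neural-network regressor would replace the grid search with a single forward pass, which is the time-complexity advantage the abstract describes.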
Procedia PDF Downloads 173
1142 Joubert Syndrome and Related Disorders: A Single Center Experience
Authors: Ali Al Orf, Khawaja Bilal Waheed
Abstract:
Background and objective: Joubert syndrome (JS) is a rare, autosomal-recessive condition. Early recognition is important for management and counseling. Magnetic resonance imaging (MRI) can help in diagnosis. Therefore, we sought to evaluate the clinical presentation and MRI findings in Joubert syndrome and related disorders. Method: Genetically proven cases of Joubert syndrome and related disorders from the last 10 years were retrospectively reviewed for their clinical presentation, demographic information, and MRI findings. Two radiologists documented the MRI findings. The presence of hypoplasia of the cerebellar vermis with hypoplasia of the superior cerebellar peduncles, resembling the "Molar Tooth Sign" in the midbrain, was documented. Genetic testing results were collected to identify the genes linked to the diagnoses. Results: Of the 12 genetically proven JS cases, most were females (9/12), and nearly all presented with hypotonia, ataxia, developmental delay, intellectual impairment, and speech disorders. 5/12 children presented at age 1 or below. The molar tooth sign was seen in 10/12 cases. Two cases were associated with other brain findings. Most of the cases were associated with consanguineous marriage. Conclusion and discussion: The molar tooth sign is a frequent and reliable sign of JS and related disorders. Genes related to defective cilia result in malfunction in the retina, renal tubules, and neural cell migration, thus producing the heterogeneous syndrome complexes known as "ciliopathies." Other ciliopathies, such as Senior-Loken syndrome, Bardet-Biedl syndrome, and isolated nephronophthisis, must be considered in the differential diagnosis of JS. The main imaging findings are the partial or complete absence of the cerebellar vermis, hypoplastic cerebellar peduncles (giving the molar tooth sign), and fourth ventricular deformity (bat-wing appearance).
Limitations: The single-center design, small sample size, and retrospective nature of the study were among its limitations.

Keywords: Joubert syndrome, magnetic resonance imaging, molar tooth sign, hypotonia
Procedia PDF Downloads 93
1141 Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations
Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta
Abstract:
The environmental concerns related to global warming and ozone layer depletion, along with the growing worldwide demand for heating and cooling, have brought increasing attention to ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of the European primary/final energy use, it has been identified as one of the sectors with the most challenging targets in energy use reduction. Heat pumps are commonly considered a technology able to contribute to the achievement of these targets. The current research focuses on the full-load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To this end, investigations of gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both the full-load performance under various temperature conditions and the seasonal performance are predicted by means of numerical simulations. It has been considered that small-capacity appliances are usually equipped with fixed-geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, and the vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, part of the novelty introduced in the paper is the investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309).
The overall result is a modest variation in the seasonal performance for the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).

Keywords: absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle
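The two figures of merit above can be sketched numerically: GUE is useful heat delivered over gas energy input, and a seasonal value weights the part-load GUE by how much of the season's heating demand falls at each operating point. The bin weights and part-load GUE values below are hypothetical illustrations, not the EN 12309 reference conditions:

```python
def gue(heat_output_kw, gas_input_kw):
    """Gas utilization efficiency: useful heat delivered per unit of gas energy input."""
    return heat_output_kw / gas_input_kw

# Hypothetical part-load points: (fraction of the season's heating demand, GUE there),
# loosely following the bin-weighting idea of a seasonal performance calculation.
bins = [(0.10, 1.65), (0.35, 1.55), (0.35, 1.45), (0.20, 1.30)]
seasonal_gue = sum(w * g for w, g in bins) / sum(w for w, g in bins)
```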
Procedia PDF Downloads 109
1140 The Effect of MOOC-Based Distance Education in Academic Engagement and Its Components on Kerman University Students
Authors: Fariba Dortaj, Reza Asadinejad, Akram Dortaj, Atena Baziyar
Abstract:
The aim of this study was to determine the effect of distance education (based on MOOCs) on the components of academic engagement of Kerman PNU students. The research used a quasi-experimental method with single-stage cluster sampling of an appropriate volume (one class in the experimental group and one class in the control group). The statistical population was students of Kerman Payam Noor University, of whom 40 were selected as the sample (20 students in the control group and 20 students in the experimental group). To test the hypothesis, univariate analysis of covariance was used to offset the initial (pre-test) differences between the experimental group and the control group. The instrument used in this study is the academic engagement questionnaire of Zerang (2012), which contains cognitive, behavioral, and motivational engagement components. The results showed a significant difference between the mean scores of the academic engagement components in the experimental group and the control group on the post-test, after controlling for the pre-test. The adjusted mean scores of the components of academic engagement in the experimental group were higher than the adjusted mean post-test scores in the control group. The use of technology-based education in distance education was effective in increasing cognitive engagement, motivational engagement, and behavioral engagement among students. The experimental variable, with an effect size of 0.26, predicted 26% of the cognitive engagement component variance; with an effect size of 0.47, it predicted 47% of the motivational engagement component variance; and with an effect size of 0.40, it predicted 40% of the behavioral engagement component variance. Thus, teaching with technology (MOOCs) has a positive impact on increasing the academic engagement and academic performance of students in educational technology.
The results suggest that MOOC technology be used to enrich the teaching of other PNU courses.

Keywords: educational technology, distance education, components of academic engagement, MOOC technology
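The covariance adjustment described above (ANCOVA) regresses post-test scores on pre-test scores within groups and compares the pre-test-adjusted group means. A minimal sketch of that adjustment with hypothetical engagement scores (the study's data are not reproduced here):

```python
# Hypothetical pre/post engagement scores for the experimental (MOOC) and control groups.
pre_e = [40, 45, 38, 50, 42]
post_e = [62, 68, 60, 74, 66]
pre_c = [41, 44, 39, 49, 43]
post_c = [48, 52, 47, 58, 51]

def mean(xs):
    return sum(xs) / len(xs)

def within(xs, ys):
    """Within-group cross-product and sum of squares for the covariate."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy, sxx

# Pooled within-group slope of post on pre: the ANCOVA covariate adjustment.
sxy_e, sxx_e = within(pre_e, post_e)
sxy_c, sxx_c = within(pre_c, post_c)
b_w = (sxy_e + sxy_c) / (sxx_e + sxx_c)

# Group means adjusted to the grand pre-test mean.
grand_pre = mean(pre_e + pre_c)
adj_e = mean(post_e) - b_w * (mean(pre_e) - grand_pre)
adj_c = mean(post_c) - b_w * (mean(pre_c) - grand_pre)
```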
Procedia PDF Downloads 149
1139 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling
Authors: Aamna Lawrence, Ashutosh Mishra
Abstract:
Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications which make non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival time for action potentials traversing the demyelinated and normally myelinated nerve fibres even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern to block only the delayed tremor-causing action potentials. 
The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor
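The membrane dynamics underlying the model above can be illustrated with a single-compartment Hodgkin-Huxley simulation. This is a minimal forward-Euler sketch of the standard squid-axon formulation, not the study's multi-internode demyelinated-fibre model; the stimulus currents are illustrative:

```python
import math

def g(z):
    """z / (1 - exp(-z)), numerically stable near z = 0 (used by the rate functions)."""
    return z / (1.0 - math.exp(-z)) if abs(z) > 1e-6 else 1.0 + z / 2.0

# Standard Hodgkin-Huxley parameters (squid axon; mS/cm^2, mV, uF/cm^2).
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.387
c_m = 1.0

def simulate(i_ext, t_ms=50.0, dt=0.01):
    """Forward-Euler integration of V, n, m, h under a constant current i_ext (uA/cm^2)."""
    v, n, m, h = -65.0, 0.317, 0.053, 0.596  # approximate resting steady state
    trace = []
    for _ in range(int(t_ms / dt)):
        an, bn = 0.1 * g((v + 55.0) / 10.0), 0.125 * math.exp(-(v + 65.0) / 80.0)
        am, bm = 1.0 * g((v + 40.0) / 10.0), 4.0 * math.exp(-(v + 65.0) / 18.0)
        ah, bh = 0.07 * math.exp(-(v + 65.0) / 20.0), 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k) + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        n += dt * (an * (1.0 - n) - bn * n)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        trace.append(v)
    return trace

quiet = simulate(0.0)    # no stimulus: the membrane stays near rest
firing = simulate(10.0)  # suprathreshold current: action potentials (peaks above 0 mV)
```

In the study's setting, demyelinated internodes would slow the propagation of such spikes, which is what makes the delayed action potentials separable in arrival time.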
Procedia PDF Downloads 127
1138 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation
Authors: Montree Bunruanses, Preecha Yupapin
Abstract:
In this work, the quantum material called Amrita (elixir) is made from top-down gold into nanometer particles by fusing 99% gold with a laser and mixing it with drinking water using the atmospheric water generation (AWG) system, which makes water from air. The high energy laser power destroyed the four natural force bindings from gravity-weak-electromagnetic and strong coupling forces, where finally it was the purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They were modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied on humans both internally or externally by drinking or applying on the treated surfaces. Doing both space (body) and time (mind) will go back to the origin and start again from the coupling of space-time on both sides of time at fusion (strong coupling force) and push out (Big Bang) at the equilibrium point (singularity) occurs as strings and DNA with neutrinos as coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined, and there is infinite energy. Therefore, the upstream conversion is performed. It is reforming DNA to make it be purified. The use of Amrita is a method used for people who cannot meditate (quantum meditation). Various cases were applied, where the results show that the Amrita can make the body and the mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate
Procedia PDF Downloads 75
1137 Using Multiomic Plasma Profiling From Liquid Biopsies to Identify Potential Signatures for Disease Diagnostics in Late-Stage Non-small Cell Lung Cancer (NSCLC) in Trinidad and Tobago
Authors: Nicole Ramlachan, Samuel Mark West
Abstract:
Lung cancer is the leading cause of cancer-associated deaths in North America, the vast majority being non-small cell lung cancer (NSCLC), which has a five-year survival rate of only 24%. Non-invasive discovery of biomarkers associated with early diagnosis of NSCLC can enable precision oncology efforts using liquid biopsy-based multiomics profiling of plasma. Although tissue biopsies are currently the gold standard for tumor profiling, this method presents many limitations: they are invasive, risky, and sometimes hard to obtain, and they give only a limited tumor profile. Blood-based tests provide a less invasive, more robust approach to interrogate both tumor- and non-tumor-derived signals. We intend to examine 30 stage III-IV NSCLC patients pre-surgery and collect plasma samples. Cell-free DNA (cfDNA) will be extracted from plasma, and next-generation sequencing (NGS) performed. Through the analysis of tumor-specific alterations, including single nucleotide variants (SNVs), insertions, deletions, copy number variations (CNVs), and methylation alterations, we intend to identify tumor-derived DNA (ctDNA) among the total pool of cfDNA. This would generate data to be used as an accurate form of cancer genotyping for diagnostic purposes. Using liquid biopsies offers opportunities to improve the surveillance of cancer patients during treatment and would supplement current diagnosis and tumor profiling strategies previously not readily available in Trinidad and Tobago. It would be useful and advantageous to use this approach in diagnosis and tumor profiling, as well as to monitor cancer patients, providing early information regarding disease evolution and treatment efficacy and allowing treatment strategies to be reoriented in time, thereby improving clinical oncology outcomes.

Keywords: genomics, multiomics, clinical genetics, genotyping, oncology, diagnostics
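Distinguishing ctDNA among total cfDNA ultimately comes down to read-level evidence at candidate loci. A minimal sketch of the variant allele frequency (VAF) computation that such an analysis rests on; the read counts below are hypothetical:

```python
def variant_allele_frequency(alt_reads, ref_reads):
    """Fraction of sequencing reads supporting the variant allele at a locus."""
    total = alt_reads + ref_reads
    return alt_reads / total if total else 0.0

# ctDNA-derived somatic variants typically appear at low VAF against the
# background of normal cfDNA, which is why deep sequencing is needed.
vaf = variant_allele_frequency(alt_reads=12, ref_reads=1988)  # 0.6% at ~2000x depth
```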
Procedia PDF Downloads 159
1136 Fermentation of Pretreated Herbaceous Cellulosic Wastes to Ethanol by Anaerobic Cellulolytic and Saccharolytic Thermophilic Clostridia
Authors: Lali Kutateladze, Tamar Urushadze, Tamar Dudauri, Besarion Metreveli, Nino Zakariashvili, Izolda Khokhashvili, Maya Jobava
Abstract:
Lignocellulosic waste streams from agriculture and the paper and wood industries are renewable, plentiful, and low-cost raw materials that can be used for large-scale production of liquid and gaseous biofuels. As opposed to the prevailing multi-stage biotechnological processes developed for bioconversion of cellulosic substrates to ethanol, in which high-cost cellulase preparations are used, Consolidated Bioprocessing (CBP) accomplishes cellulose and xylan hydrolysis followed by fermentation of both C6 and C5 sugars to ethanol in a single-stage process. A syntrophic microbial consortium comprising anaerobic, thermophilic, cellulolytic, and saccharolytic bacteria of the genus Clostridium, with improved ethanol productivity and high tolerance to fermentation end-products, has been proposed for achieving CBP. 65 new strains of anaerobic thermophilic cellulolytic and saccharolytic Clostridia were isolated from different wetlands and hot springs in Georgia. Using the new isolates, fermentation of mechanically pretreated wheat straw and corn stalks was carried out under an oxygen-free nitrogen environment in thermophilic conditions (T = 55°C) and at pH 7.1. Process duration was 120 hours. Liquid and gaseous products of fermentation were analyzed daily using Perkin-Elmer gas chromatographs with flame ionization and thermal conductivity detectors. Residual cellulose, xylan, xylose, and glucose were determined using standard methods. The cellulolytic and saccharolytic bacterial strains degraded the mechanically pretreated herbaceous cellulosic wastes and fermented glucose and xylose to ethanol, acetic acid, and gaseous products such as hydrogen and CO2. Specifically, the maximum yield of ethanol was reached at 96 h of fermentation and varied between 2.9-3.2 g per 10 g of substrate. The content of acetic acid did not exceed 0.35 g/L. Other volatile fatty acids were detected in trace quantities.

Keywords: anaerobic bacteria, cellulosic wastes, Clostridia sp, ethanol
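A reported yield such as 2.9 g of ethanol per 10 g of substrate is often put in context against the stoichiometric maximum of ~0.511 g ethanol per g of fermentable sugar. The sugar fraction of the pretreated straw below is a hypothetical assumption, not a figure from the study:

```python
# Reported-style yield: grams of ethanol per 10 g of pretreated substrate at 96 h.
substrate_g = 10.0
ethanol_g = 2.9

# Theoretical maximum: ~0.511 g ethanol per g of hexose/pentose fermented.
fermentable_fraction = 0.70  # assumed sugar content of the pretreated straw (hypothetical)
theoretical_g = substrate_g * fermentable_fraction * 0.511
percent_of_theoretical = 100.0 * ethanol_g / theoretical_g
```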
Procedia PDF Downloads 292
1135 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions
Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez
Abstract:
In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s -anechoic-, and approximately 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will dramatically decrease under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as that of the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval
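The quarter-tone part of such a filterbank can be sketched directly: center frequencies spaced 24 steps per octave, each covered by a triangular filter spanning its two neighbours. This sketch omits the local normalization step that gives LNQT its name, and the starting frequency and filter count are illustrative assumptions:

```python
def quarter_tone_centers(f_min=65.4, n_filters=48):
    """Center frequencies spaced a quarter tone apart (24 steps per octave)."""
    return [f_min * 2.0 ** (k / 24.0) for k in range(n_filters)]

def triangular_filter(freqs, f_lo, f_c, f_hi):
    """Triangular weighting over a list of frequencies, peaking at 1 at f_c."""
    out = []
    for f in freqs:
        if f_lo < f <= f_c:
            out.append((f - f_lo) / (f_c - f_lo))
        elif f_c < f < f_hi:
            out.append((f_hi - f) / (f_hi - f_c))
        else:
            out.append(0.0)
    return out

centers = quarter_tone_centers()
# Each filter spans from the previous quarter-tone center to the next one.
bank = [triangular_filter(centers, lo, fc, hi)
        for lo, fc, hi in zip(centers, centers[1:], centers[2:])]
```

In practice the filters would be evaluated over FFT bin frequencies rather than over the center list itself; the list is reused here only to keep the sketch self-contained.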
Procedia PDF Downloads 231
1134 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals
Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor
Abstract:
This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. 
This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers
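The simplest ensemble combination rule discussed in this setting is majority voting over base classifiers. A minimal sketch with hypothetical predicted labels for six trials, showing how the vote can beat every individual classifier:

```python
# Hypothetical predicted labels from three base classifiers on six trials.
preds = [
    [1, 0, 1, 0, 0, 0],  # classifier A (wrong on trial 4)
    [1, 0, 0, 1, 1, 0],  # classifier B (wrong on trials 3 and 5)
    [0, 0, 1, 1, 0, 1],  # classifier C (wrong on trials 1 and 6)
]
truth = [1, 0, 1, 1, 0, 0]

def majority_vote(columns):
    """Per-trial majority label across the base classifiers."""
    return [1 if sum(col) > len(col) / 2 else 0 for col in zip(*columns)]

def accuracy(y_hat, y):
    return sum(a == b for a, b in zip(y_hat, y)) / len(y)

ensemble = majority_vote(preds)
acc_each = [accuracy(p, truth) for p in preds]
acc_ensemble = accuracy(ensemble, truth)  # the vote corrects each classifier's errors
```

A meta-classifier, as described above, would replace the fixed voting rule with a learned combination of the base outputs.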
Procedia PDF Downloads 74
1133 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles
Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong
Abstract:
A 4-stage crash energy management (CEM) system for subway rail vehicles used by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The 4 stages of this new CEM system are: 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle, cooperating with the 4-stage CEM to maximize the energy absorbed and minimize the harm to passengers and crew. In order to investigate the effectiveness of this CEM system, both finite element (FE) methods and crashworthiness tests have been employed. The whole vehicle consists of 3 married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed, and different collision cases, such as a single moving car impacting a rigid wall, two moving cars into a rigid wall, two moving cars into two stationary cars, and six moving cars into six stationary cars, are investigated. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end, including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post, is built and impacted against a rigid wall. The same test model is also analyzed in the FE framework, and results such as the crushing force, stress and strain of critical components, and acceleration and velocity curves are compared and studied. The FE results show very good agreement with the test results.

Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test
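The quantity connecting the crushing-force curves above to the energy absorbers' rating is the area under the force-displacement curve. A minimal sketch of that integration with the trapezoidal rule; the force-crush values below are hypothetical, not the MBTA vehicle's data:

```python
# Hypothetical force-crush curve of one energy absorber: displacement (m) vs force (kN).
displacement_m = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25]
force_kn = [0.0, 400.0, 650.0, 700.0, 690.0, 710.0]

# Absorbed energy (kJ) is the area under the force-displacement curve.
energy_kj = sum((f0 + f1) / 2.0 * (x1 - x0)
                for x0, x1, f0, f1 in zip(displacement_m, displacement_m[1:],
                                          force_kn, force_kn[1:]))
```

Summing such areas over the coupler, primary, and secondary absorbers gives the total energy the staged system can dissipate before the collision and corner posts are engaged.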
Procedia PDF Downloads 401
1132 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and advance fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improving analyses for many chemical and biological applications. For example, droplets can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface is formed between them. If a direct-current electric field is applied, the interface may deform, i.e., become unstable, rupture, and form micro-droplets. First, the effect of the thickness ratio, total flow rate, and viscosity ratio of the silicone oil and ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize is investigated. Then the droplet sizes are measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to be ruptured, forming droplets upon hitting the wall of the channel, is analyzed. It is observed that an increase in the viscosity or thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e., a higher voltage is needed, while the total flow rate has no effect on it. However, an increase in the total flow rate shortens the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases down to 0.1 μL with an increase in the applied voltage, the viscosity ratio, or the total flow rate, or with a decrease in the thickness ratio.
In addition to these observations, two empirical models are established: one for determining the critical electric number (the dimensionless voltage) and one for the droplet size, together with a third model, a combination of the two, for determining the droplet size at the critical voltage.

Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
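One common way to non-dimensionalize the applied voltage in electrohydrodynamic two-layer flows is as a ratio of electric stress to capillary stress. The abstract does not give the paper's exact definition, so the grouping and all parameter values below are illustrative assumptions:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electric_number(voltage, eps_r, gamma, channel_height):
    """Assumed dimensionless grouping: electric stress over capillary stress.
    voltage in V, gamma (interfacial tension) in N/m, channel_height in m."""
    e_field = voltage / channel_height  # nominal applied field, V/m
    return EPS0 * eps_r * e_field ** 2 * channel_height / gamma

# Hypothetical values: 100 um channel, oil/glycol interfacial tension ~20 mN/m.
ne = electric_number(voltage=200.0, eps_r=2.5, gamma=0.02, channel_height=100e-6)
```

Order-one values of such a number mark the regime where electric stresses can overcome interfacial tension, which is the threshold behaviour the critical-voltage measurements probe.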
Procedia PDF Downloads 173
1131 Aboriginal Head and Neck Cancer Patients Have Different Patterns of Metastatic Involvement, and Have More Advanced Disease at Diagnosis
Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern
Abstract:
Introduction: The mortality gap for Aboriginal patients with head and neck cancer (HNC) is well known, but the reasons for poorer survival are not well established. Aim: We aimed to evaluate locoregional and metastatic involvement, and stage at diagnosis, in Aboriginal compared with non-Aboriginal patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched at a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, regions involved, stage at diagnosis, treatment history, and survival and relapse patterns, including sites of metastatic and locoregional involvement. Results: Aboriginal patients had a significantly higher incidence of lung metastases (26.3% versus 13.7%, p=0.009). Aboriginal patients also had a numerically but not statistically significantly higher incidence of thoracic nodal involvement (10% vs 5.8%), malignant pleural effusions (3.8% vs 2.5%), and adrenal and bony involvement. Interestingly, non-Aboriginal patients had higher rates of cutaneous (2.1% vs 0%) and liver metastases (4.6% vs 2.5%) than Aboriginal patients. In terms of locoregional involvement, Aboriginal patients were more than twice as likely to have contralateral neck involvement (58.8% vs 24.2%, p<0.00001) and about 30% more likely to have ipsilateral neck lymph node involvement (78.8% vs 60%, p=0.002) than non-Aboriginal patients. Aboriginal patients also had significantly more advanced disease at diagnosis (p=0.008). 
Aboriginal patients were less likely than non-Aboriginal patients to present with stage I (7.5% vs 22.5%), stage II (11.3% vs 13.8%), or stage III disease (13.8% vs 17.1%), and more likely to present with more advanced stage IVA (42.5% vs 34.6%), stage IVB (15% vs 7.1%), or stage IVC (10% vs 5%) disease (p=0.008). The number of regions of disease involvement was also higher in Aboriginal patients (median 3, mean 3.64, range 1-10) than in non-Aboriginal patients (median 2, mean 2.80, range 1-12). Conclusion: Aboriginal patients had a significantly higher incidence of lung metastases and significantly more frequent involvement of ipsilateral and contralateral neck lymph nodes. Aboriginal patients also presented with significantly more advanced disease and a higher stage at diagnosis. We are performing further analyses to investigate explanations for these findings.
Keywords: head and neck cancer, Aboriginal, metastases, locoregional, pattern of relapse, sites of disease
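As an illustration of the lung-metastasis comparison, the underlying counts can be reconstructed from the reported percentages (26.3% of 80 ≈ 21 patients; 13.7% of 240 ≈ 33 patients), and a Pearson chi-square on the resulting 2x2 table lands at the reported significance level. The authors do not state which test they used, so this reconstruction is a hedged sketch rather than the study's actual analysis:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


# Lung metastases, counts inferred from the reported percentages:
# Aboriginal: 21 of 80 (26.3%); non-Aboriginal: 33 of 240 (13.7%)
chi2 = chi_square_2x2(21, 59, 33, 207)
print(round(chi2, 2))  # ~6.68, well beyond the 3.84 cutoff for p < 0.05 at 1 df
```

A statistic of about 6.68 on one degree of freedom corresponds to p ≈ 0.01, matching the paper's reported p=0.009.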
Procedia PDF Downloads 68
1130 Surface Water Flow of Urban Areas and Sustainable Urban Planning
Authors: Sheetal Sharma
Abstract:
Urban planning involves transforming land from natural areas into modified, developed ones, which alters the natural environment. A basic understanding of the relationship between the two should be established before natural areas are developed. Changes to the land surface due to built-up pavements, roads, and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that planners should have. This paper aims to identify variations in surface flow due to urbanization over a 40-year temporal scale using the Storm Water Management Model (SWMM), and to correlate these findings with the urban planning guidelines of the study area, along with its geological background, to find suitable combinations of land cover, soil, and guidelines. To identify the changes in surface flows, 19 catchments were selected with different geology, different growth over the 40 years, and different groundwater level fluctuations. The increasing built-up area and varying surface runoff were studied using ArcGIS, SWMM modeling, and regression analysis of runoff. The resulting runoff for various land covers and soil groups under varying built-up conditions was observed. The modeling procedure also included observations for varying precipitation with constant built-up area in all catchments. These observations were combined for each catchment, and a single regression curve for runoff was obtained. It was observed that alluvium with suitable land cover was best for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation. 
Hence, these observations provide a baseline for planners when planning various land uses on different soils. This paper contributes a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.
Keywords: runoff, built up, roughness, recharge, temporal changes
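The study's regression curves are not reproduced in the abstract, but the core relationship it describes, that more built-up cover means more surface runoff, can be sketched with the textbook rational method, Q = C·i·A. The runoff coefficients below are generic textbook ranges chosen for illustration, not values from the study:

```python
# Rational method: peak runoff Q = C * i * A
# C: dimensionless runoff coefficient, i: rainfall intensity (mm/h),
# A: catchment area (ha). The factor 2.78 converts to litres per second.

RUNOFF_COEFF = {  # typical textbook ranges, not the study's fitted values
    "pavement": 0.90,
    "open_space": 0.20,
    "vegetated_alluvium": 0.10,
}


def peak_runoff_lps(cover, intensity_mm_h, area_ha):
    """Peak surface runoff in L/s for a given land cover."""
    return 2.78 * RUNOFF_COEFF[cover] * intensity_mm_h * area_ha


# Same storm (50 mm/h) and catchment (2 ha), different cover:
paved = peak_runoff_lps("pavement", 50.0, 2.0)
green = peak_runoff_lps("vegetated_alluvium", 50.0, 2.0)
print(paved, green)  # paved cover yields roughly nine times the runoff
```

This mirrors the abstract's finding that permeable soils with vegetated cover infiltrate most of the rainfall, while excess built-up area sharply increases runoff regardless of the underlying geology.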
Procedia PDF Downloads 277