Search results for: stagnation point flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9220


610 Understanding National Soccer Jersey Design from a Material Culture Perspective: A Content Analysis and Wardrobe Interviews with Canadian Consumers

Authors: Olivia Garcia, Sandra Tullio-Pow

Abstract:

The purpose of this study was to understand which design attributes make the most ideal (wearable and memorable) national soccer jersey. The research probed Canadian soccer enthusiasts to better understand their jersey-purchasing rationale. The research questions framing this study were: how do consumers feel about their jerseys, and how do these feelings influence their choices? There has been limited research on soccer jerseys from a material culture perspective, and what exists does not cover national soccer jerseys. The results of this study may be used by product developers and advertisers seeking to better understand the consumer base for national soccer jersey design. A mixed methods approach informed the research. To begin, a content analysis of all the home jerseys from the 2018 World Cup was done. Information such as size range, main colour, fibre content, brand, collar details, availability, sleeve length, place of manufacture, pattern, price, fabric as described by the company, neckline, availability on the company website, jersey inspiration, and badge/crest details was noted. Following the content analysis, wardrobe interviews were conducted with six consumers/fans. Participants brought two or more jerseys to the interviews, where the jerseys acted as clothing probes to recount information. Interview questions were semi-structured and focused on the participants’ relationship with the sport, their personal background, who they cheered for, why they bought the jerseys, and fit preferences. The goal of the inquiry was to draw out how participants feel about their jerseys and why. Finally, an interview with an industry professional was conducted. This interview was semi-structured, focusing on basic questions regarding sportswear design, sales, the popularity of soccer, and the manufacturing and marketing process. The findings showed that national soccer jerseys are an integral part of material culture.
Women preferred more fitted jerseys, while men preferred more comfortable ones. Jerseys should be made of a cooling, comfortable fabric that does not peel. The symbols on jerseys convey a team’s history and are most typically placed on the left chest. Jerseys should represent the flag and/or the country’s colours and should use designs that are both fashionable and innovative. Jersey design should consider the opinions of consumers to help inform the design process. Jerseys should draw on culture, as consumers feel connected to jerseys that represent the culture and/or family they have grown up with. Jerseys should also draw on a team’s history and the nostalgia associated with it, as consumers prefer jerseys that reflect important moments in soccer. Finally, jerseys must sit at a reasonable price point, with an experience to go along with the purchase. In conclusion, national soccer jerseys are sites of attachment and memory and play an integral part in the study of material culture.

Keywords: design, fashion, material culture, sport

Procedia PDF Downloads 92
609 Spatial Accessibility Analysis of Kabul City Public Transport

Authors: Mohammad Idrees Yusofzai, Hirobata Yasuhiro, Matsuo Kojiro

Abstract:

Kabul is the capital of Afghanistan and its educational and industrial focal point. The population of Kabul has grown recently and will continue to increase because of the return of refugees and migration from other provinces. With this growth in population, urban congestion and related urban transportation problems have arisen in Kabul city. One of these problems concerns the public transport (large bus) service, which needs to be modified and enhanced, especially the large bus routes operating in each of the 22 zones of Kabul city. To achieve this goal of improving public transport, spatial accessibility analysis is an important means of assessing the effectiveness of the transportation system and the urban transport policy of a city, because accessibility indicators are an alternative tool to support public policy aimed at reinforcing sustainable urban space. The case study of this research compares the present model (the present bus routes) with a modified model of public transport. In the present model, the bus routes in most of the zones are active, but with low frequency and unpublished schedules, and accessibility is analyzed in four cases based on the variables of accessibility. In the modified model, all zones of Kabul are taken into consideration, with specified origins and high frequency. The number of services is kept high, but is bounded by the number of buses that the Millie Bus Enterprise Authority (MBEA) owns. The same four cases are applied to the modified model to identify the best accessibility it can offer; the modified model has a positive impact on the congestion level in Kabul city. In addition, person trips and trip distribution have been analyzed to understand how people move in the study area by each mode of transportation.
The general aims of this research are to assess the present movement of people, identify zones in need of public transport, and assess the equity of accessibility in Kabul city. The methodology of this research is based on a gravity model of accessibility; in addition, the generalized cost (time) of travel by each mode is calculated. The main data come from a person trip survey, socio-economic characteristics, and demographic data from the Japan International Cooperation Agency’s 2008 study of Kabul city, as well as from previous research on travel patterns; the remaining data on present bus lines and routes come from the MBEA. In conclusion, this research identifies zones where public transport accessibility is high and where it is low. It was found that in both models the downtown area, or central zones, of Kabul city has high accessibility, and that the present model is unfavorable compared with the modified model on the basis of the accessibility analysis.
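The gravity analysis of accessibility described above can be sketched in a few lines: each zone's accessibility is the sum of the opportunities in every destination zone, discounted by a decreasing function of the generalized travel cost. The zone names, opportunity counts, costs, and decay parameter below are hypothetical placeholders, not data from the Kabul survey.

```python
import math

def accessibility(zone, opportunities, cost, beta=0.1):
    """Gravity-based accessibility of a zone: opportunities in every
    destination zone, discounted by an exponential function of the
    generalized travel cost (here, minutes)."""
    return sum(opportunities[j] * math.exp(-beta * cost[(zone, j)])
               for j in opportunities)

# Hypothetical three-zone example (not data from the Kabul survey):
opportunities = {"A": 1000, "B": 500, "C": 200}   # e.g. jobs per zone
cost = {("A", "A"): 5,  ("A", "B"): 20, ("A", "C"): 40,
        ("B", "A"): 20, ("B", "B"): 5,  ("B", "C"): 25,
        ("C", "A"): 40, ("C", "B"): 25, ("C", "C"): 5}

for z in opportunities:
    print(z, round(accessibility(z, opportunities, cost), 1))
```

Zones with low generalized cost to opportunity-rich zones score highest, which is the pattern the study reports for the central zones of Kabul.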

Keywords: accessibility, bus generalized cost, gravity model, public transportation network

Procedia PDF Downloads 177
608 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis, electrophoresis on a microfluidic chip (MFC), mass spectrometry combined with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for rapid fabrication of MFCs. A master form of silicon and SU-8 2025 photoresist (MicroChem Corp.) was created for forming the micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. K8 glass and Sylgard® 184 PDMS (Dow Corning Corp.) were used for fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample; therefore, estimating the magnitude of the EOF and finding ways to regulate it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang et al., using a borate buffer. The influence of the physical and chemical treatments on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective surface modification of the MFCs, in terms of both the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188.
This modification was therefore selected for treating the channels of the MFCs used to separate a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40, and 50 nucleotides. Electrophoresis was performed on the MFAS-01 device (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7 M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms: ~275 s for the untreated MFC and ~220 s for the MFC treated with the Kolliphor® P 188 solution. The study of physical-chemical methods of surface modification thus identified the most effective way to reduce the EOF, modification with an aqueous solution of Kolliphor® P 188, which decreased the separation time of the oligonucleotide mixture by about 20%. Further optimization of the channel modification method should decrease the separation time further and increase the throughput of the analysis.
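For readers unfamiliar with the current-monitoring method of Huang et al. cited above, the electroosmotic mobility can be estimated from the time a replacement buffer takes to traverse the channel under a known voltage. The sketch below assumes that method; the channel length, voltage, and plateau time are illustrative numbers, not values from this work.

```python
def eof_mobility(channel_length_cm, voltage_v, displacement_time_s):
    """Current-monitoring estimate of electroosmotic mobility: the EOF
    velocity is L / t (t read from the current plateau as the replacement
    buffer fills the channel), the field is V / L, so
    mu_eof = (L/t) / (V/L) = L**2 / (V * t), in cm^2 V^-1 s^-1."""
    return channel_length_cm ** 2 / (voltage_v * displacement_time_s)

# Illustrative numbers: a 3 cm channel at 1 kV with a 75 s plateau time.
mu = eof_mobility(3.0, 1000.0, 75.0)
print(f"mu_eof = {mu:.2e} cm^2/(V s)")   # prints mu_eof = 1.20e-04 cm^2/(V s)
```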

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 400
607 Impact of pH Control on Peptide Profile and Antigenicity of Whey Hydrolysates

Authors: Natalia Caldeira De Carvalho, Tassia Batista Pessato, Luis Gustavo R. Fernandes, Ricardo L. Zollner, Flavia Maria Netto

Abstract:

Protein hydrolysates are ingredients of enteral diets and hypoallergenic formulas. Enzymatic hydrolysis is the most commonly used method for reducing the antigenicity of milk protein. The antigenicity and physicochemical characteristics of protein hydrolysates depend on the reaction parameters, among which pH is of major importance. Hydrolysis reactions at laboratory scale are commonly carried out under controlled pH (pH-stat); from an industrial point of view, however, controlling pH during the hydrolysis reaction may be infeasible. This study evaluated the impact of pH control on the physicochemical properties and antigenicity of whey protein hydrolysates produced with Alcalase. Whey protein isolate (WPI) solutions containing 3 and 7% protein (w/v) were hydrolyzed with Alcalase at 50 and 100 U g-1 protein at 60 °C for 180 min. The reactions were carried out under controlled and uncontrolled pH conditions. Hydrolyses performed under controlled pH (pH-stat) were adjusted to and maintained at pH 8.5; hydrolyses carried out without pH control were only initially adjusted to pH 8.5. The degree of hydrolysis (DH) was determined by the OPA method, the peptide profile was evaluated by RP-HPLC, and the molecular mass distribution by SDS-PAGE/Tricine. Residual α-lactalbumin (α-La) and β-lactoglobulin (β-Lg) concentrations were determined using commercial ELISA kits. The specific IgE and IgG binding capacity of the hydrolysates was evaluated by ELISA, using polyclonal antibodies obtained by immunizing female BALB/c mice with α-La, β-Lg, and BSA. In the hydrolysis under uncontrolled pH, the pH dropped from 8.5 to 7.0 during the first 15 min, remaining constant thereafter. No significant difference was observed between the DH of the hydrolysates obtained under controlled and uncontrolled pH.
Although all hydrolysates showed a hydrophilic character and low-molecular-mass peptides, those obtained with and without pH control exhibited different chromatographic profiles. Hydrolysis under uncontrolled pH released predominantly peptides between 3.5 and 6.5 kDa, while hydrolysis under controlled pH released peptides smaller than 3.5 kDa. Under all conditions studied, hydrolysis with Alcalase decreased the α-La and β-Lg concentrations detected by the commercial kits by 99.9%. In general, the β-Lg concentrations detected in hydrolysates obtained under uncontrolled pH were significantly higher (p<0.05) than those in hydrolysates produced with pH control. The anti-α-La and anti-β-Lg IgE and IgG responses to all hydrolysates decreased significantly compared with WPI, with levels of specific IgE and IgG below 25 and 12 ng ml-1, respectively. Despite the differences in peptide composition and in α-La and β-Lg concentrations, no significant difference was found between the IgE and IgG binding capacities of hydrolysates obtained with or without pH control. These results highlight the impact of pH on the characteristics of the hydrolysates and their concentrations of antigenic protein. A divergence was found between antigen detection by commercial ELISA kits and the specific IgE and IgG binding response, showing that lower protein detection does not imply lower antigenicity. The use of commercial kits for allergen contamination analysis should therefore be undertaken with caution.

Keywords: allergy, enzymatic hydrolysis, milk protein, pH conditions, physicochemical characteristics

Procedia PDF Downloads 293
606 Palliative Care Referral Behavior Among Nurse Practitioners in Hospital Medicine

Authors: Sharon Jackson White

Abstract:

Purpose: Nurse practitioners (NPs) practicing within hospital medicine play a significant role in caring for patients who might benefit from palliative care (PC) services. Using the Theory of Planned Behavior, the purpose of this study was to examine the relationships among facilitators to referral, barriers to referral, self-efficacy with end-of-life discussions, history of referral, and referring to PC among NPs in hospital medicine. Hypotheses: 1) Perceived facilitators to referral will be associated with a greater history of referral and a higher number of referrals to PC. 2) Perceived barriers to referral will be associated with a lesser history of referral and a lower number of referrals to PC. 3) Increased self-efficacy with end-of-life discussions will be associated with a greater history of referral and a higher number of referrals to PC. 4) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute significant variance to the history of referral to PC. 5) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute significant variance to the number of referrals to PC. Significance: Previous studies of referring patients to PC within the hospital setting have focused on physician practices. Identifying the factors that influence NPs referring hospitalized patients to PC is essential to ensure that patients have access to these important services. This study supports the SNRS mission of advancing nursing research through the dissemination of research findings and the promotion of nursing science. Methods: A cross-sectional, predictive correlational study was conducted.
History of referral to PC, facilitators to referring to PC, barriers to referring to PC, self-efficacy in end-of-life discussions, and referral to PC were measured using the PC referral case study survey, the facilitators and barriers to PC referral survey, and the self-assessment with end-of-life discussions survey. Data were analyzed descriptively and with Pearson’s correlation, Spearman’s rho, point-biserial correlation, multiple regression, logistic regression, the chi-square test, and the Mann-Whitney U test. Results: Only one facilitator (the PC team being helpful with establishing goals of care) was significantly associated with referral to PC. Three variables were statistically significant in relation to the history of referring to PC: “Inclined to refer: PC can help decrease the length of stay in hospital”, “Most inclined to refer: Patients with serious illnesses and/or poor prognoses”, and “Giving bad news to a patient or family member”. No predictor variable contributed significant variance to the number of referrals to PC in any of the three case studies. There was no statistically significant relationship between the history of referral and referral to PC. All five hypotheses were partially supported. Discussion: Findings from this study emphasize the need for further research on NPs who work in hospital settings and on the factors that influence their PC referral behavior. Since the number of NPs practicing within hospital settings is increasing, future studies should use a larger sample size and incorporate both hospital medicine NPs and other types of NPs who work in hospitals.

Keywords: palliative care, nurse practitioners, hospital medicine, referral

Procedia PDF Downloads 59
605 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with a loss of containment: oil and gas released by leakage or spillage from primary containment can result in a pool fire, a jet fire, or even an explosion when it meets an ignition source in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to act before a potential loss of containment. The value of such a detection system increases when applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Prior research shows that accurately detecting loss of containment in a surface pipeline is difficult: the trade-off between cost-effectiveness and high accuracy has been the main issue when selecting among traditional detection methods. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points along the pipeline to be accurate, and installing multiple adjacent PVT sensors along a pipeline is expensive, hence generally not viable from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to reproduce experimental data with very high accuracy.
While a supervised machine learning model requires a large training dataset to be accurate, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and discusses the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
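A minimal sketch of the proposed pipeline follows, using synthetic pressure windows as a stand-in for CFD output and a nearest-centroid classifier as a stand-in for the supervised model; the actual features, simulator, and learner used by the authors are not specified here.

```python
import math
import random

def features(pressures):
    """Features from a window of pressure readings: mean and variance.
    A real system would use transient PVT profiles from the CFD model."""
    mean = sum(pressures) / len(pressures)
    var = sum((p - mean) ** 2 for p in pressures) / len(pressures)
    return (mean, var)

def train_centroids(windows, labels):
    """Nearest-centroid classifier: average the feature vectors of the
    'leak' and 'ok' training windows (here, synthetic stand-ins for
    simulation output)."""
    cents = {}
    for lab in set(labels):
        feats = [features(w) for w, l in zip(windows, labels) if l == lab]
        cents[lab] = tuple(sum(f[i] for f in feats) / len(feats)
                           for i in range(2))
    return cents

def classify(window, cents):
    f = features(window)
    return min(cents, key=lambda lab: math.dist(f, cents[lab]))

# Synthetic "simulation" data: a leak lowers the mean line pressure and
# adds noise. These distributions are invented for illustration only.
random.seed(0)
normal = [[50 + random.gauss(0, 0.2) for _ in range(20)] for _ in range(30)]
leaky = [[45 + random.gauss(0, 1.0) for _ in range(20)] for _ in range(30)]
cents = train_centroids(normal + leaky, ["ok"] * 30 + ["leak"] * 30)
print(classify([45.2] * 20, cents))   # classified as "leak"
```

The same structure extends to localization by adding per-segment sensor features and predicting a segment label instead of a binary one.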

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 175
604 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on applying the nondestructive testing method of analyzing the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations, based on data on wear processes, for selecting more advanced designs, materials, and lubricants to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the unit’s parts. When the engine is running, oil samples are taken and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio, and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin.
Such a morphological analysis of wear particles has been introduced into the procedures for condition monitoring and diagnostics of various aircraft engines, including the gas turbine engine, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear; the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size, and later of flake-like particles measuring 20-200 μm. Tribodiagnostics using morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-assisted classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, the kinetics of the wear processes were studied. From the results of the study, comparing the tribodiagnostic criteria, wear state ratings, and statistics of the morphological analysis, norms for the normal operating regime were developed. The study made it possible to develop wear-state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased wear mode and, accordingly, for preventing engine failures in flight.
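The 10-point wear-state rating described above can be illustrated with a toy scoring function driven by the fraction of large fatigue particles in a sample. The size thresholds follow the figures quoted in the abstract, but the mapping onto the 10-point scale is an invented placeholder, not the authors' actual rating system.

```python
def wear_rating(particle_sizes_um):
    """Toy 10-point wear-state rating for an oil sample. Particles of
    20 um and above are treated as the flake-like fatigue particles the
    abstract associates with developed surface-fatigue wear; the mapping
    of their fraction onto a 1-10 scale is illustrative only."""
    large = sum(1 for s in particle_sizes_um if s >= 20)
    frac_large = large / len(particle_sizes_um)
    return max(1, min(10, round(1 + 9 * frac_large)))

print(wear_rating([2, 5, 8, 9, 4]))        # mostly fine debris -> 1
print(wear_rating([30, 80, 150, 5, 60]))   # many large flakes -> 8
```

In practice a rating like this would also weigh particle shape, surface state, and metal concentrations, as the abstract notes.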

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 250
603 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For some years now, French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to ‘restore a good working atmosphere’ and ‘preserve the natural environment’. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the ‘orthodox’ point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain ‘invisible hand’ which helps to resolve all problems: when companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility firms should have is profit-seeking while respecting the minimum legal requirements. The CSR school, however, considers that as long as the economic system is not perfect, there is no ‘invisible hand’ that can arrange everything in good order. This means we cannot count on any ‘divine force’ to make corporations responsible towards society; something more needs to be done in addition to firms’ economic and legal obligations. We then rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit ‘social contracts’ between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to ‘legitimize’ their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, owing to difficulties in defining these performances, this relationship remains ambiguous despite the numerous studies carried out in the field. Finally, we ask whether the integration of CSR criteria into variable remuneration plans, so far practiced in large companies, should be extended to others. After investigation, we find that two groups of firms have the greatest need. The first comprises industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second comprises companies that are under pressure on returns in the face of international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 171
602 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe, such as Delhi, Dhaka, or Manila, have shown that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of its ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach that mediates between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a three-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter, OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature.
This framework, based on a vision – feedback – execution loop, is applied to a hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in chronological order. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to recalibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Bernard Tschumi's space, event, and movement and Kevin Lynch's five-point mental map, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 137
601 Mindfulness and the Purpose of Being in the Present

Authors: Indujeeva Keerthila Peiris

Abstract:

The secular view of mindfulness bears some relation to the original meaning of mindfulness in the Theravada Buddhist texts (the Pāli Canon), but there is a substantial difference between the two. Secular Mindfulness-Based Interventions (MBIs) focus on stilling the mind, which may provide short-term benefits and help individuals deal with physical pain, grief, and distress. However, as with many popular educational innovations, the foundational values of mindfulness have been distorted and subverted in a number of instances, in which ‘McMindfulness’ programmes have reduced mindfulness meditation to a self-help technique easily misappropriated for the exclusive pursuit of corporate objectives, employee pacification, and commercial profit. The intention of this paper is not to critique these misappropriations of mindfulness but to go back to the root source and bring insights from the Buddhist Pāli Canon and its associated teachings on mindfulness on its own terms. In the Buddha’s discourses, as preserved in the Pāli Canon, there is nothing more significant than the understanding and practice of Satipatthāna. The Satipatthāna Sutta, the ‘Discourse on the Establishment of Mindfulness’, opens with a proclamation highlighting both the purpose of this training and its methodology. The right practice of mindfulness is the gateway to understanding the Buddha’s teaching; however, although this concept is widely discussed among Dhamma practitioners, it is the least understood of them all. The purpose of this paper is to understand the deeper meaning of mindfulness as it was originally intended by the Teacher. The natural state of the mind is that it wanders: into the past, the present, and the future. One’s ability to hold attention on a mind object (an emotion, thought, feeling, sensation, or sense impression) is called ‘concentration’. The intentional concentration process does not, by itself, lead to wisdom.
However, the development of wisdom starts when the mind is calm, concentrated, and unified. The practice of insight contemplation aims at gaining a direct understanding of the real nature of phenomena. According to the Buddha's teaching, there are three basic facts of all existence: 1) impermanence (anicca in Pāli); 2) fabrication (also commonly known as suffering or unsatisfactoriness, saṅkhāra or dukkha in Pāli); 3) not-self (insubstantiality or impersonality, anattā in Pāli). The entire Buddhist doctrine rests on these three facts. The problem is that our ignorance conceals this reality. It is not that a person sees the emptiness of phenomena, or that we try to see the emptiness of our experience by conceptually thinking that it is empty; it is an experiential outcome that happens when the understanding of cause and effect overrides the self-view (sakkāya-diṭṭhi), and ignorance is known as ignorance and eradicated once and for all. Therefore, the right view (sammā-diṭṭhi) is the starting point of the path, not ethical conduct (sīla) or concentration (samādhi, jhāna). In order to develop the right view, we need first to listen to the correct Dhamma and to possess yoniso manasikāra (right comprehension) so as to know the five aggregates as five aggregates.

Keywords: mindfulness, spirituality, buddhism, pali canon

Procedia PDF Downloads 64
600 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction

Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman

Abstract:

Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, one of the expected competencies of individuals in the 21st century, is associated with the effective use of technology in education. The most important factor in the effective use of technology in educational institutions is the ICT teacher. The concept of computer-assisted instruction (CAI) refers to the use of information and communication technology as a tool that aids teachers in making education more efficient and improving its quality. In CAI, teachers can use computers at different places and times according to the available hardware and software and the characteristics of the subject and students. Analyzing teachers' use of computers in education is significant because teachers are the ones who manage the course, and they are the most important element in students' comprehension of the topic. Accomplishing computer-assisted instruction efficiently requires teachers with positive attitudes toward it. It is therefore crucial to determine the knowledge, attitudes, and behavior of teachers while they are still at the faculty, where professional knowledge is acquired, and to eliminate any deficiencies there. Accordingly, the aim of this paper is to identify prospective ICT teachers' attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying at Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education, CEIT department. As data collection tools, a 'personal information form' developed by the researchers to collect demographic data and 'the attitude scale related to computer-assisted instruction' were used. The scale consists of 20 items: 10 of these items are positively worded, while 10 are negatively worded.
The Kaiser-Meyer-Olkin (KMO) coefficient of the scale is 0.88, and the Bartlett test significance value is 0.000. The Cronbach's alpha reliability coefficient of the scale is 0.93. The collected data were analyzed with a computer-based statistical software package, using statistical techniques such as descriptive statistics, the t-test, and analysis of variance. The attitudes of prospective teachers toward computers were found not to differ according to their educational branches. On the other hand, the attitudes of prospective teachers who own computers toward computer-supported education were higher than those of prospective teachers who do not own computers. Whether students had previously received computer lessons in their departments did not markedly affect this result. In conclusion, computer experience positively affects attitude scores regarding computer-supported education.
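The Cronbach's alpha reported for the scale can be reproduced with a few lines of code. The sketch below implements the coefficient's standard formula; the 4-item response matrix is hypothetical, used only to illustrate the computation, and is not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of scale items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 participants to a 4-item Likert scale
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 5, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.9, as reported for the study's scale, indicate high internal consistency.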

Keywords: computer based instruction, teacher candidate, attitude, technology based instruction, information and communication technologies

Procedia PDF Downloads 285
599 Polyurethane Membrane Mechanical Property Study for a Novel Carotid Covered Stent

Authors: Keping Zuo, Jia Yin Chia, Gideon Praveen Kumar Vijayakumar, Foad Kabinejadian, Fangsen Cui, Pei Ho, Hwa Liang Leo

Abstract:

The carotid artery is the major vessel supplying blood to the brain. Carotid artery stenosis is one of the three major causes of stroke, and stroke is the fourth leading cause of death and the first leading cause of disability in most developed countries. Although there is increasing interest in carotid artery stenting for the treatment of atherosclerotic disease of the cervical carotid bifurcation, currently available bare metal stents cannot adequately protect against the detachment of plaque fragments from the diseased carotid artery, which can result in the formation of micro-emboli and subsequent stroke. Our research group has recently developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation, while preserving flow to the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the biomechanical properties of polyurethane (PU) membranes of different concentrations in order to refine the stent coating technique and enhance the clinical performance of our novel carotid covered stent. Results from this study also provide the material property information crucial for accurate simulation analysis of our stents. Method: Medical-grade polyurethane (ChronoFlex AR) was used to prepare the PU membrane specimens. PU membranes of different configurations were subjected to uniaxial testing: 22%, 16%, and 11% PU solutions were made by diluting the original solution with the appropriate amount of dimethylacetamide (DMAC). The specimens were immersed in physiological saline solution for 24 hours before testing, and all specimens were moistened with saline before mounting and subsequent uniaxial testing.
The specimens were preconditioned by loading each PU membrane sample to a peak stress of 5.5 MPa for 10 consecutive cycles at a rate of 50 mm/min, then stretched to failure at the same loading rate. Result: The stress-strain response curves of all PU membrane samples exhibited nonlinear characteristics. The ultimate failure stress of the 22% PU membrane was significantly higher than that of the 16% membrane (p < 0.05). In general, our preliminary results showed that the lower-concentration PU membrane is stiffer than the higher-concentration one. From the perspective of mechanical properties, the 22% PU membrane is the better choice for the covered stent. Interestingly, the hyperelastic Ogden model accurately captures the nonlinear, isotropic stress-strain behavior of the PU membrane, with R² of 0.9977 ± 0.00172. This result will be useful for future biomechanical analysis of our stent designs and will play an important role in computational modeling for the fatigue study of our covered stent.
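As a sketch of how a one-term Ogden model can be fitted to uniaxial data, the snippet below generates synthetic stress-stretch points and recovers the parameters with a least-squares fit, reporting R² as the study does. The parameter values (mu = 4 MPa, alpha = 2.5) are assumed for illustration, not the membrane's measured properties.

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(stretch, mu, alpha):
    """Nominal (engineering) stress, in MPa, for an incompressible
    one-term Ogden material under uniaxial tension."""
    return (2.0 * mu / alpha) * (
        stretch ** (alpha - 1.0) - stretch ** (-alpha / 2.0 - 1.0)
    )

# Synthetic stress-stretch points standing in for a measured PU curve;
# mu = 4 MPa and alpha = 2.5 are assumed values, not the study's fit.
stretch = np.linspace(1.01, 2.0, 30)
stress = ogden_uniaxial(stretch, mu=4.0, alpha=2.5)

params, _ = curve_fit(ogden_uniaxial, stretch, stress, p0=[1.0, 1.0])
mu_fit, alpha_fit = params
residuals = stress - ogden_uniaxial(stretch, *params)
r_squared = 1.0 - residuals.var() / stress.var()
print(mu_fit, alpha_fit, r_squared)
```

With real test data, the fitted parameters and R² would be reported per membrane concentration, as in the abstract.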

Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain

Procedia PDF Downloads 299
598 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending

Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel

Abstract:

Blending is a common process in the production of pharmaceutical dosage forms, where high shear is used to obtain a homogeneous dosage. The required shear can lead to uncontrolled attrition of excipients and affect APIs. This has an impact on the performance of the formulation, as it can alter the structure of the mixture; it is therefore important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. Attrition behavior was evaluated using a high-shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies varying from crystalline (sieved) to granulated to spray-dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP), and mannitol. The rotational speed of the blender was set at 1370 rpm to give the highest shear, with a Froude (Fr) number of 9. Blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon that occurs in the first minutes of the high-shear blending process; increasing the blending time beyond 2 min produced no further change in particle size distribution. Material chemistry was identified as a key driver of the differences in attrition behavior between excipients. This is mainly related to the proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Morphology was identified as a second driver of the degree of attrition. Granular products with irregular surfaces showed the highest reduction in particle size, owing to the weak solid bonds created between the primary particles during the granulation process.
Granular DCP and mannitol show a reduction of 80-90% in x10 (µm), compared with a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from granular lactose, all the remaining morphologies of lactose (spray-dried round, sieved tomahawk, milled) show little change in particle size, and similar observations were made for spray-dried fibrous MCC. These morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Overall, products containing brittle materials such as mannitol and DCP are more prone to fragmentation when exposed to shear; granular products with irregular surfaces show increased attrition, while spherical, crystalline, or fibrous morphologies are less affected by high-shear blending. These changes in size affect the functional attributes of the formulation, such as flow, API homogeneity, tableting, and dust formation. Hence, it is important for formulators to fully understand the excipients in order to make the right choices.
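The x10 values quoted above are read off a cumulative undersize curve from laser diffraction. The following sketch interpolates such a percentile and computes the percentage reduction; the before/after distributions are invented for illustration and are not the study's measurements.

```python
import numpy as np

def psd_percentile(sizes_um, cum_percent, q):
    """Interpolate the particle size (µm) at cumulative undersize
    percentage q from a cumulative distribution curve."""
    return float(np.interp(q, cum_percent, sizes_um))

# Hypothetical cumulative undersize curves before/after high-shear blending
sizes = np.array([1, 5, 10, 50, 100, 200, 400])  # µm
before = np.array([2, 8, 15, 45, 70, 90, 100])   # % undersize
after = np.array([10, 30, 45, 80, 93, 99, 100])  # % undersize

x10_before = psd_percentile(sizes, before, 10)
x10_after = psd_percentile(sizes, after, 10)
reduction = 100 * (1 - x10_after / x10_before)
print(round(x10_before, 2), round(x10_after, 2), round(reduction, 1))
```

With these toy curves, the x10 drop falls in the 80-90% range described for granular DCP and mannitol.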

Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear

Procedia PDF Downloads 103
597 Content Monetization as a Mark of Media Economy Quality

Authors: Bela Lebedeva

Abstract:

The characteristics of the Web as a channel of information dissemination (accessibility and openness, interactivity and multimedia news) reach ever wider audiences quickly and positively affect the perception of content, but they blur the understanding of journalistic work. As a result, audiences and advertisers continue to migrate to the Internet. Moreover, online targeting makes it possible to monetize not only the audience (as traditional media customarily do) but also the content and traffic, and to do so more accurately. As users identify themselves with the qualitative characteristics of the new market, its actors take shape. A conflict of interests lies at the base of the economy of their relations, the problem of a traffic tax being one example. Meanwhile, content monetization also actualizes the fiscal interest of the state. The balance of supply and demand is often violated by political risks, particularly under state capitalism, populism, and authoritarian methods of governing social institutions such as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better possibility for discussion, yet the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to save at least its online performance by monetizing all of its content for the core audience. The study methodology was primarily based on the analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and relationships of market actors over the last six years, the researcher reached several conclusions. First, under the condition of content monetization, the capitalization of content quality will always strive toward the qualitative characteristics of the user, thereby identifying him.
Vice versa, user demand generates high-quality journalism. The second conclusion follows from the first: the growth of technology, information noise, new political challenges, economic volatility, and the change of the cultural paradigm all shape the paid-content model for the individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created, and this feature is an indicator of the market as a self-regulated system. Monetized quality information is less popular than public broadcasting, but its audience is able to make decisions. These very users sustain the niche sectors that have the greatest potential for technological development, including new ways of monetizing content. The third point of the study allows it to be developed within the discourse of media-space liberalization: this cultural phenomenon may open opportunities for developing the architecture of social and economic relations both locally and regionally.

Keywords: content monetization, state capitalism, media liberalization, media economy, information quality

Procedia PDF Downloads 232
596 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake

Authors: Zhang Xin

Abstract:

Barrier lakes are lakes formed when water is impounded in valleys, river valleys, or riverbeds blocked by landslides, earthquakes, debris flows, and other causes. They pose great potential safety hazards: when enough water has accumulated, the barrier may burst during a strong earthquake or rainstorm, and the overflowing lake water can cause large-scale flood disasters. To ensure the safety of people's lives and property downstream, it is essential to monitor barrier lakes. However, manual monitoring of barrier lakes in high-altitude areas is difficult and time-consuming because of the harsh climate and steep terrain. With the development of Earth observation technology, remote sensing has become one of the main ways to obtain observation data. Compared with a single satellite, multi-satellite cooperative observation has clear advantages: extensive spatial coverage, continuous observation time, abundant imaging types and bands, rapid monitoring of and response to emergencies, and the capacity to complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites yields a variety of observation data in time, provides key information such as the water level and storage capacity of the barrier lake, supports a scientific assessment of its condition, and enables reasonable prediction of its future development. In this study, Lake Sarez is selected as the research area. It formed on February 18, 1911, in the central Pamir when a landslide, triggered by a strong earthquake of magnitude 7.4 and intensity 9, blocked the Murgab River valley. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are commonly used in international analyses of the safety of Lake Sarez, while remote sensing methods are seldom applied.
This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in the water level and storage capacity of Lake Sarez over recent decades and to evaluate its safety. A collapse scenario is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has changed little and remained stable; 2) barring a strong earthquake or heavy rain, an outburst of Lake Sarez under normal conditions is unlikely; 3) Lake Sarez will remain stable in the future, but an early warning system based on remote sensing of the area should be established; and 4) coordinative remote sensing observation technology is feasible for a high-altitude barrier lake such as Sarez.
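One common way to track a lake's surface area from multispectral satellite imagery is the Normalized Difference Water Index (NDWI), computed from the green and near-infrared bands. The sketch below applies it to a toy reflectance patch; the band values, the 30 m pixel size, and the zero threshold are assumptions for illustration, not Lake Sarez data.

```python
import numpy as np

def lake_area_km2(green, nir, pixel_size_m=30.0, threshold=0.0):
    """Estimate open-water area from green and NIR reflectance bands
    using NDWI = (green - nir) / (green + nir); pixels above the
    threshold are counted as water."""
    ndwi = (green - nir) / (green + nir + 1e-12)  # avoid division by zero
    water_pixels = np.count_nonzero(ndwi > threshold)
    return water_pixels * (pixel_size_m ** 2) / 1e6

# Toy 3x3 reflectance patches: water is bright in green, dark in NIR
green = np.array([[0.30, 0.28, 0.05], [0.31, 0.29, 0.06], [0.04, 0.05, 0.06]])
nir = np.array([[0.05, 0.06, 0.30], [0.04, 0.05, 0.28], [0.30, 0.29, 0.27]])
print(lake_area_km2(green, nir))
```

Repeating such an area estimate over a multi-temporal image stack is one way to build the water-level and storage time series the abstract describes.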

Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS

Procedia PDF Downloads 113
595 Association of Depression with Physical Inactivity and Time Watching Television: A Cross-Sectional Study with the Brazilian Population PNS, 2013

Authors: Margareth Guimaraes Lima, Marilisa Berti A. Barros, Deborah Carvalho Malta

Abstract:

The relationship between physical activity (PA) and depression has been investigated in both observational and clinical studies: PA can be integrated into treatments for depression; physical inactivity (PI) may contribute to increased depressive symptoms; and, conversely, emotional problems can decrease PA. The aim of this study was to analyze the association of leisure-time and transportation PI and time watching television (TV) with depression (minor and major), evaluated with the Patient Health Questionnaire (PHQ-9). The association was also analyzed by gender. This is a cross-sectional study. Data were obtained from the National Health Survey 2013 (PNS), performed with a representative sample of the Brazilian adult population in 2013. The PNS collected information from 60,202 individuals aged 18 years or more. The independent variables were: leisure-time physical inactivity (LTPI), classifying as inactive or insufficiently active (the categories were combined for analysis) those who did not perform a minimum of 150 minutes of moderate or 74 minutes of vigorous leisure-time PA per week; transportation physical inactivity (TPI), for individuals who did not reach 150 minutes per week travelling by bicycle or on foot to work or other activities; and daily time watching TV > 5 hours. The principal independent variable was depression, identified by the PHQ-9. Individuals were classified with major depression when they reported > 5 symptoms on more than seven days, one of the symptoms being 'depressive mood' or 'lack of interest or pleasure'. The others were classified with minor depression. The variables used for adjustment were gender, age, schooling, and chronic disease. The prevalence of LTPI, TPI, and TV time was estimated according to depression, and differences were tested with the chi-square test. Adjusted prevalence ratios were estimated using multiple Poisson regression models. The analyses were also stratified by gender.
The mean age of the studied population was 42.9 years (95% CI: 42.6-43.2), and 52.9% were women. 77.5% and 68.1% were inactive or insufficiently active in leisure and transportation, respectively, and 13.3% spent > 5 hours watching TV. 6% and 4.1% of the Brazilian population were diagnosed with minor or major depression, respectively. The prevalence of LTPI was 5% and 9% higher among individuals with minor and major depression, respectively, compared with those without depression. The prevalence of TPI was 7% higher in those with major depression. Considering longer time watching TV, the prevalence was 45% and 74% higher among those with minor and major depression, respectively. In the analyses by gender, the associations were stronger in men than in women, and TPI was not associated with depression in women. The study thus detected a higher prevalence of leisure-time physical inactivity and, especially, of time spent watching TV among individuals with major and minor depression, after adjustment for a number of potential confounding factors; TPI was associated only with major disorders and only among men. Considering the cross-sectional design of the research, these associations point both to the importance of controlling mental health problems in the population in order to increase PA and reduce sedentary lifestyles and, on the other hand, to the need for interventions encouraging people with depression to practice PA, even for transportation.
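The major/minor classification rule described in the abstract can be sketched as a small function. This is a loose illustration of the stated rule only: the day threshold ("more than seven days"), the symptom count, the core-symptom indices, and the fallback to 'minor' are simplifications, not the study's exact PHQ-9 scoring algorithm.

```python
def classify_depression(symptom_days, core_indices=(0, 1),
                        min_symptoms=5, min_days=8):
    """Classify PHQ-9-style responses following the rule described above:
    'major' if at least `min_symptoms` symptoms were present on more than
    seven days AND one of them is a core symptom (depressed mood or loss
    of interest/pleasure, assumed here to be items 0 and 1); otherwise
    'minor' if any symptom crosses the day threshold, else 'none'."""
    present = [i for i, d in enumerate(symptom_days) if d >= min_days]
    if len(present) >= min_symptoms and any(i in core_indices for i in present):
        return "major"
    if present:
        return "minor"
    return "none"

# Hypothetical days-per-fortnight counts for the nine PHQ-9 items
print(classify_depression([10, 9, 8, 8, 8, 2, 1, 0, 0]))  # 5 symptoms incl. core
print(classify_depression([2, 1, 9, 8, 0, 0, 0, 0, 0]))   # too few symptoms
print(classify_depression([0, 0, 3, 2, 1, 0, 0, 0, 0]))   # below day threshold
```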

Keywords: depression, physical activity, PHQ-9, sedentary lifestyle

Procedia PDF Downloads 146
594 The Dynamics of a Droplet Spreading on a Steel Surface

Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov

Abstract:

The spreading of a droplet over a solid substrate is a key phenomenon in the following engineering applications: thin-film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems because of the greater evaporation surface area of droplets compared with a film of the same mass and wetting surface; this greater surface area is connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions: a close distance between droplets provides intensive heat removal but risks their coalescence into a liquid film, while a long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure, and chemical composition of the surface, and thus spreading itself can be controlled. The most important characteristic of droplet spreading on solid surfaces is the dynamic contact angle, which is a function of the contact line speed, or capillary number; however, there is currently no universal equation describing the relationship between these parameters. This paper presents the results of experimental studies of water droplets spreading on metal substrates with different surface roughness. The effects of the droplet growth rate and the surface roughness on spreading characteristics were studied at low capillary numbers. The shadow method was implemented using high-speed video cameras recording up to 10,000 frames per second, and the droplet profile was analyzed by Axisymmetric Drop Shape Analysis techniques.
According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and the contact line speed; and the formation of the equilibrium contact angle with a constant contact line. At a low droplet growth rate, the dynamic contact angle of a droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is because the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness, since in this case the liquid does not fill the microcavities and the droplet moves over an 'air cushion', i.e., the interface is a liquid/gas/solid system. At such growth rates, pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. The results thus allow us to conclude that spreading can be controlled by varying the surface roughness and the droplet growth rate. The findings may also be used for analyzing heat transfer in rivulet and droplet cooling systems of high-energy equipment.
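The "low capillary number" regime mentioned above is easy to quantify: Ca = μU/σ compares viscous to capillary forces at the moving contact line. The snippet below evaluates it for water; the contact-line speed is an assumed order-of-magnitude value, not a measurement from this study.

```python
def capillary_number(viscosity_pa_s, speed_m_s, surface_tension_n_m):
    """Ca = mu * U / sigma: ratio of viscous to capillary forces
    at the moving contact line."""
    return viscosity_pa_s * speed_m_s / surface_tension_n_m

# Water at room temperature spreading at an assumed contact-line speed
mu = 1.0e-3      # Pa*s, dynamic viscosity of water (approximate)
sigma = 72.8e-3  # N/m, surface tension of water (approximate)
u = 1.0e-3       # m/s, assumed contact-line speed
print(capillary_number(mu, u, sigma))
```

The result is of order 10⁻⁵, i.e. well within the low-Ca regime the study works in, where capillary forces dominate viscous ones.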

Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading

Procedia PDF Downloads 318
593 Cinematic Transgression and Sexuality: A Study of Rituparno Ghosh's ‘Queer Trilogy’

Authors: Sudipta Garai

Abstract:

Films, as a cultural and social practice, remain a dominant space for the creation and destruction of ideologies and practices, which makes their sociological viewing, analysis, and interpretation a complex affair. Film remains the doorway between the interpretations and understanding of the writer/director and the reader/viewer. Since India is a multi-linguistic culture, film plays a far more intriguing role than newspapers, books, stories, novels, or any other medium of expression. Known as the largest democracy, the State seems to guarantee and safeguard people's choices and a life of dignity through its Fundamental Rights and Directives. However, the laws contradict themselves when IPC 377 criminalizes everything except peno-vaginal sexual intercourse, restricting alternative sexual preferences and practices and calling into question the State's sense of 'democracy.' In this context, the issue of homosexuality has come up in bits and pieces through various representations in 'popular' cinema, mostly as sudden references of mockery and laughter, with explicit narratives of the 'queer' missing. Rituparno Ghosh, an eminent filmmaker of Bengal, emerged as the 'queer' face of Kolkata, specifically through his 'queer' trilogy (Memories in March, 2010; Arekti Premer Golpo, 2010; Chitrangada: A Crowning Wish, 2012), coming out of his own closet and speaking about his own sexual choices, not only through the explicit narratives of the films but also in person, which made these films an important point of departure in Bengali film history. A sociological reading of these films through discourse analysis is undertaken, with the critical questions of 'choice,' 'freedom,' 'love and marriage,' and, most importantly, 'change.'
This study focuses not only on the films and the analysis of their content but also engages with their audience, queer and otherwise, in order to extend beyond the art form into the actual vulnerabilities of life and experience, through informal interviews, focused group discussions, and engagement with real-life narratives. Research of this kind is often looked upon as a medium of change, hoping for a better world that wipes away the discrimination and 'shame' the 'queer' face in their everyday lives; yet social science research is limited by its 'time' and academic boundary, where the hope of change might be initiated but not fulfilled. The experiences and reflections of the 'queer' redefined not only the narratives of the films but also me as a researcher. The perspectives of the 'hetero-normative' informants gave a broader picture of the study and of the socio-cultural complications intertwined with the ideas of resistance and change. The issues of subjectivity, power, and position cannot be wiped out in a study of this kind, as politics and aesthetics become integrated with each other in the creation of any art form, be it films or a research study.

Keywords: cinema, alternative sexualities, narratives, sexual choices, state and society

Procedia PDF Downloads 352
592 Slope Stability Assessment in Metasedimentary Deposit of an Opencast Mine: The Case of the Dikuluwe-Mashamba (DIMA) Mine in the DR Congo

Authors: Dina Kon Mushid, Sage Ngoie, Tshimbalanga Madiba, Kabutakapua Kakanda

Abstract:

Slope stability assessment is still the biggest challenge in mining activities and civil engineering structures. The slope in an opencast mine frequently cuts through multiple weak layers that lead to instability of the pit. Faults and soft layers throughout the rock increase weathering and erosion rates; it is therefore essential to investigate the stability of these complex strata. In the Dikuluwe-Mashamba (DIMA) area, the stratum consists of metamorphic rocks whose parent rocks are sedimentary rocks with a low degree of metamorphism. Owing to the composition and metamorphism of the parent rock, the rock formations differ in hardness: where the dolomitic and siliceous content is high, the rock is hard; where the argillaceous and sandy content is high, it is softer. Therefore, in the vertical direction the same rock layer appears as alternating weak and hard layers, and in the horizontal direction as alternating soft and hard bands. From the structural point of view, the main structures in the mining area are the Dikuluwe dipping syncline and the Mashamba dipping anticline, and the occurrence of rock formations varies greatly. During the folding of the rock formations, stress concentrates in the soft layers, causing the weak layers to break, while interlayer dislocation occurs at the same time. This article aimed to evaluate the stability of the metasedimentary rocks of the Dikuluwe-Mashamba (DIMA) open-pit mine using limit equilibrium and stereographic methods. Based on the statistically surveyed structural planes, stereographic projection was used to study the slope's stability and to examine the discontinuity orientation data in order to identify failure zones along the mine. The results revealed that the slope angle is too steep and can easily induce landslides.
The sensitivity analysis of the numerical method showed that the slope angle and groundwater significantly affect the slope safety factor; in particular, an increase in the groundwater level substantially reduces the stability of the slope. Among the factors affecting the variation of the safety factor, the influence of the bulk density of the soil is greater than that of the rock mass, the cohesion of the soil mass is smaller than that of the rock mass, and the friction angle in the rock mass is much larger than that in the soil mass. The analysis showed that the rock mass structure types are mostly scattered and fragmented, the strata change considerably, and the variation of the rock and soil mechanics parameters is significant.
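The limit-equilibrium safety factor discussed above is, in its simplest planar form, the ratio of resisting to driving forces along a slip surface. The sketch below implements that single-plane simplification to show why groundwater lowers the safety factor; all input values (block weight, plane dip, cohesion, water force) are hypothetical, not the DIMA mine's parameters.

```python
import math

def planar_safety_factor(cohesion_kpa, friction_deg, weight_kn, slip_angle_deg,
                         slip_area_m2, water_force_kn=0.0):
    """Limit-equilibrium factor of safety for a single planar slip surface:
    FS = (c*A + (W*cos(psi) - U)*tan(phi)) / (W*sin(psi)).
    A simplification of the limit equilibrium methods cited above."""
    psi = math.radians(slip_angle_deg)
    phi = math.radians(friction_deg)
    normal = weight_kn * math.cos(psi) - water_force_kn  # effective normal force
    resisting = cohesion_kpa * slip_area_m2 + normal * math.tan(phi)
    driving = weight_kn * math.sin(psi)
    return resisting / driving

# Hypothetical block: W = 2000 kN, plane dips 35 deg, c = 25 kPa over 40 m2
fs_dry = planar_safety_factor(25, 30, 2000, 35, 40)
fs_wet = planar_safety_factor(25, 30, 2000, 35, 40, water_force_kn=600)
print(round(fs_dry, 2), round(fs_wet, 2))  # water uplift lowers FS
```

The water uplift force U reduces the effective normal force and hence the frictional resistance, reproducing the groundwater sensitivity the analysis reports.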

Keywords: slope stability, weak layer, safety factor, limit equilibrium method, stereography method

Procedia PDF Downloads 253
591 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early-stage technologies have been particularly challenging to manage owing to their numerous deep uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage, and a long, uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them turn fruitless. The high stakes naturally call for better outcomes, which makes a patenting decision harder to make, since a good and well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed the existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving toward the later steps of managing early-stage technologies. We provide a procedure for valuing a technology thriftily and making the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision from the survey ratings, and the ratings then assisted us in generating relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut-off schedule.
Such a schedule of realistic practices has not yet appeared in the existing research. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. The procedure was welcomed by practitioners in our expert panel and by the university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings identify the top decision factors, decision processes, and decision thresholds of key parameters, offering a more practical perspective that complements extant knowledge. Our results could be affected by our sample size and somewhat biased by our focus on the Silicon Valley area. Future research, blessed with bigger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
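The scoring and weighted-averaging step described above can be sketched as a simple weighted average with a cut-off. The factor names, weights, and the 3.5 threshold below are illustrative assumptions, not the survey-derived values from the study:

```python
# Hypothetical sketch of a patenting index as a weighted average of factor
# scores. Factor names, weights, and the 3.5 cut-off are illustrative
# assumptions, not the study's survey-derived values.

def patenting_index(scores, weights):
    """Weighted average of factor scores (each rated 1-5); weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[f] * weights[f] for f in weights)

weights = {"novelty": 0.35, "market_size": 0.25, "enforceability": 0.25, "cost_to_file": 0.15}
scores = {"novelty": 4, "market_size": 3, "enforceability": 5, "cost_to_file": 2}

index = patenting_index(scores, weights)        # -> 3.7
decision = "file" if index >= 3.5 else "defer"  # cut-off applied to the index
```

A technology office would substitute its own survey-calibrated weights and cut-off; the mechanics of the decision rule stay the same.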

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 337
590 A Galectin from Rock Bream Oplegnathus fasciatus: Molecular Characterization and Immunological Properties

Authors: W. S. Thulasitha, N. Umasuthan, G. I. Godahewa, Jehee Lee

Abstract:

In fish, innate immune defense is the first immune response against microbial pathogens and consists of several antimicrobial components. Galectins are carbohydrate-binding lectins that can identify pathogens by recognizing pathogen-associated molecular patterns, and they play a vital role in the regulation of innate and adaptive immune responses. Rock bream Oplegnathus fasciatus is one of the most important cultured species in Korea and Japan. Considering the losses due to microbial pathogens, the present study was carried out to understand the molecular and functional characteristics of a galectin under normal and pathogenic conditions, which could help establish an understanding of the immunological components of rock bream. The complete cDNA of rock bream galectin-like protein B (rbGal like B) was identified from a cDNA library, and in silico analysis was carried out using bioinformatic tools. The genomic structure was derived from a BAC library by sequencing a specific clone and using Spidey. The full-length rbGal like B (contig14775) cDNA of 517 nucleotides comprised a 435 bp open reading frame encoding a deduced protein of 145 amino acids. The molecular mass of the putative protein was predicted as 16.14 kDa with an isoelectric point of 8.55. A characteristic conserved galactose-binding domain was located from amino acids 12 to 145. The genomic structure of rbGal like B consists of 4 exons and 3 introns. Moreover, pairwise alignment showed that rbGal like B shares the highest similarity (95.9%) and identity (91%) with Takifugu rubripes galectin-related protein B-like and the lowest similarity (55.5%) and identity (32.4%) with the Homo sapiens homolog. Multiple sequence alignment demonstrated that galectin-related protein B is conserved among vertebrates. A phylogenetic analysis revealed that rbGal like B clusters together with other fish homologs in the fish clade.
It showed the closest evolutionary link with Takifugu rubripes. Tissue distribution and expression patterns of rbGal like B upon immune challenge were examined using qRT-PCR assays. Among all tested tissues, rbGal like B expression was highest in gill, followed by kidney, intestine, heart, and spleen. Upon immune challenge, it showed up-regulated expression with Edwardsiella tarda, rock bream iridovirus, and poly I:C up to 6 h post injection, and up to 24 h with LPS. In the presence of Streptococcus iniae, however, rbGal like B showed an up-and-down pattern of expression peaking at 6-12 h. Results from the present study reveal the phylogenetic position of rbGal like B and its role in the response to microbial infection in rock bream.
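Expression patterns like those reported here are conventionally quantified from qRT-PCR Ct values with the 2^-ΔΔCt method; a minimal sketch follows, with invented Ct values rather than data from the study:

```python
# Minimal sketch of the 2^-ddCt method, the standard way qRT-PCR expression
# data such as these are analysed. Ct values below are invented for
# illustration, not taken from the study.

def fold_change(ct_target_chal, ct_ref_chal, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene (challenged vs control),
    normalised to a reference (housekeeping) gene."""
    d_ct_chal = ct_target_chal - ct_ref_chal  # normalise challenged sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalise control sample
    dd_ct = d_ct_chal - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Target amplifies 2 cycles earlier (relative to reference) after challenge:
fc = fold_change(22.0, 18.0, 26.0, 20.0)  # -> 4.0-fold up-regulation
```

Each cycle of earlier amplification corresponds to a doubling of transcript abundance, which is why the fold change is exponential in ΔΔCt.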

Keywords: galectin like protein B, immune response, Oplegnathus fasciatus, molecular characterization

Procedia PDF Downloads 343
589 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses

Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer

Abstract:

The knowledge of residual stresses in welded bridge components is essential to determine their effect on fatigue life. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses in the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck at different time steps during the welding process. Twin-wire submerged arc welding is used to construct the orthotropic plate. A double ellipsoidal volumetric heat source model describes the heat flow through the material for a moving heat source. The heat input is used to determine the heat flux, which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated at different time steps to simulate the passage of the welding torch at the considered welding speed. This results in a time-dependent heat flux that is applied as a thermal load. Thermal material behavior is specified by assigning the material properties as a function of the high temperatures reached during welding. Isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced into the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the temperatures are calculated in the thermal analysis, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate with tack welds, which are implemented in the model. The deck plate is allowed to expand freely in the upward direction while it rests on a firm and flat surface.
This behavior is modelled using grounded springs. Furthermore, symmetry points and lines prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used, and the temperatures calculated during the thermal analysis are introduced as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection that is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses. The distribution of residual stresses in the orthotropic bridge deck is compared with results from the literature. The literature proposes uniform tensile yield stresses in the weld, whereas the finite element model showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis thus yields a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life of welded bridge components can be studied.
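The double ellipsoidal heat source mentioned above is commonly formulated as Goldak's model; a minimal sketch of the volumetric flux in one ellipsoid quadrant follows. The heat input, efficiency, and semi-axis values are illustrative assumptions, not the parameters of the study's twin-wire process:

```python
import math

# Sketch of a Goldak-type double-ellipsoidal volumetric heat source. The heat
# input Q and the ellipsoid semi-axes are illustrative assumptions, not the
# parameters of the study's welding process.

def goldak_flux(x, y, z, Q, a, b, c, f):
    """Volumetric flux [W/m^3] in one ellipsoid quadrant (front or rear).
    x: along the weld, y: transverse, z: depth; f is the front/rear heat
    fraction, with f_front + f_rear = 2."""
    coeff = 6.0 * math.sqrt(3.0) * f * Q / (a * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * x**2 / a**2 - 3.0 * y**2 / b**2 - 3.0 * z**2 / c**2)

Q = 0.9 * 30.0 * 400.0           # arc efficiency * voltage * current [W]
a_front, a_rear = 0.004, 0.008   # front/rear semi-axes along the weld [m]
b, c = 0.004, 0.004              # half-width and depth semi-axes [m]
f_front = 2.0 * a_front / (a_front + a_rear)  # common choice: flux continuous at x = 0
f_rear = 2.0 * a_rear / (a_front + a_rear)
q_peak = goldak_flux(0.0, 0.0, 0.0, Q, a_front, b, c, f_front)
```

Evaluating this flux element by element at each time step, with x shifted by welding speed times time, gives the moving thermal load described in the abstract.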

Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation

Procedia PDF Downloads 165
588 Developing a Framework for Designing Digital Assessments for Middle-school Aged Deaf or Hard of Hearing Students in the United States

Authors: Alexis Polanco Jr, Tsai Lu Liu

Abstract:

Research on digital assessment for deaf and hard of hearing (DHH) students is scant. Part of this stems from DHH assessment design sitting at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g., in user experience design (UXD) there are Jacob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. This research therefore seeks to provide a framework by which these disciplines can share knowledge. The framework entails asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise, explain how their work might serve DHH students, and expose any incongruence between their ideal process and what is permissible at their workplace. The research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, and so on, the SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology.
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; and C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not place designers and accessibility specialists on the same team.

Keywords: deaf, hard of hearing, design, guidelines, education, assessment

Procedia PDF Downloads 58
587 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet due to variations in weather conditions degrade the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar affect the power output and other thermodynamic parameters of the gas turbine cycle. This paper therefore evaluates how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also considers the effect of varying aerodynamic and hydrodynamic conditions that degrade the operation of the propulsion system through the increased resistance resulting from projected levels of hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring, and summer seasons. Real-time daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature and power output, the total fuel consumed per voyage, and other performance variables were predicted separately under both calm and adverse weather conditions.
The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, and the Beaufort scale was adopted to determine the magnitude of sea waves resulting from rough weather. The simulation of gas turbine performance and the voyage analysis were carried out with two integrated computer codes developed at Cranfield University, 'Turbomatch' and 'Poseidon'. The project aims at developing a method for predicting the off-design behavior of a marine gas turbine installed and operated as the main prime mover for both propulsion and the powering of all other auxiliary services onboard a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
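The sensitivity of an air-breathing engine to ambient deviations from the 15 °C / 1.013 bar reference can be sketched with the standard corrected-flow ratio δ/√θ. This generic first-order rule of thumb is an illustrative assumption, not Turbomatch output:

```python
import math

# Sketch of why deviations from the ISO reference inlet conditions (15 C,
# 1.013 bar) matter: at fixed corrected mass flow, the actual inlet mass flow
# (and, to first order, shaft power) scales with delta / sqrt(theta). This is
# a generic rule of thumb, not output from the study's Turbomatch code.

T_REF_K = 288.15    # 15 C reference temperature [K]
P_REF_BAR = 1.013   # reference ambient pressure [bar]

def corrected_flow_ratio(t_amb_c, p_amb_bar):
    """Actual/reference inlet mass flow at a fixed corrected mass flow."""
    theta = (t_amb_c + 273.15) / T_REF_K   # temperature ratio
    delta = p_amb_bar / P_REF_BAR          # pressure ratio
    return delta / math.sqrt(theta)

hot = corrected_flow_ratio(35.0, 1.00)   # tropical leg: less air, less power
cold = corrected_flow_ratio(5.0, 1.02)   # winter leg: denser air, more power
```

On this basis a hot equatorial day costs several percent of inlet mass flow relative to the reference day, which is the mechanism behind the seasonal power and fuel differences the voyage analysis quantifies.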

Keywords: cruise ship, gas turbine, hull fouling, performance, propulsion, weather

Procedia PDF Downloads 160
586 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends

Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. In addition, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have drawbacks such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass along with serious reductions in moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to torrefaction in a horizontal tube furnace at 200 °C under nitrogen flow. The solid residue obtained from torrefaction, also called 'biochar', was analyzed to monitor the variations taking place in biomass properties. On the other hand, Turkish lignites from the Elbistan, Adıyaman-Gölbaşı, and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. The lignites were blended with the obtained biochars at a blending ratio of 10 wt% biochar, so that the lignites were the dominant constituents of the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900 °C with a heating rate of 40 °C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending.
In addition, characterization techniques including X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and scanning electron microscopy (SEM) were applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. The results revealed that blending lignite with 10 wt% biochar created synergistic behavior during co-combustion compared with the individual burning of the ingredient fuels. The burnout and ignition performances of each blend were compared, taking into account the structures and characteristics of the lignite and biomass, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased with the addition of biomass, co-combustion remains a reasonable and sustainable solution owing to its environmental benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
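The synergy claim rests on comparing the measured burnout of a 90/10 lignite/biochar blend with the mass-weighted average of the individual burnouts, the value expected if the fuels burned independently. A sketch of that additivity check follows; all burnout percentages are invented for illustration:

```python
# Sketch of the additivity check behind a co-combustion synergy claim: for a
# 90/10 lignite/biochar blend, the "no interaction" burnout is the
# mass-weighted average of the individual burnouts; a measured value above it
# indicates synergy. All burnout percentages are invented for illustration.

def additive_burnout(burnout_lignite, burnout_biochar, biochar_frac=0.10):
    """Mass-weighted burnout expected if the fuels burned independently."""
    return (1.0 - biochar_frac) * burnout_lignite + biochar_frac * burnout_biochar

def synergy(measured_blend, burnout_lignite, burnout_biochar, biochar_frac=0.10):
    """Positive -> the blend burns out better than its ingredients predict."""
    return measured_blend - additive_burnout(burnout_lignite, burnout_biochar, biochar_frac)

expected = additive_burnout(82.0, 95.0)    # wt% burnout -> 83.3
deviation = synergy(88.0, 82.0, 95.0)      # positive, i.e. synergistic
```

The same comparison can be applied point by point along the thermogravimetric curves to see in which temperature range the interaction occurs.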

Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment

Procedia PDF Downloads 330
585 Implications of Social Rights Adjudication on the Separation of Powers Doctrine: Colombian Case

Authors: Mariam Begadze

Abstract:

Separation of powers (SOP) is the objection most frequently raised against the judicial enforcement of socio-economic rights. Although much has been written to refute it, the effect that the current practice of social rights adjudication has had on the construction of the SOP doctrine in specific jurisdictions has rarely been assessed. Colombia is an appropriate case study for this question. The notion of collaborative SOP in the 1991 Constitution has affected the court's conception of its role; in turn, trends in the jurisprudence have further shaped the collaborative notion of SOP. Other institutional characteristics of Colombian constitutional law have played their part as well. The tutela action, a particularly flexible and fast judicial action for individuals, has placed the judiciary in a more confrontational relation vis-à-vis the political branches. Later interventions through abstract review of austerity measures further contributed to that development. Logically, the court's activism in this sphere has attracted attacks from the political branches, which have turned out to be unsuccessful precisely because of the court's outreach to the middle class, whose direct reliance on the court has become a source of direct democratic legitimacy. Only later have structural judgments attempted to revive the collaborative notion behind the SOP doctrine. However, the court-supervised monitoring of implementation has itself shown fluctuations in the mode of collaboration, moving recently toward more managerial supervision. This is not surprising considering the highly dysfunctional political system in Colombia, where distrust seems to be the default starting point in the interaction of the branches.
The paper aims to answer the question of what the appropriate judicial tools are to realize the collaborative notion of SOP in a context where the court must strike a balance between a strong executive and a weak, largely dysfunctional legislature. If the recurrent abuse lies in the indifference and inaction of the legislative branch toward pressing political issues, what tools does the court have to activate the political process? The answer partly lies in the court's other strand of jurisprudence, in which it combines substantive objections with procedural objections concerning the operation of the legislative branch. The primary example is the decision on value-added tax on basic goods, in which the court invalidated the law based on the absence of sufficient deliberation in Congress on the bill's implications for the equity and progressiveness of the entire tax system. The decision led to the congressional rejection of an identical bill based on the arguments put forward by the court. The case is perhaps the best illustration of the collaborative notion of SOP, in which the court refrains from categorical pronouncements while doing its part to activate the political process. This also legitimizes the court's activism, grounded in its role of countering the most perilous abuse in the Colombian context: the failure of the political system to engage seriously with pressing political questions.

Keywords: Colombian constitutional court, judicial review, separation of powers, social rights

Procedia PDF Downloads 95
584 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option for removing ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymer were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition had not previously been used for membrane modification. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is fixed. The advantage of this process lies in the small quantities of polymer that can be coated onto the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta potential, and contact angle before and after deposition. First, contact angle (θ) measurements show that surface hydrophilicity is notably improved by coating either PEI or PSS. Moreover, the contact angle decreases monotonically with the amount of sprayed solution, and the hydrophilicity enhancement proved better with PSS (from 62° to 35°) than with PEI (from 62° to 53°).
Values of the zeta potential (ζ) were estimated by measuring the streaming current generated by a pressure difference across a channel made by clamping two membranes. The ζ values demonstrate that deposits of PSS (negative at pH 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta potential measurements also show that the sprayed quantity has little impact on the membrane charge, except at very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, the rejection of a salt containing a divalent cation can be increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase of negative charge induced by PSS deposition raises NaCl rejection from 5 to 45% owing to electrostatic repulsion of the Cl⁻ ion by the negative surface charge. Finally, a notable fall in permeation flux due to the polymer layer coated on the surface was observed, and the optimal polymer concentration in the sprayed solution remains to be determined to optimize performance.
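The two quantities at the heart of these measurements can be sketched as follows: observed rejection R = 1 - Cp/Cf, and the zeta potential via the classical Helmholtz-Smoluchowski relation (written here for the streaming-potential coefficient; the streaming-current variant used in the study differs only by a cell-geometry factor). All numerical values are illustrative assumptions, not measured data:

```python
# Illustrative sketch of the two quantities reported above: observed salt
# rejection R = 1 - Cp/Cf, and the zeta potential from the classical
# Helmholtz-Smoluchowski relation (written for the streaming-potential
# coefficient dE/dP; the streaming-current variant differs by a cell-geometry
# factor). All numerical values are assumptions, not measured data.

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def observed_rejection(c_permeate, c_feed):
    """Fraction of solute retained by the membrane."""
    return 1.0 - c_permeate / c_feed

def zeta_smoluchowski(dE_dP, viscosity, conductivity, eps_r=78.4):
    """Zeta potential [V] from the streaming-potential coefficient [V/Pa],
    solution viscosity [Pa s], and conductivity [S/m]."""
    return dE_dP * viscosity * conductivity / (EPS0 * eps_r)

R = observed_rejection(0.55, 1.0)                  # 45 % rejection
zeta = zeta_smoluchowski(-2.0e-8, 1.0e-3, 0.05)    # negative, as for a PSS coating
```

The sign of ζ tracks the coated polymer's charge (negative for PSS, positive for PEI), which in turn sets which co-ion the electrostatic rejection mechanism acts on.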

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 179
583 Hybrid Living: Emerging Out of the Crises and Divisions

Authors: Yiorgos Hadjichristou

Abstract:

The paper focuses on the hybrid living typologies brought about by the global crisis. Mixing generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essentially sustainable housing approach and the respective urban development. The thematic is based on methodologies developed both in the academic, educational environment, including students' research, and in the practice of architecture, including case studies executed by the author on the island of Cyprus. Both paths of the research deal with an explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities involves understanding new ways of living, which include, among others: the re-introduction of natural phenomena, the accommodation of work and services in the living realm, the interchange of public and private, and injections of communal events into individual living territories. The issues and binary questions raised by what is natural and what artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the eminent scar of the dividing 'Green Line' and the 'ghost city' of Famagusta waiting to be resurrected, the conventional understanding of the limits and definitions of property is irreversibly shaken. The situation is further aggravated by the unprecedented crisis on the island.
All these observations set the premises for re-examining urban development and the respective sustainable housing in a synergy where their characteristics exchange positions, merge into each other, emerge and vanish at once, and change from permanent to ephemeral. This fluidity of conditions attempts to render a future of the built and unbuilt realms whose main focus is redirected to the human and the social. Weather and social ritual scenographies, together with 'spontaneous urban landscapes' of 'momentary relationships', suggest a recipe for emerging urban environments and sustainable living. Thus, the paper aims at opening a discourse on the future of sustainable living merged with sustainable urban development in relation to the imminent solution of the division of the island, where the issue of property has become the main obstacle to be overcome. At the same time, it attempts to link this approach to the global need for a sustainable evolution of the urban and living realms.

Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena

Procedia PDF Downloads 255
582 The Shape of the Sculptor: Exploring Psychologist’s Perceptions of a Model of Parenting Ability to Guide Intervention in Child Custody Evaluations in South Africa

Authors: Anthony R. Townsend, Robyn L. Fasser

Abstract:

This research project provides an interpretative phenomenological analysis of a proposed conceptual model of parenting ability designed to offer recommendations to guide intervention in child custody evaluations in South Africa. A recent review of the literature on child custody evaluations reveals that, while there have been significant and valuable shifts in the capacity of the legal system, aided by mental health professionals, to understand children and family dynamics, a conceptual gap remains regarding the nature of parenting ability. To address this paucity of a theoretical basis for considering parenting ability, the project reviews a dimensional model for the assessment of parenting ability that conceives of parenting ability as a combination of good parenting and parental fitness. The model serves as a conceptual framework to guide child custody evaluation and refine intervention in such cases, so as to better meet the best interests of the child in a manner that bridges the professional gap between parties, legal entities, and mental health professionals. Using a model of good parenting as its point of theoretical departure, the model incorporates both intra-psychic and interpersonal attributes and behaviours of parents to form an impression of parenting ability and identify areas for potential enhancement.
This research, therefore, hopes to achieve the following: (1) to provide nuanced descriptions of parents’ parenting ability; (2) to describe parents’ parenting potential; (3) to provide a parenting assessment tool for investigators in forensic family matters that will enable more useful recommendations and interventions; (4) to develop a language of consensus for investigators, attorneys, judges and parents, in forensic family matters, as to what comprises parenting ability and how this can be assessed; and (5) that all of the aforementioned will serve to advance the best interests of the children involved in such litigious matters. The evaluative promise and post-assessment prospects of this model are illustrated through three interlinking data sets: (1) the results of interviews with South African psychologists about the model, (2) retrospective analysis of care and contact evaluation reports using the model to determine if different conclusions or more specific recommendations are generated with its use and (3) the results of an interview with a psychologist who piloted this model by using it in care and contact evaluation.

Keywords: alienation, attachment, best interests of the child, care and contact evaluation, children’s act (38 of 2005), child custody evaluation, civil forensics, gatekeeping, good parenting, good-enough parenting, health professions council of South Africa, family law, forensic mental healthcare practitioners, parental fitness, parenting ability, parent management training, parenting plan, problem-determined system, psychotherapy, support of other child-parent relationship, voice of the child

Procedia PDF Downloads 102
581 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population

Authors: Sandra Smieszek, Jonathan Haines

Abstract:

Introduction: Numerous research efforts have focused on searching for 'longevity genes'. However, attempts to decipher the genetic component of the longevous phenotype have had limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish, with their large cohort of extended kinships, are an excellent population for in-depth analysis via a family-based approach. The heritability of longevity increases with age, with a significant genetic contribution seen in individuals living beyond 60 years of age. In the present analysis, we show that the estimated heritability of longevity increases with age, particularly on the paternal side. Methods: The analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified across ages and distinct for paternal and maternal cohorts. Specifically, 5437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess chip heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with an increased chance of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific to the age group of interest (>90). Of particular interest was an ROH on chromosome 13, P < 0.0001, whose lead SNPs, rs7318486 and rs9645914, point to COL4A2 and COL25A1. COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broad spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs from the significant ROH regions, with an AUC of 0.945, giving the ability to discriminate individuals living to 90 years of age and beyond. Conclusion: Our results suggest that a family history of longevity does indeed increase the odds of individual longevity. The preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we look at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The regions of interest identified via ROH analysis could be of profound importance for understanding the genetic underpinnings of longevity.
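A run-of-homozygosity scan of the kind used here can be sketched as a search for long stretches of consecutive homozygous genotype calls. Real pipelines (e.g., PLINK's --homozyg) use sliding windows with allowances for heterozygous and missing calls; the minimum-length threshold below is an illustrative assumption:

```python
# Toy sketch of a run-of-homozygosity (ROH) scan: find stretches of
# consecutive homozygous SNP calls longer than a minimum length. Real tools
# (e.g. PLINK --homozyg) use sliding windows with heterozygote/missing
# allowances; the threshold here is an illustrative assumption.

def find_roh(genotypes, min_len=5):
    """genotypes: list of 0/1/2 minor-allele counts; a het call (1) breaks a
    run. Returns (start, end) index pairs of qualifying runs, end exclusive."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g in (0, 2):                 # homozygous call extends the run
            if start is None:
                start = i
        else:                           # heterozygous call breaks the run
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))
    return runs

g = [0, 0, 2, 0, 0, 2, 1, 2, 2, 0, 2, 0, 1, 0]
rohs = find_roh(g, min_len=5)   # two qualifying runs in this toy vector
```

Comparing the frequency of such runs between the >90 group and younger controls, region by region, is what yields age-group-specific ROH like the chromosome 13 signal reported above.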

Keywords: regions of homozygosity, longevity, SNP, Amish

Procedia PDF Downloads 223