Search results for: screwed connections
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 665

95 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building

Authors: G. Wimmers

Abstract:

The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, this requirement has proven challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installation, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood-processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far under the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
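
For context, the reported value ties the leakage airflow at the 50 Pa test pressure to the net air volume; back-calculating from the figures stated in the abstract (net volume 7383 m³, result 0.07 ach@50Pa) gives roughly:

\[
\dot{V}_{50} = n_{50}\,V_{\mathrm{net}} = 0.07\ \mathrm{h^{-1}} \times 7383\ \mathrm{m^3} \approx 517\ \mathrm{m^3/h},
\]

i.e., the envelope leaked only about 517 m³/h under a 50 Pa pressure difference, roughly an order of magnitude below what the 0.6 ach@50Pa Passive House limit would allow for the same volume.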

Keywords: air changes, airtightness, envelope design, industrial building, passive house

Procedia PDF Downloads 148
94 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance or of cost to the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private-information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to the holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent. Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as the sharing of private material information, that benefits mutual fund investors due to affiliation with a financial conglomerate. However, the research also has a normative dimension: allowing such insider trading and exploitation of superior information not only harms unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
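
The abstract mentions robustness to "alternative risk adjustments and cross-section regression methodologies" without naming a specific factor model; a minimal sketch of one common choice, a Carhart four-factor time-series alpha, is shown below. The data layout, column names, and usage lines are assumptions, not the paper's actual setup.

```python
# Hedged sketch: estimating a risk-adjusted alpha per fund with a Carhart
# four-factor time-series regression, one common way to "risk adjust" returns.
import pandas as pd
import statsmodels.api as sm

def four_factor_alpha(fund_excess_returns: pd.Series, factors: pd.DataFrame) -> float:
    """fund_excess_returns: monthly fund return minus the risk-free rate.
    factors: DataFrame with columns ['MKT_RF', 'SMB', 'HML', 'MOM'] (hypothetical names)."""
    X = sm.add_constant(factors[["MKT_RF", "SMB", "HML", "MOM"]])
    model = sm.OLS(fund_excess_returns, X, missing="drop").fit()
    return model.params["const"]  # the intercept is the monthly alpha

# Usage idea (hypothetical panel): compute an alpha per fund, then compare the
# distribution of alphas for affiliated vs. unaffiliated funds.
# alphas = panel.groupby("fund_id").apply(
#     lambda g: four_factor_alpha(g["ret_rf"], g[["MKT_RF", "SMB", "HML", "MOM"]]))
```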

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 191
93 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes

Authors: Mohsen Hababalahi, Morteza Bastami

Abstract:

Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports on these events. One of the main reasons for the impairment of buried pipelines during earthquakes is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines are structurally very different from other structures (being long and light), a comparison of results from previous earthquakes with those of other structures suggests that the liquefaction hazard for buried pipelines is not severe unless governing parameters such as earthquake intensity and loose soil conditions, among other factors, are high. Recent liquefaction research for buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. The damage investigations have revealed that the pipeline damage ratio (number of failures per km) is much larger in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and the water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline 100 meters long and 0.8 meters in diameter, covered by light sandy soil with a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical data, a classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between soil and pipelines. The results of this study indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that, although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given to decrease the liquefaction risk to buried pipelines.
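
The abstract lists "determination of the parameters governing the probability of liquefaction" among the required inputs but does not spell out the procedure. The sketch below shows one widely used simplified screening (a Seed-Idriss type cyclic stress ratio check) under assumed soil parameters; it is illustrative only and is not the paper's finite element workflow.

```python
# Hedged sketch: simplified liquefaction-triggering screen at the pipe burial depth.
# Only the 2.5 m depth comes from the abstract; unit weights, water table, a_max,
# and the cyclic resistance ratio (CRR) below are illustrative assumptions.

def stress_reduction_factor(depth_m: float) -> float:
    """Depth reduction coefficient r_d (NCEER-style approximation)."""
    if depth_m <= 9.15:
        return 1.0 - 0.00765 * depth_m
    return 1.174 - 0.0267 * depth_m  # valid roughly to ~23 m

def cyclic_stress_ratio(a_max_g: float, sigma_v: float, sigma_v_eff: float, depth_m: float) -> float:
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * stress_reduction_factor(depth_m)

depth = 2.5                              # m, burial depth from the abstract
gamma, gamma_w, gwt = 18.0, 9.81, 1.0    # kN/m3 soil and water unit weights, water table depth (assumed)
sigma_v = gamma * depth                  # total vertical stress, kPa
sigma_v_eff = sigma_v - gamma_w * max(depth - gwt, 0.0)  # effective vertical stress, kPa
csr = cyclic_stress_ratio(a_max_g=0.3, sigma_v=sigma_v, sigma_v_eff=sigma_v_eff, depth_m=depth)
crr = 0.12                               # assumed resistance of loose sand
print(f"CSR = {csr:.2f}, factor of safety against liquefaction = {crr / csr:.2f}")
```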

Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method

Procedia PDF Downloads 513
92 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant for the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisitional process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially the processing component of WM, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked their preference for formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants' syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and the preference for formulaic expressions in the preference ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores covary significantly. However, WM and PSTM have different predictive value for participants' preference for formulaic language: both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favor a role for interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to and processing of the input, beyond simple retention, play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase containing the lexical core of the formula over the one without it, attesting to learners' awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners.
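
The regression reported in the abstract is not specified in detail; a minimal sketch of one plausible specification, with a hypothetical data file and variable names, might look as follows.

```python
# Illustrative sketch only: regressing formulaic-language preference on WM and PSTM
# measures with proficiency controls. Column names and the OLS form are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # hypothetical file, one row per learner
model = smf.ols(
    "formulaic_preference ~ wm_processing + wm_storage + pstm_span + cloze + grammar",
    data=df,
).fit()
print(model.summary())  # inspect which predictors reach significance
```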

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 63
91 A Multi-Perspective, Qualitative Study into Quality of Life for Elderly People Living at Home and the Challenges for Professional Services in the Netherlands

Authors: Hennie Boeije, Renate Verkaik, Joke Korevaar

Abstract:

Dutch national policy promotes that the elderly remain living at home longer, so that they are admitted to a nursing home less often or only later in life. While living at home, it is important that they experience a good quality of life, and care providers in primary care support this. This study investigated what quality of life means for the elderly and which characteristics care should have to support living at home longer with quality of life. To explore this topic, a qualitative methodology was used. Four focus groups were conducted: two with elderly people who live at home and their family caregivers, one with district nurses employed in home care services, and one with elderly care physicians working in primary care. In addition, individual interviews were conducted with general practitioners (GPs). In total, 32 participants took part in the study. The data were thematically analysed with MaxQDA software for qualitative analysis and reported. Quality of life is a multi-faceted term for the elderly. The essence of their description is that they can still undertake activities that matter to them. Good physical health, mental well-being, and social connections enable them to do this. Control over their own lives is important for some. They are of the opinion that how they experience life and manage old age is related to their resilience and coping. Key terms in GPs' definitions of quality of life are likewise physical and mental health and social contacts; these are the three pillars. Next to this, elderly care physicians mention security and safety, and district nurses add control over one's own life and meaningful daily activities. They agree that with frail elderly people the balance is delicate, and a change in one of the three pillars can cause it to collapse like a house of cards. When discussing what support is needed, the professionals agree on access to care with a low threshold, prevention, and life course planning. When care is provided in a timely manner, a worsening of the situation can be prevented. They agree that hospital care is often not needed, since most of the problems of the elderly have to do with care and security rather than with cure per se. GPs can consult elderly care physicians to lower their workload and to bring in specific knowledge. District nurses often signal changes in the situation of the elderly. According to them, the elderly predominantly need someone to watch over them and provide them with a feeling of security. Life course planning and advance care planning can contribute to uniform treatment in line with older adults' wishes. In conclusion, all stakeholders, including elderly persons, agree on what quality of life entails and on the quality of care needed to support it. A future challenge is to shape the conditions for the right skill mix of professionals, cooperation between the professions, and the breaking down of differences in financing and supply. For the elderly, the challenge is preparing for aging.

Keywords: elderly living at home, quality of life, quality of care, professional cooperation, life course planning, advance care planning

Procedia PDF Downloads 129
90 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies

Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour

Abstract:

The main thrust of our research is to determine the Industry 4.0 readiness of small and mid-size manufacturing companies in our region and to assist them in implementing Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, and throughput, new value creation, and reduced idle time of machines and work centers in their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, the level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where the near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms that are profitable and successful in what they do, we found a low level of Physical-Digital-Physical (PDP) loop integration in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that differentiate them from their competitors. The level of automation and robotics integration is in the low to medium range, where low is defined as less than 30% and medium as 30 to 70% of manufacturing operations including automation and robotics, although there is a significant drive to add these capabilities at the present time. The intelligence and connectivity of manufacturing systems are observed to be low, with significant variance in how manufacturing operations management is tied to Enterprise Resource Planning (ERP). Furthermore, the integration of additive manufacturing in general, and 3D printing in particular, is observed to be low, but with significant upside for integrating it into manufacturing operations in the near future. To hasten the readiness of local and regional manufacturing companies for Industry 4.0 and the transition towards CPPS capabilities, our working group (ADMAR Working Group), in partnership with our university, has been engaging with local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support local and regional university-industry research on implementing intelligent factories, enhance new value creation through disruptive innovation, and support the development of hybrid and data-enhanced products and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students with well-developed knowledge of cyber physical manufacturing systems and Industry 4.0 and their applications.
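
A minimal sketch of the automation/robotics banding defined in the abstract (low below 30%, medium 30 to 70%); treating anything above 70% as "high" is an assumption, since the abstract does not name that band.

```python
# Readiness banding per the thresholds stated in the abstract.
def automation_band(coverage_pct: float) -> str:
    """coverage_pct: share of manufacturing operations that include automation/robotics."""
    if coverage_pct < 30:
        return "low"
    if coverage_pct <= 70:
        return "medium"
    return "high"  # assumption: the band above the stated medium range

print(automation_band(25), automation_band(45), automation_band(80))
```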

Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop

Procedia PDF Downloads 141
89 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming

Authors: Nicole Vlasman, Karamjeet Dhillon

Abstract:

Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada. More than 240 groups have been facilitated in collaboration with 33 organizations, and as a result, 2157 youth have been engaged in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small-group implementation to prevent violence and promote positive, healthy relationships in youth. The program's development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project's research objectives, the RISE-R team has worked to virtually share the ongoing findings of the project through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities. Creative production reveals different layers of success and complements the project, providing building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature and, more specifically, video installations. Video installations capture the cartography of space and place within the context of singular, diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further enhance the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-method research approach. In conclusion, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures, primarily by documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).

Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology

Procedia PDF Downloads 156
88 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice

Authors: Joy Chadwick

Abstract:

This Scholarship of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct-entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the real life of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual semi-structured interviews with teacher candidates. The journal entries were completed at the start and at the end of the semester with the intent of having teacher candidates reflect on their beliefs about what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding of and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidates' abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates to be proactive rather than reactive when responding to student learning challenges. Teacher candidates also articulated the importance of reframing their perspective on students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persistent question for teacher educators is what pedagogical knowledge and understanding are most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing the personal attributes and mindsets of teacher candidates regarding children, as well as considering instructional frameworks, when designing coursework. Further, the alignment of an inclusive education course with a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. The research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.

Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education

Procedia PDF Downloads 69
87 Modeling the Relation between Discretionary Accrual Earnings Management, International Financial Reporting Standards and Corporate Governance

Authors: Ikechukwu Ndu

Abstract:

This study examines the econometric modeling of the relation between discretionary accrual earnings management, International Financial Reporting Standards (IFRS), and certain corporate governance factors with regard to listed Nigerian non-financial firms. Although discretionary accrual earnings management is a well-known and global problem that has an adverse impact on users of financial statements, its relationship with IFRS and corporate governance has been neither adequately researched nor systematically investigated in Nigeria. The dearth of research on the relation between discretionary accrual earnings management, IFRS, and corporate governance in Nigeria has made it difficult for academics, practitioners, governments, standard-setting bodies, regulators, and international bodies to achieve a clear understanding of how discretionary accrual earnings management relates to IFRS and certain corporate governance characteristics. To the author's best knowledge, this is the first study to date that makes research contributions which significantly add to the literature on discretionary accrual earnings management and its relation with corporate governance and IFRS in the Nigerian context. A comprehensive review is undertaken of the literature on discretionary total accrual earnings management, IFRS, and certain corporate governance characteristics, as well as the data, models, methodologies, and different estimators used in the study. Secondary financial statement, IFRS, and corporate governance data are sourced from the Bloomberg database and the published financial statements of Nigerian non-financial firms for the period 2004 to 2016. The methodology uses both the total and the working capital accrual basis. The study has a number of interesting preliminary findings. First, there is a negative relationship between the level of discretionary accrual earnings management and the adoption of IFRS; however, this relationship does not appear to be statistically significant. Second, there is a significant negative relationship between the size of the board of directors and discretionary accrual earnings management. Third, the separation of CEO and Chairman roles does not constrain earnings management, suggesting that relationships, personal connections, and bonded friendships between the CEO, Chairman, and executive directors are preserved. Fourth, there is a significant negative relationship between discretionary accrual earnings management and the use of a Big Four firm as auditor. Fifth, including shareholders in the audit committee leads to a reduction in discretionary accrual earnings management. Sixth, the debt and return on assets (ROA) variables are significant and positively related to discretionary accrual earnings management. Finally, the company size variable, measured by the log of assets, is surprisingly not statistically significant, indicating that Nigerian companies engage in discretionary accrual management irrespective of size. In conclusion, this study provides key insights that enable a better understanding of the relationship between discretionary accrual earnings management, IFRS, and corporate governance in the Nigerian context. It is expected that the results of this study will be of interest to academics, practitioners, regulators, governments, international bodies, and other parties involved in policy setting and economic development in the areas of financial reporting, securities regulation, accounting harmonization, and corporate governance.
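
The abstract does not name the accrual-expectation model used to isolate the discretionary component; a common choice in this literature, and a plausible reading of the "total accrual basis", is a cross-sectional modified Jones regression of the form:

\[
\frac{TA_{it}}{A_{i,t-1}} \;=\; \alpha_1\,\frac{1}{A_{i,t-1}} \;+\; \alpha_2\,\frac{\Delta REV_{it}-\Delta REC_{it}}{A_{i,t-1}} \;+\; \alpha_3\,\frac{PPE_{it}}{A_{i,t-1}} \;+\; \varepsilon_{it},
\]

where TA is total accruals, A is lagged total assets, ΔREV and ΔREC are the changes in revenue and receivables, and PPE is gross property, plant and equipment; the estimated residual serves as the discretionary accrual measure. Whether the paper uses this exact specification is an assumption here.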

Keywords: discretionary accrual earnings management, earnings manipulation, IFRS, corporate governance

Procedia PDF Downloads 145
86 Cultural Heritage, Urban Planning and the Smart City in Indian Context

Authors: Paritosh Goel

Abstract:

The conservation of historic buildings and historic centres has in recent years become fully encompassed in the planning and management of built-up areas in the face of climate change. In the Indian context, the restoration field's approach to integrated urban regeneration, and its strategic potential for smarter, more sustainable, and socially inclusive urban development, introduces the theme of sustainability for urban transformations in general (historical centres and otherwise). From this viewpoint, it envisages, as a primary objective, a real "green, ecological or environmental" requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of renewable energy sources, urban metabolism (waste, water, territory, etc.), and the natural environment. With this, the concept of a "resilient city" is also introduced: a city that can adapt through progressive transformations to situations of change that may not be predictable, a behaviour that the historical city has always been able to express. Urban planning, on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social/economic and perceptive parameters. It is connected with human behaviour, mobility, and the characterization of the consumption of resources, in terms of quantity even before quality, to inform the city design process, which for ancient fabrics mainly affects the public space, also in its social dimension. An exact definition of the term "smart city" remains essentially elusive, since three dimensions can be attributed to the term: a) that of a virtual city, evolved on the basis of digital and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and social/economic project guided by a dynamic process that gives rise to new behaviour and requirements of the city communities and that orients the future planning of cities, also through participation in their management. This paper presents preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of obtaining a scientific theory and methodology to apply to the regeneration of Indian historical centres. The smart city scheme, if contextualized with the heritage of the city, can be an initiative that provides a transdisciplinary approach across various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures) united in order to improve the design, livability, and understanding of the urban environment and to achieve high historical/cultural performance levels.

Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies

Procedia PDF Downloads 282
85 Narcissism in the Life of Howard Hughes: A Psychobiographical Exploration

Authors: Alida Sandison, Louise A. Stroud

Abstract:

Narcissism is a personality configuration with both normal and pathological personality expressions. It is highly complex and linked to a broad field of research. There are both dimensional and categorical conceptualisations of narcissism, and a variety of theoretical formulations have been put forward to understand the narcissistic personality configuration. Currently, Kernberg's Object Relations theory is well supported for this purpose. The complexity of, and particular defense mechanisms at play in, the narcissistic personality make it a difficult personality configuration to study, and one that warrants further research. Psychobiography as a methodology allows for the exploration of the lived life and is thus useful for surmounting these inherent challenges. Narcissism has long been a focus of academic interest, and although there is a great deal of research in this area, to the researchers' knowledge narcissistic dynamics have never been explored within a psychobiographical format. Thus, the primary aim of the research was to explore and describe narcissism in the life of Howard Hughes, with the objective of gaining further insight into narcissism through the use of this unconventional research approach. Hughes was chosen as the subject for the study as he is renowned as an eccentric billionaire who had a revolutionary effect on the world but was concurrently disturbed within his personal pathologies. Hughes was dynamic in three different sectors, namely motion pictures, aviation, and gambling. He became more and more reclusive as he entered middle age. From his early fifties he was agoraphobic, and the social network of connectivity that could reasonably be expected of someone at the top of their field was notably distorted. Due to his strong narcissistic personality configuration and the interpersonal difficulties he experienced, Hughes represents an ideal figure through whom to explore narcissism. The study used a single case study design and purposive sampling to select Hughes. Qualitative data were sampled using secondary data sources. Given that Hughes was a famous figure, there is a plethora of information on his life, primarily biographical, including books written about his life and archival material in the form of newspaper articles, interviews, and movies. The gathered data were triangulated to avoid the effect of author bias and to increase the credibility of the data used, and were collected using Yin's guidelines for data collection. Data were analysed using Miles and Huberman's strategy of data analysis, which consists of three steps, namely data reduction, data display, and conclusion drawing and verification. Patterns which emerged in the data highlighted the defense mechanisms used by Hughes in defending his sense of self, in particular splitting and projection. These defense mechanisms help us to understand the high levels of entitlement and paranoia experienced by Hughes. The findings provide further insight into his sense of isolation and difference and the consequent difficulty he experienced in maintaining connections with others. The findings furthermore confirm the effectiveness of Kernberg's theory in understanding narcissism when observing an individual life.

Keywords: Howard Hughes, narcissism, narcissistic defenses, object relations

Procedia PDF Downloads 357
84 A Case of Bilateral Vulval Abscess with Pelvic Fistula in an Immunocompromised Patient with Colostomy: A Diagnostic Challenge

Authors: Paul Feyi Waboso

Abstract:

This case report presents a 57-year-old female patient with a history of colon cancer, colostomy, and immunocompromise, who presented with an unusual bilateral vulval abscess, more prominent on the left side. Due to the atypical presentation, an MRI was performed, revealing a pelvic collection and a fistulous connection between the pelvis and vulva. This finding prompted an urgent surgical intervention. This case highlights the diagnostic and therapeutic challenges of managing complex abscesses and fistulas in immunocompromised patients. Introduction: Vulval abscesses in immunocompromised individuals can present with atypical features and may be associated with complex pathologies. Patients with a history of cancer, colostomy, and immunocompromise are particularly prone to infections and may present with unusual manifestations. This report discusses a case of a large bilateral vulval abscess with an underlying pelvic fistula, emphasizing the importance of advanced imaging in cases with atypical presentations. Case Presentation: A 57-year-old female with a known history of colon cancer, treated with colostomy, presented with severe pain and swelling in the vulval area. Physical examination revealed bilateral vulval swelling, with the abscess on the left side appearing larger and more pronounced than on the right. Given her immunocompromised status and the unusual nature of the presentation, we requested an MRI of the pelvis, suspecting an underlying pathology beyond a typical abscess. Investigations: MRI imaging revealed a significant pelvic collection and identified a fistulous tract between the pelvis and the vulva. This confirmed that the vulval abscess was connected to a deeper pelvic infection, necessitating urgent intervention. Management: After consultation with the multidisciplinary team (MDT), it was agreed that the patient required surgical intervention, having had 48 hours of antibiotics. The patient underwent evacuation of the left-sided vulval abscess under spinal anesthesia. During surgery, the pelvic collection was drained of 200 ml of pus. Outcome and Follow-Up: Postoperative recovery was closely monitored due to the patient’s immunocompromised state. Follow-up imaging and clinical evaluation showed improvement in symptoms, with gradual resolution of infection. The patient was scheduled for regular follow-up visits to monitor for recurrence or further complications. Discussion: Bilateral vulval abscesses are uncommon and, in an immunocompromised patient, warrant thorough investigation to rule out deeper infectious or fistulous connections. This case underscores the utility of MRI in identifying complex fistulous tracts and highlights the importance of a multidisciplinary approach in managing such high-risk patients. Conclusion: This case illustrates a rare presentation of bilateral vulval abscess with an associated pelvic fistula.

Keywords: vulval abscess, MDT team, colon cancer with pelvic fistula, vulval skin condition

Procedia PDF Downloads 21
83 Differentially Expressed Protein Biomarkers in Early and Advanced Stage Young Triple-Negative Breast Cancer Patients

Authors: Shamim Mushtaq, Moazzam Shahid

Abstract:

Breast cancer (BC) claims the lives of half a million women every year and is the most common cause of cancer death in the developing world. In 2019, it was estimated that BC alone accounts for 15% of all cancer deaths in younger women (aged < 45 years) with advanced-stage lung metastasis. According to the World Health Organization and the International Union Against Cancer, a high number of cancer-related deaths will be observed in Asia in 2020, whereas the burden will be reduced in Western countries due to awareness about the disease, better health facilities, and advanced treatments. In the last 15 years, it has been reported that the incidence of BC increased by 1.1% among Asians compared to the US population from 2003 to 2012. To date, several BC biological subtypes have been reported, which are associated with different treatment responses. The heterogeneity and diversity of BC are reflected in these subtypes, including Luminal A (23.7% prevalence) and Luminal B (38.8% prevalence), which have estrogen receptor-positive (ER+) tumors, the human epidermal growth factor receptor 2 (HER2) subtype (11.2% prevalence), and triple-negative breast cancer (TNBC) (25% prevalence). According to Shaukat Khanum Memorial Cancer Hospital and Research Centre, Pakistan, ten years of data showed that among 636 BC patients, 30.5% had TNBC and were <40 years of age, which is an extremely alarming situation. There is therefore a dire need to explore and develop therapeutic targets for the treatment of early TNBC. Over the last decade, unfortunately, there has been little success in understanding the complexity of TNBC and in discovering new biological therapeutic targets, and conventional chemotherapy remains the only choice of treatment for TNBC patients. Many investigators have reported advances in multi-omics (multiple "omes", e.g., genome, proteome, transcriptome, epigenome, and microbiome) that have revealed actionable targets with increased prevalence in TNBC patients, and various drugs have been identified that relate to particular diagnostic and prognostic biomarkers, for example, the epidermal growth factor receptor (EGFR or ErbB-1), HER-2/neu (ErbB-2), HER-3 (ErbB-3), and HER-4 (ErbB-4). Transgelin-2 (TAGLN-2) and Profilin-1 (Pfn-1) belong to ubiquitously expressed families of proteins present in all eukaryotes that enable actin cytoskeletal reorganization. It is known that the oncogenic transformation of cells is accompanied by alterations in the actin cytoskeleton, and there are causal connections between the altered expression of actin cytoskeletal regulators and cancer progression. Our case-control study identified TAGLN-2 and Pfn-1 proteins in the blood of TNBC patients by mass spectrometry. Both TAGLN-2 and Pfn-1 are differentially expressed in early- and advanced-stage TNBC patients and could be potential predictors or therapeutic targets for TNBC.

Keywords: TNBC, blood biomarkers, mass spectrometry, qPCR, ELISA

Procedia PDF Downloads 45
82 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy at radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or in shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators are limited and cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the Finite-Difference Time-Domain (FDTD) numerical method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the current density and electric field equations within the FDTD method, together with its circuital relation to the voltage drop across the modeled component, for the lumped-parameter case, using the Lumped-Element Finite-Difference Time-Domain (LE-FDTD) formulation proposed in the literature for passive components and the corresponding formulation for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
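
As background (the abstract itself does not write out the update equations), a commonly used lumped-element extension adds a lumped current density to the Ampere-Maxwell curl equation; for a resistor R occupying a single z-directed cell of dimensions Δx, Δy, Δz, with the resistor voltage averaged between time steps, this yields the semi-implicit field update below. The exact discretization used in the paper may differ.

\[
\nabla\times\mathbf{H} \;=\; \varepsilon\,\frac{\partial \mathbf{E}}{\partial t} \;+\; \mathbf{J}_{L},
\qquad
J_{L,z} \;=\; \frac{\Delta z\,E_z}{R\,\Delta x\,\Delta y},
\]

\[
E_z^{\,n+1} \;=\;
\frac{1-\dfrac{\Delta t\,\Delta z}{2R\,\varepsilon\,\Delta x\,\Delta y}}
     {1+\dfrac{\Delta t\,\Delta z}{2R\,\varepsilon\,\Delta x\,\Delta y}}\;E_z^{\,n}
\;+\;
\frac{\Delta t/\varepsilon}{1+\dfrac{\Delta t\,\Delta z}{2R\,\varepsilon\,\Delta x\,\Delta y}}
\left(\nabla\times\mathbf{H}\right)_z^{\,n+1/2}.
\]

For the Schottky diode, the lumped current follows the exponential diode law rather than Ohm's law, so the corresponding cell update becomes nonlinear in the new field value and is typically solved iteratively (e.g., by a Newton-Raphson step) at each time step.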

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 134
81 Listening to Voices: A Meaning-Focused Framework for Supporting People with Auditory Verbal Hallucinations

Authors: Amar Ghelani

Abstract:

People with auditory verbal hallucinations (AVH) who seek support from mental health services commonly report feeling unheard and invalidated in their interactions with social workers and psychiatric professionals. Current mental health training and clinical approaches have proven inadequate in addressing the complex nature of voice hearing. Childhood trauma is a key factor in the development of AVH and can render people more vulnerable to hearing supportive and/or disturbing voices. Lived experiences of racism, poverty, and immigration are also associated with the development of what is broadly classified as psychosis. Despite evidence affirming the influence of environmental factors on voice hearing, the Western biomedical system typically conceptualizes this experience as a symptom of genetically based mental illnesses which require diagnosis and treatment. Overemphasis on psychiatric medications, referrals, and directive approaches to people's problems has shifted clinical interventions away from assessing and addressing problems directly related to AVH. The Maastricht approach offers voice hearers and mental health workers an alternative and respectful starting point for understanding and coping with voices. The approach was developed by voice hearers in partnership with mental health professionals and entails an innovative method for assessing and creating meaning from voice hearing and related life stressors. The objectives of the approach are to help people who hear voices: (1) understand the problems and/or people the voices may represent in their history, and (2) cope with distress and find solutions to related problems. The Maastricht approach has also been found to help voice hearers integrate emotional conflicts, reduce avoidance or fear associated with AVH, improve therapeutic relationships, and increase a sense of control over internal experiences. The proposed oral presentation will be guided by a recovery-oriented theoretical framework, which suggests that healing from psychological wounds occurs through social connections and community support systems. The presentation will start with a brainstorming exercise to identify participants' pre-existing knowledge of the subject matter. This will lead into a literature review on the relations between trauma, intersectionality, and AVH. An overview of the Maastricht approach and a review of research related to its therapeutic risks and benefits will follow. Participants will learn trauma-informed coping skills and questions which can help voice hearers make meaning from their experiences. The presentation will conclude with a review of resources and learning opportunities where participants can expand their knowledge of the Hearing Voices Movement and the Maastricht approach.

Keywords: Maastricht interview, recovery, therapeutic assessment, voice hearing

Procedia PDF Downloads 115
80 Pyramid of Deradicalization: Causes and Possible Solutions

Authors: Ashir Ahmed

Abstract:

Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious, or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalized or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalization is always linked to religion. Generally, the common motives for radicalization include ideological, issue-based, ethno-nationalist, or separatist underpinnings. Moreover, there are a number of factors that further increase the chances of someone being radicalized and choosing the path of violent extremism and possibly terrorism. Since a number of (sometimes quite different) factors contribute to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes in dealing with radicalization, violent extremism, and terrorism. The pathway to deradicalization, like the pathway to radicalization, is different for everyone. Considering the need for customized deradicalization solutions, this study proposes a multi-tier framework, called the 'pyramid of deradicalization', that first helps identify the stage at which an individual may be on the radicalization pathway and then proposes a customized strategy to deal with the respective stage. The first tier (tier 1) addresses the broader community and proposes a 'universal approach' aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on the possible factors leading to radicalization and their remedies. The second tier focuses on members of the community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a 'targeted approach', reaching vulnerable members of the community through early intervention, such as providing anonymous help lines where people feel confident and comfortable seeking help without fearing the disclosure of their identity. The third tier focuses on people for whom there is clear evidence of moving toward extremism or becoming radicalized. People falling within this tier are to be supported through an 'interventionist approach', which advocates community engagement and community policing, introducing deradicalization programmes to the targeted individuals, and looking after their physical and mental health issues. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The 'enforcement approach' includes measures such as strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality, or religion, and strengthening family connections. It is anticipated that the operationalization of the proposed framework ('pyramid of deradicalization') would help in categorizing people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.

Keywords: deradicalization, framework, terrorism, violent extremism

Procedia PDF Downloads 272
79 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen

Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García

Abstract:

This paper focuses on the analysis of the motivations that lead people to start and consolidate their businesses. It is addressed within the framework of the theory of planned behavior, which recognizes the importance of the social environment and cultural values, both in the decision to start a business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people's lives, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence, Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering the consolidated profile and the profile in transition. To obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1 - Conflicts in conciliation, E2 - Satisfaction with the career path, E3 - Motivations to undertake, E4 - Guidance needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results allowed us to characterize three motivational profiles that differ in motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and guidance needs (in the entrepreneurial project and the life-career). The first profile is formed by extrinsically motivated entrepreneurs who are professionally satisfied and without conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although these entrepreneurs show satisfaction with their professional career, they experience high conflict between their family and professional lives. The third is composed of entrepreneurs with high extrinsic motivation and professional dissatisfaction who, at the same time, feel conflict in their professional life due to the effect of personal roles. Ultimately, the analysis has allowed us to link the types of entrepreneurs to different levels of motivation, satisfaction, needs, and the articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. Associations related to gender, age, activity sector, environment (rural, urban, virtual), and the use of time for domestic tasks were not identified. The model obtained and its implications for the design of training actions and guidance for entrepreneurs are also discussed.
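
The "mixed method" cluster analysis (k-means combined with a hierarchical method) is not detailed further in the abstract; the sketch below shows one common way to realize it, using a Ward hierarchical step to seed a three-cluster k-means solution. The scale names echo E1-E4 from the abstract, while the data file and settings are assumptions.

```python
# Hedged sketch of a mixed hierarchical + k-means cluster analysis of the MCEC scales.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

scales = ["E1_conflict", "E2_satisfaction", "E3_motivation", "E4_guidance_needs"]
X = StandardScaler().fit_transform(pd.read_csv("mcec_scores.csv")[scales])  # hypothetical file

# Step 1: Ward hierarchical clustering to obtain an initial 3-group partition.
labels_hier = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
init_centroids = np.vstack([X[labels_hier == g].mean(axis=0) for g in (1, 2, 3)])

# Step 2: k-means refined from the hierarchical centroids.
profiles = KMeans(n_clusters=3, init=init_centroids, n_init=1, random_state=0).fit_predict(X)
```

The resulting group labels could then be profiled as described in the abstract, for example with scipy.stats.chi2_contingency for the categorical variables and scipy.stats.f_oneway for the scale scores.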

Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment

Procedia PDF Downloads 303
78 De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities

Authors: Faith Abdul Rasak Asharaf

Abstract:

Every city has a threshold, known as its urban carrying capacity, up to which it can withstand a particular density of people; above this, the city may need to resort to measures such as expanding its boundaries or growing vertically. As a result of this circumstance, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint is one criterion of urban carrying capacity: the amount of land required to provide humanity with renewable resources and to absorb its waste. However, as each piece of land has its own carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried over the years to accommodate increasing urban population density by redistributing the zones of work, life, and leisure in order to achieve maximum sustainable growth. The current scenario is that of the vertical city and the compact city concept, in which the maximum density of people is fitted into a definite area using efficient land use and a variety of other strategies; this has, however, proven to be a very unsustainable method of growth, as evidenced during the COVID-19 period. Due to a shortage of housing and basic infrastructure, densely populated cities, unable to accommodate the overflowing migrants, gave rise to massive squatter communities. To achieve an optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented. These help to relieve the congested city core by relocating certain sectors of the town to the city periphery, which in turn creates newer spaces for design in terms of public space, transportation, and housing, a major concern in the current scenario. The study's goal is to suggest design options and solutions, in terms of placemaking, for better urban quality and urban life for citizens once city centres have been de-densified based on urban carrying capacity and ecological footprint, taking Kochi as an apt example of a highly densified city core and focusing on Edappally, which is an agglomeration of many urban factors.

Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint

Procedia PDF Downloads 79
77 (Re)connecting to the Spirit of the Language: Decolonizing from Eurocentric Indigenous Language Revitalization Methodologies

Authors: Lana Whiskeyjack, Kyle Napier

Abstract:

The spirit of the language embodies the motivation for Indigenous people to connect with the Indigenous language of their lineage. While the concept of the spirit of the language is often woven into discussion by Indigenous language revitalizationists, particularly those who are Indigenous, there are few tangible terms in academic research that conceptually actualize the term. Through collaborative work with Indigenous language speakers, elders, and learners, this research sets out to identify the spirit of the language, the catalysts of disconnection from the spirit of the language, and the sources of reconnection to the spirit of the language. This work fundamentally addresses the terms of engagement around collaboration with Indigenous communities, itself inviting a decolonial approach to community outreach and individual relationships. As Indigenous researchers, this means beginning, maintaining, and closing this work in ceremony while being transparent with community members about this work and related publishing throughout the project's duration. Decolonizing this approach also requires maintaining explicit ongoing consent from the elders, knowledge keepers, and community members when handling their ancestral and Indigenous knowledge. The handling of this knowledge is regarded in this work as stewardship, both of the digital materials and of the ancestral Indigenous knowledge itself. This work draws on recorded conversations in both nêhiyawêwin and English, resulting from 10 semi-structured interviews with fluent nêhiyawêwin speakers as well as three structured dialogue circles with fluent and emerging speakers. The words were transcribed by a speaker fluent in both nêhiyawêwin and English. The results of those interviews were categorized thematically to conceptually actualize the spirit of the language, the catalysts of disconnection from the spirit of the language, and community-voiced methods of reconnection to the spirit of the language. The interviews overwhelmingly indicate that the spirit of the language is drawn from the land. Although nêhiyawêwin is the focus of this work, Indigenous languages are by nature inherently related to the land. This is further reaffirmed by the Indigenous language learners and speakers who expressed having ancestries and lineages from multiple Indigenous communities. Several other key elements embody this spirit of the language, including ceremony and spirituality, as well as the semantic worldviews tied to the polysynthetic, verb-oriented morphophonemics most often found in Indigenous languages, and in nêhiyawêwin in particular. The catalysts of disconnection from the spirit of the language are those forces whose histories have severed connections between Indigenous Peoples and the spirit of their languages or that have affected relationships with the land, ceremony, and ways of thinking. This research and its literature review identify the three most ubiquitously damaging, interdependent catalysts of disconnection from the spirit of the language as colonization, capitalism, and Christianity. As voiced by the Indigenous language learners, this work necessitates addressing means of reconnecting to the spirit of the language. Interviewees mentioned that the process of reconnection involves a whole relationship with the land, the practice of reciprocal-relational methodologies for language learning, and Indigenous-protected and -governed learning. This work concludes in support of those reconnection methodologies.

Keywords: indigenous language acquisition, indigenous language reclamation, indigenous language revitalization, nêhiyawêwin, spirit of the language

Procedia PDF Downloads 143
76 Analysis of Engagement Methods in the College Classroom Post Pandemic

Authors: Marsha D. Loda

Abstract:

College enrollment is declining, and Generation Z, today's college students, are struggling. Before the pandemic, researchers characterized this generational cohort as unique. Gen Z has been called the most achievement-oriented generation, as they enjoy greater economic status, are more racially and ethnically diverse, and are better educated than any previous generation. However, they are also the generation most likely to suffer from depression and anxiety. Gen Z has grown up largely with usually well-intentioned but overprotective parents who inadvertently kept them from learning life skills, likely impairing their ability to cope with and effectively manage challenges. The unprecedented challenges resulting from the pandemic upended their world and left them emotionally reeling. One of the ramifications of this for higher education is how to re-engage current Gen Z students in the classroom. This research presents qualitative findings from 24 single-spaced pages of verbatim comments from college students. The research questions concerned what helps them learn and what they abhor, as well as how to engage them with the university outside of the classroom to aid retention. Students leave little doubt about what they want to experience in the classroom. In order of mention, students want discussion, to engage with questions, to hear how a topic relates to real life and the real world, to feel connections with the professor and fellow students, and to have an opportunity to give their opinions. They prefer a classroom that involves conversation, with interesting topics and active learning: "professor talks instead of lecturing"; "professor builds a connection with the classroom"; "I am engaged because it feels like a respectful conversation." Similarly, students are direct about what they dislike in a classroom. In order of frequency, students dislike teachers unenthusiastically reading word for word from notes or presentations, repeating the text without adding examples, or failing to address how to apply the information: "All lecture. I can read the book myself"; "Not taught how to apply the skill or lesson"; "Lectures the entire time. Lesson goes in one ear and out the other." Regarding engagement outside the classroom, Gen Z challenges higher education to step outside the box. They don't want to just hear from professionals in their field; they want to meet and interact with them. Perhaps because of their dependence on technology and pandemic isolation, they seem to reach out for assistance in forming social bonds: "I believe fun and social events are the best way to connect with students and get them involved. Cookouts, raffles, socials, or networking events would all most likely appeal to many students"; "Events… even if they aren't directly related to learning. Maybe like movie nights… doing meet ups at restaurants." Qualitative research suggests strategy, and this research is rich in strategic implications for improving learning, increasing engagement, and reducing drop-out rates among Generation Z higher education students. It also complements existing research on student engagement. With college enrollment declining by some 1.3 million students over the last two years, this research is both timely and important.

Keywords: college enrollment, generation Z, higher education, pandemic, student engagement

Procedia PDF Downloads 105
75 A Case Study on Problems Originated from Critical Path Method Application in a Governmental Construction Project

Authors: Mohammad Lemar Zalmai, Osman Hurol Turkakin, Cemil Akcay, Ekrem Manisali

Abstract:

In public construction projects, determining the contract period in the award phase is one of the most important factors. The contract period establishes the baseline for creating the cash flow curve and planning progress payments in the post-award phase. If overestimated, the project duration causes losses for both the owner and the contractor. Therefore, it is essential to base the construction project duration on reliable forecasting. In Turkey, schedules are usually built using bar-chart (Gantt) schedules, especially by governmental construction agencies, and their use is often limited to bidding purposes. Although the bar-chart schedule is useful in some cases, it lacks logical connections between activities, making it harder to identify the activities that have a greater effect than others on the project's total duration, especially in large, complex projects. In this study, a construction schedule is prepared with the Critical Path Method (CPM), which addresses the above-mentioned shortcomings. CPM is a simple and effective method that displays the project duration and critical paths, showing the results of forward and backward calculations while considering the logical relationships between activities; it is a powerful tool for planning and managing all kinds of construction projects and a very convenient method for the construction industry. CPM provides a much more useful and precise approach than the traditional bar-chart diagrams that form the basis of construction planning and control. CPM has two main applications in the construction field. The first is obtaining the project duration through the as-planned schedule, which includes as-planned activity durations and the relationships between subsequent activities. The second arises during project execution: each activity is tracked and its duration recorded in order to obtain the as-built schedule, often called the black box of the project. The latter is more useful for delay analysis and conflict resolution. These features of CPM are popular around the world; however, it has not yet been extensively used in Turkey. In this study, a real construction project is investigated as a case study, and CPM-based scheduling is used to establish both the as-planned and as-built schedules. Problems that emerged during the construction phase are identified and categorized, and solutions are suggested. Two scenarios were considered. In the first scenario, CPM was used to track and manage progress based on real-time data. In the second scenario, project progress was assumed to be tracked using the Gantt chart. The S-curves of the two scenarios are plotted and interpreted. Comparing the results, possible faults of the latter scenario are highlighted and solutions are suggested. The importance of CPM implementation is emphasized, and it is proposed that CPM-based construction schedules be made mandatory for public construction project contracts.
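As a minimal sketch of the forward and backward calculations mentioned above, the following Python example computes early and late start/finish times, total float, and the critical path for a small hypothetical activity network; the activities, durations, and dependencies are illustrative only and are not taken from the case-study project.

```python
# Minimal CPM sketch: forward/backward pass on a hypothetical activity network.
# Durations and dependencies are illustrative only.
activities = {  # name: (duration, [predecessors])
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (6, ["C"]), "F": (1, ["D", "E"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for name in activities:  # dict order is topological for this tiny example
    dur, preds = activities[name]
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF, LS = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    successors = [s for s, (_, ps) in activities.items() if name in ps]
    LF[name] = min((LS[s] for s in successors), default=project_duration)
    LS[name] = LF[name] - dur

# Total float and critical path (zero-float activities).
critical = [n for n in activities if LS[n] - ES[n] == 0]
print("Project duration:", project_duration)        # 13
print("Critical path:", " -> ".join(critical))      # A -> B -> D -> F
```

The same forward/backward logic underlies both the as-planned schedule (planned durations) and the as-built schedule (recorded durations), which is what makes the comparison of the two S-curves possible.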

Keywords: as-built, case-study, critical path method, Turkish government sector projects

Procedia PDF Downloads 122
74 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes

Authors: Alicia Ettlin

Abstract:

Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people began to internalise a neoliberal way of thinking whereby the human body has become an entity that can and must be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing, as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, been increasingly criticised for viewing the social actor as 'disembodied', a detached agent whose powerful mind governs the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, which creates an embodied understanding that the body, just like other areas of people's lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body in terms of its connections with the social environment that reaches beyond debates around mind-body binary thinking. Hence, following this argument, body management should be thought of neither as solely guided by embodied discourses nor as merely falling into a mind-body dualism, but rather as both at once, simultaneously and inseparably. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants connected self-improvement to weight loss, muscle gain, or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected a neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work or achievement of specific body management goals (e.g. eating chocolate for reaching the daily step count goal). A few participants, however, showed resistance against these neoliberal values and, in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their body in normative ways. Even those who indicated their resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of and have internalised the concept of the rational operating mind that needs, or should decide, how to look after the body in terms of health as well as appearance ideals. The discussion of the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.

Keywords: dualism, embodiment, mind, neoliberalism

Procedia PDF Downloads 163
73 Steel Concrete Composite Bridge: Modelling Approach and Analysis

Authors: Kaviyarasan D., Satish Kumar S. R.

Abstract:

India is vast in area and population, with great scope for international business, so the roadway and railway networks connecting the country are expected to grow substantially. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or building new ones. Analysis and design of such bridges are commonly practiced through conventional procedures and end up with heavy, uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability because the members are too rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Although static pushover analysis is now extensively used for framed steel and concrete buildings to study their behaviour under lateral actions, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared with buildings. Long-span steel bridges are mostly truss bridges. Because truss bridges are formed of many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member, and so on, under further loading. This kind of progressive collapse of the truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. The ultimate collapse, however, is governed by the buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge or a bridge with complicated dynamic behaviour, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of the bridge and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mainly with life safety and collapse prevention, whereas bridge performance levels mostly deal with the extent of damage and how quickly it can be repaired, with or without disturbing traffic, after a strong earthquake event. The paper compiles the wide spectrum of approaches, from modeling to analysis, for steel-concrete composite truss bridges in general.
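Since the abstract notes that ultimate collapse of such trusses is governed by buckling of the compression members, a brief Python sketch of an elastic (Euler) buckling check is given below; the section properties, effective length factor, and axial force are hypothetical placeholders, and the check is purely illustrative rather than part of the cited studies.

```python
# Illustrative Euler buckling check for a truss compression member.
# Section properties, effective length factor K, and axial force are hypothetical.
import math

E = 200e9        # Young's modulus of steel, Pa
I = 1.0e-4       # second moment of area, m^4 (hypothetical heavy chord section)
L = 6.0          # member length, m
K = 1.0          # effective length factor (pinned-pinned assumed)
N_Ed = 1.2e6     # applied axial compression, N (hypothetical)

# Elastic critical (Euler) buckling load: N_cr = pi^2 * E * I / (K * L)^2
N_cr = math.pi ** 2 * E * I / (K * L) ** 2

utilisation = N_Ed / N_cr
print(f"N_cr = {N_cr / 1e6:.2f} MN, utilisation N_Ed / N_cr = {utilisation:.2f}")
# A full design check would additionally account for imperfections and
# inelastic behaviour (e.g., via code buckling curves), not just the elastic limit.
```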

Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge

Procedia PDF Downloads 186
72 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been used as a marker since scientists replicated its gene in subsequent genetic studies. In biotechnology, cell biology and molecular biology, the GFP gene is frequently used as a reporter of expression. In modified forms, it has been used to make biosensors, and many animals have been created that express GFP as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be determined, and so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signaling events can be detected. Additionally, monitoring GFP is noninvasive; it can be detected under UV light because it simply generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with any biological processes of interest. The synthesis of GFP involves several steps: construction of the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexa-histidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 (DE3) pLysE strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates under kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexa-histidine tag facilitated purification of the GFP using a His-Bind affinity chromatography resin (Novagen). The purity of the GFP protein was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). The concentration of protein was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using a GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. For the fabrication of nanofibers with different fiber diameters, sol-gel solutions comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture were prepared, and each solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h⁻¹ using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in a wide range of biomedical applications such as bio-imaging, biomechanics, biomaterials and tissue engineering.
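As a small illustrative aside, the protein concentration obtained from the 280 nm absorbance mentioned above follows the Beer-Lambert law. The Python sketch below assumes a hypothetical molar extinction coefficient and a 1 cm path length; in practice the coefficient would be estimated from the His-tagged GFP variant's Trp/Tyr/Cys content (e.g., with a tool such as ProtParam), so the numbers here are placeholders, not the study's values.

```python
# Beer-Lambert estimate of protein concentration from A280: A = epsilon * c * l.
# The extinction coefficient below is a hypothetical placeholder; in practice it
# is computed from the sequence of the His-tagged GFP variant.
A280 = 0.65                # measured absorbance at 280 nm (hypothetical)
path_length_cm = 1.0       # cuvette path length
epsilon = 21_000           # molar extinction coefficient, M^-1 cm^-1 (assumed)
mw_gfp = 27_000            # approximate molecular weight of GFP, g/mol

molar_conc = A280 / (epsilon * path_length_cm)   # mol/L
mass_conc_mg_ml = molar_conc * mw_gfp            # g/L, i.e. mg/mL
print(f"~{molar_conc * 1e6:.1f} uM, ~{mass_conc_mg_ml:.2f} mg/mL")
```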

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 320
71 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Using fabrics to reinforce composites considerably improves their mechanical properties, including resistance to impact loads and energy absorption. Warp-knitted spacer fabrics (WKSF) consist of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers that is filled by the pile yarns and gives the fabric a three-dimensional shape. Today, because of their unique properties, spacer fabrics are widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF offers much better moisture transfer, better compressive properties, and lower heat resistance than PU foam. It therefore seems that warp-knitted spacer fabric reinforced PU foam (WKSFRF) can lead to a composite with better energy absorption than the foam alone, enhanced mold formation, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally. The contribution of each structural parameter of the WKSF to the absorption of impact energy is also investigated. For this purpose, WKSFs with different structures were produced: two different thicknesses, small and large mesh sizes, and meshes positioned either facing or not facing each other. Six types of composite samples with different structural parameters were then fabricated. The physical properties of the samples, such as weight per unit area and fiber volume fraction, were measured for three samples of each composite type. Low-velocity impact with an initial energy of 5 J was carried out on three samples of each composite type. The output of the low-velocity impact test is an acceleration-time (A-T) curve containing many outlying points; to obtain reliable results, these points were removed using the filtfilt function of MATLAB R2018a. Using Newton's laws, a force-displacement (F-D) curve was derived from the A-T curve. The amount of energy absorbed is equal to the area under the F-D curve. The maximum energy absorption was determined to be 2.858 J, obtained for the samples reinforced with fabric with large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of a composite divided by its fiber volume fraction. Using this index, the best EAE among the samples is 21.6, which occurs in the sample with large mesh, high thickness, and meshes facing each other. The EAE of this sample is also 15.6% better than the average EAE of the other composite samples. In general, energy absorption increased on average by 21.2% with increasing thickness, by 9.5% with increasing mesh size from small to large, and by 47.3% when the position of the meshes was changed from facing to non-facing.
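To make the data-reduction chain above concrete, the sketch below filters a synthetic acceleration-time signal with SciPy's zero-phase filtfilt (playing the role of MATLAB's filtfilt), converts it to force and displacement via the impactor mass and Newton's second law, integrates the force-displacement curve for the absorbed energy, and divides by the fiber volume fraction to get EAE. The impactor mass, impact velocity, filter settings, and fiber volume fraction are assumed values, not those of the tested WKSFRF samples.

```python
# Sketch of the impact data reduction: filtered A-T -> F-D -> absorbed energy -> EAE.
# Impactor mass, impact velocity, filter parameters and Vf are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import trapezoid

fs = 50_000                        # sampling rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)     # 20 ms impact window
m = 5.0                            # impactor mass, kg (assumed)
v0 = np.sqrt(2 * 5.0 / m)          # impact velocity for a 5 J initial energy

# Synthetic noisy deceleration pulse standing in for the measured A-T signal.
a = 400 * np.sin(np.pi * t / 0.02) + 20 * np.random.default_rng(1).normal(size=t.size)

# Zero-phase low-pass filtering to remove the outlying points.
b, coef_a = butter(4, 1_000, btype="low", fs=fs)
a_filt = filtfilt(b, coef_a, a)

# Kinematics: deceleration reduces velocity; displacement of the impactor.
v = v0 - np.cumsum(a_filt) / fs
x = np.cumsum(v) / fs

force = m * a_filt                                # Newton's second law
contact = slice(0, int(np.argmax(x)) + 1)         # loading up to peak displacement
E_abs = trapezoid(force[contact], x[contact])     # area under the F-D curve, J

Vf = 0.12                                         # fiber volume fraction (assumed)
EAE = E_abs / Vf
print(f"Absorbed energy = {E_abs:.2f} J, EAE = {EAE:.1f}")
```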

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 171
70 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies

Authors: Hai L. Tran

Abstract:

In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and because of the rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative and quantitative forms of evidence, while limited effort has been made to distinguish between sister terms such as “data,” “statistical,” and “base-rate” on one side of the spectrum and “narrative,” “anecdotal,” and “exemplar” on the other. The present study seeks to develop an evidence taxonomy that classifies evidence along the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. The taxonomical classification then places data versus narrative at the top of the hierarchy of evidence types, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, it helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
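A compact way to express the proposed hierarchy is as a nested mapping. The short Python sketch below simply encodes the quantitative-qualitative split and the broad-to-specific ordering described in the abstract; the helper function and labels are illustrative conveniences, not part of the study itself.

```python
# Illustrative encoding of the evidence taxonomy described in the abstract:
# quantitative vs. qualitative branches, ordered from broad to specific.
EVIDENCE_TAXONOMY = {
    "quantitative": ["data", "statistics", "base rate"],   # broad -> specific
    "qualitative": ["narrative", "anecdote", "exemplar"],  # broad -> specific
}

def classify(evidence_type: str) -> tuple[str, int]:
    """Return the branch and hierarchy level (0 = broadest) of an evidence type."""
    for branch, levels in EVIDENCE_TAXONOMY.items():
        if evidence_type in levels:
            return branch, levels.index(evidence_type)
    raise ValueError(f"unknown evidence type: {evidence_type!r}")

print(classify("anecdote"))   # ('qualitative', 1) -- paired with 'statistics'
```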

Keywords: evidence, evidence forms, evidence types, taxonomy

Procedia PDF Downloads 68
69 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted and echo-planar imaging sequences for the analysis of volumetric, tractography and resting-state functional data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity: fractional anisotropy [FA], axial and radial diffusivity [AD, RD]) and the reconstruction of cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, as well as increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared with HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in cortico-cerebellar WM tracts, and mixed increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasize the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. An extension and future opportunity for leveraging multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify patients and predict their disease course.
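For readers less familiar with the diffusion metrics cited above, the short Python sketch below computes fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD) from the eigenvalues of a diffusion tensor using their standard definitions; the example eigenvalues are made up for illustration and are not drawn from the study's data.

```python
# Standard diffusion-tensor metrics from eigenvalues (l1 >= l2 >= l3).
# The example eigenvalues are illustrative, in units of 10^-3 mm^2/s.
import numpy as np

def dti_metrics(l1: float, l2: float, l3: float):
    lams = np.array([l1, l2, l3], dtype=float)
    md = lams.mean()                    # mean diffusivity
    ad = l1                             # axial diffusivity (principal eigenvalue)
    rd = (l2 + l3) / 2.0                # radial diffusivity
    fa = np.sqrt(1.5 * np.sum((lams - md) ** 2) / np.sum(lams ** 2))
    return fa, ad, rd, md

fa, ad, rd, md = dti_metrics(1.7, 0.4, 0.3)   # values typical of coherent white matter
print(f"FA={fa:.2f}, AD={ad:.2f}, RD={rd:.2f}, MD={md:.2f}")
```

Reduced FA together with increased AD/RD, as reported for the cortico-cerebellar tracts above, is the usual signature of reduced white matter integrity.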

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 141
68 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States

Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss

Abstract:

Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD, which regulate memory, the formation of synaptic connections among neurons, and the levels of amyloid plaques and neurofibrillary tangles in the brain, and it can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs and savings for patients in the following categories: (1) diagnosis: costs of diagnosis using DISCERN™ and the CDP; (2) false negative (FN) diagnosis: the incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication; (3) true positive (TP) diagnosis: AD medication costs, the cost of a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis, and the savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis; and (4) false positive (FP) diagnosis: the cost of AD medication for patients who do not have AD. A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters by ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces the utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
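A highly simplified sketch of this kind of budget impact calculation is shown below. Every cost and probability input is a hypothetical placeholder (the published model's actual inputs, including progression-delay savings, are not reproduced here); the structure merely mirrors the four cost categories described in the abstract.

```python
# Simplified budget impact structure mirroring the four categories in the abstract.
# Every numeric input below is a hypothetical placeholder, not a model input.
def pathway_cost(n_tested, diag_cost, sens, spec, prevalence,
                 fn_extra_care_cost, tp_med_cost, fp_med_cost):
    """Total one-year cost of a diagnostic pathway for a tested cohort."""
    n_ad = n_tested * prevalence
    n_no_ad = n_tested - n_ad
    tp = n_ad * sens            # true positives
    fn = n_ad * (1 - sens)      # false negatives (missed AD)
    fp = n_no_ad * (1 - spec)   # false positives
    return (n_tested * diag_cost          # (1) diagnosis
            + fn * fn_extra_care_cost     # (2) avoidable cost of missed diagnoses
            + tp * tp_med_cost            # (3) AD medication for correct diagnoses
            + fp * fp_med_cost)           # (4) medication for patients without AD

n = 25_000  # hypothetical number of tested plan members
cdp = pathway_cost(n, diag_cost=3_000, sens=0.70, spec=0.80, prevalence=0.5,
                   fn_extra_care_cost=6_000, tp_med_cost=1_500, fp_med_cost=1_500)
new = pathway_cost(n, diag_cost=3_400, sens=0.90, spec=0.90, prevalence=0.5,
                   fn_extra_care_cost=6_000, tp_med_cost=1_500, fp_med_cost=1_500)
print(f"Hypothetical one-year net savings: ${cdp - new:,.0f}")
```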

Keywords: Alzheimer’s disease, budget, dementia, diagnosis

Procedia PDF Downloads 138
67 Investigating the Nature of Transactions Behind Violations Along Bangalore’s Lakes

Authors: Sakshi Saxena

Abstract:

Bangalore is an IT-industry-based metropolitan city in the state of Karnataka, India. It has experienced tremendous urbanization at the expense of the environment. Several instances of disappearing lakes have raised questions about development over and near ecologically sensitive areas. Lakes in Bangalore can be considered commons on both a local and a regional scale, and these water bodies are becoming less interconnected because of encroachment in their catchment areas. Other sociocultural and environmental risks that have led to social issues are now a source of concern. The lakes serve as an example of the transformation of commons, a dilemma that emerges as land is transformed from rural to urban use, as well as of the complicated institutional issues associated with governance. According to some scholarly work and ecologists, a nexus of public and commercial institutions is primarily responsible for the depletion of water tanks and the inefficiency of the planning process. It is said that Bangalore's growth as an urban centre, together with the demands it created, particularly on land and water, resulted in the emergence of a middle and upper class that was demanding and self-assured. For the report in focus, it is essential to understand the issues and problems that led to these encroachments and to capture any violations around these lakes and tanks that arose during these decades. To claim watersheds and lake edges as properties, institutional arrangements (organizations, laws, and policies) intersect with planning authorities. It is claimed that, owing to unregulated or indiscriminate forms of urbanization, the engagement of actors and the negotiations of the process, including government ignorance, are allowing this problem to flourish. In general, the governance of natural resources in India is largely state-based. This is due to the constitutional scheme, which since the Government of India Act, 1935 has in principle given the states the power to legislate in this area. Thus, states have the exclusive power to regulate water supplies, irrigation and canals, drainage and embankments, water storage, hydropower, and fisheries. The main aim is therefore to understand the institutional arrangements and the master planning processes behind them. To illustrate the ambiguity with an example, custodianship alone is a role divided between two state-level and two city-level bodies. This creates regulatory ambiguity, and the environmental effects include changes in city temperature, urban flooding, and the like. As established, the main issues around lakes and tanks in Bangalore are encroachment and depletion. This study will be further enhanced by a physical survey of three of these lakes, focusing on the Bellandur site and the stakeholders involved. According to the study's findings thus far, corrupt politicians and dubious land transaction tools are involved in the real estate industry. It appears that some destruction could have been stopped, or at least mitigated, if there had been a robust system of urban planning processes along with strong institutional arrangements to protect the lakes.

Keywords: wetlands, lakes, urbanization, Bangalore, politics, reservoirs, municipal jurisdiction, lake connections, institutions

Procedia PDF Downloads 78
66 Investigations on the Fatigue Behavior of Welded Details with Imperfections

Authors: Helen Bartsch, Markus Feldmann

Abstract:

The dimensioning of steel structures subject to fatigue loads, such as wind turbines, bridges, masts and towers, crane runways and weirs, or components in crane construction, is often dominated by the fatigue verification. The fatigue details defined by the welded connections, such as butt or cruciform joints, longitudinal welds, welded-on or welded-in stiffeners, etc., are decisive. In Europe, the verification is usually carried out according to EN 1993-1-9 on a nominal stress basis. The basis is the detail catalogue, which specifies the fatigue strength of the various weld and construction details according to fatigue classes. Until now, a relation between fatigue classes and weld imperfection sizes has not been included. Quality levels for imperfections in fusion-welded joints in steel, nickel, titanium and their alloys are regulated in EN ISO 5817, which, however, does not contain direct correlations to fatigue resistances. The question arises whether some imperfections might be tolerable to a certain extent, since they may already be present in the test data used for the detail classifications dating back decades. Although current standardization requires proof that imperfection size limits are satisfied, it would also be possible to tolerate welds with certain irregularities if these can be reliably quantified by non-destructive testing. Fabricators would be prepared to undertake careful and sustained weld inspection in view of the significant economic consequences of unfavorable fatigue classes. This paper presents investigations on the fatigue behavior of common welded details containing imperfections. In contrast to the common nominal stress concept, local fatigue concepts are used to consider the true stress increase, i.e., the local stresses at the weld toe and root. The actual shape of a weld containing imperfections, e.g., gaps or undercuts, can be incorporated into the fatigue evaluation, usually on a numerical basis. With the help of the effective notch stress concept, the fatigue resistance of detailed local weld shapes is assessed. Validated numerical models serve to investigate the notch factors of fatigue details with different geometries, and detailed numerical studies have been performed using parametrized ABAQUS routines. Depending on the shape and size of the different weld irregularities, fatigue classes can be defined. Both load-carrying welded details, such as the cruciform joint, and non-load-carrying welded details, e.g., welded-on or welded-in stiffeners, are considered. The investigated imperfections include, among others, undercuts, excessive convexity, incorrect weld toe geometry, excessive asymmetry and insufficient or excessive throat thickness. The impact of the different imperfections on the different types of fatigue details is compared, and the influence of a combination of crucial weld imperfections on the fatigue resistance is analyzed. In view of the trend towards greater efficiency in steel construction, the overall aim of the investigations is a more economical differentiation of fatigue details with respect to tolerance sizes. In the long term, the harmonization of design standards, execution standards and regulations on weld imperfections is intended.
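To illustrate how an effective notch stress feeds into a fatigue life estimate, the short Python sketch below combines an FE-derived notch stress range (here just an assumed number) with a single-slope S-N curve of slope m = 3 referenced to a characteristic strength at 2 million cycles. The FAT value shown corresponds to the class commonly associated with the effective notch stress approach for steel in the IIW recommendations, while the nominal stress range and notch factor are purely illustrative and are not results of the paper.

```python
# Fatigue life from an effective notch stress range using a single-slope S-N curve:
# N = 2e6 * (FAT / delta_sigma)^m.  Stress range and notch factor are illustrative.
FAT = 225.0          # characteristic fatigue strength at 2e6 cycles, MPa
                     # (class commonly used with the effective notch stress
                     #  approach for steel in the IIW recommendations)
m = 3.0              # S-N curve slope in the finite-life range

nominal_range = 80.0   # nominal stress range, MPa (assumed)
K_notch = 2.4          # notch factor from an FE model of the imperfect weld (assumed)
delta_sigma = K_notch * nominal_range   # effective notch stress range, MPa

N = 2e6 * (FAT / delta_sigma) ** m
print(f"Effective notch stress range: {delta_sigma:.0f} MPa")
print(f"Estimated fatigue life: {N:,.0f} cycles")
```

In this framework, a larger imperfection raises the notch factor, which lowers the estimated life; conversely, quantifying tolerable imperfection sizes amounts to deciding which notch factors still satisfy a given fatigue class.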

Keywords: effective notch stress, fatigue, fatigue design, weld imperfections

Procedia PDF Downloads 261