Search results for: broad crested weir
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 980

140 Seismic Data Analysis of Intensity, Orientation and Distribution of Fractures in Basement Rocks for Reservoir Characterization

Authors: Mohit Kumar

Abstract:

Natural fractures are classified into two broad categories, joints and faults, on the basis of shear movement in the deposited strata. Natural fractures have a strong structural relationship with extensional or non-extensional tectonics, sometimes expressed in the form of micro-cracks. Geological evidence suggests that both large- and small-scale fractures help in analyzing seismic anisotropy, which contributes essentially to characterizing the petrophysical behavior associated with directional migration of fluid. One may ask why basement study is needed at all, since basement has historically been treated as non-productive and geoscientists had little interest in exploring basement rocks. Basement rock is subjected to high pressure and temperature and tends to be highly fractured because of the tectonic stresses applied to the formation, along with other geological factors such as depositional trend, internal stress of the rock body, rock rheology, pore fluid, and capillary pressure. Sometimes carbonate rocks also play the role of basement, with an igneous body, e.g., basalt, deposited over them; fluid then migrates from the carbonate to the igneous rock through buoyancy forces and the permeability generated by fracturing. To analyze the complete petroleum system, fluid migration characterization (FMC) through the fractured media is therefore necessary, covering fracture intensity, orientation, and distribution in both the basement rock and the country rock. A good understanding of fractures thus helps project the correct wellbore trajectory, one that passes through potential permeable zones generated under intensified pressure-temperature and tectonic stress conditions.
This paper deals with the analysis of these fracture properties, namely intensity, orientation, and distribution, in basement rock. Large-scale fractures can be interpreted on a seismic section; small-scale fractures, however, are ambiguous to interpret because they lie below the seismic wavelength and hence yield erroneous results in identification. Seismic attribute techniques also help delineate fractures and subtle changes in fracture zones, which can be inferred from azimuthal anisotropy in velocity and amplitude and from spectral decomposition. Seismic azimuthal anisotropy derives fracture intensity and orientation from compressional-wave and converted-wave data and is based on the variation of amplitude or velocity with azimuth. A detailed analysis of fractured basement still requires a full isotropic and anisotropic analysis of the fracture matrix and the surrounding rock matrix in order to characterize the spatial variability of basement fractures that supports the migration of fluid from the basement into the overlying rock.
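The amplitude-versus-azimuth idea mentioned above can be sketched numerically. The model and all values below are illustrative assumptions, not data from the study: reflection amplitude is taken to vary with source-receiver azimuth as A(phi) ≈ A0 + B·cos 2(phi − phi0), and a linear least-squares fit recovers the anisotropy magnitude B (a fracture-intensity proxy) and the symmetry axis phi0 (fracture orientation).

```python
import numpy as np

# Expanding A0 + B*cos(2*(phi - phi0)) gives a model that is linear in
# [1, cos(2*phi), sin(2*phi)], so an ordinary least-squares fit suffices.

def fit_avaz(azimuths_deg, amplitudes):
    """Return (A0, B, phi0_deg) from amplitudes sampled over azimuth."""
    phi = np.radians(azimuths_deg)
    G = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    a0, c, s = np.linalg.lstsq(G, amplitudes, rcond=None)[0]
    B = np.hypot(c, s)                                 # anisotropy magnitude
    phi0 = 0.5 * np.degrees(np.arctan2(s, c)) % 180.0  # symmetry axis in [0, 180)
    return a0, B, phi0

# Synthetic check: 36 azimuths, true orientation 30 degrees
az = np.arange(0, 360, 10)
amp = 1.0 + 0.2 * np.cos(2 * np.radians(az - 30))
A0, B, phi0 = fit_avaz(az, amp)
print(round(A0, 3), round(B, 3), round(phi0, 1))  # -> 1.0 0.2 30.0
```

Real AVAz work uses prestack gathers and noise-robust estimators; this sketch only shows the underlying parameterization.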

Keywords: basement rock, natural fracture, reservoir characterization, seismic attribute

Procedia PDF Downloads 175
139 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated-timber (CLT) had been utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and of the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors, and interdisciplinary teams of consultants (architects, structural engineers, acoustical experts, etc.), this paper will trace the introduction, expansion, and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled it to function as a liaison both between contractor and client and between contractor and producer, a position that allowed it to improve the flow of information.
This ensured that CLT was handled on equal terms with other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects, and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in large, well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 200
138 RT-PCR Negative COVID-19 Infection in a Bodybuilding Competitor Using Anabolic Steroids: A Case Report

Authors: Mariana Branco, Nahida Sobrino, Cristina Neves, Márcia Santos, Afonso Granja, João Rosa Oliveira, Joana Costa, Luísa Castro Leite

Abstract:

This case reports a COVID-19 infection in an unvaccinated adult man with no history of COVID-19 and no relevant clinical history other than anabolic steroid use, undergoing weaning with tamoxifen after a bodybuilding competition. The patient presented with a 4 cm cervical mass 3 weeks after a COVID-19 infection in his cohabitants. He was otherwise asymptomatic and tested negative on multiple RT-PCR tests; nevertheless, COVID-19 IgG antibody was positive, suggesting previous infection. Objectives: The goals of this paper are to raise a potential link between anabolic steroid use and atypical COVID-19 onset and to report an uncommon case of COVID-19 infection with consecutive negative gold-standard tests. Methodology: The authors used the CARE guidelines for case report writing. Introduction: Although the association between COVID-19 and lymphadenopathy is well established, no cases with this presentation and consistently negative reverse transcription polymerase chain reaction (RT-PCR) tests have been reported. Case presentation: A 28-year-old Caucasian man with no previous history of COVID-19 infection or vaccination and no relevant clinical history besides anabolic steroid use, undergoing weaning with tamoxifen due to participation in a bodybuilding competition, visited his primary care physician because of a large (4 cm) cervical lump present for 3 days prior to the consultation. There was a positive family history of COVID-19 infection 3 weeks before the visit, during which the patient cohabited with the infected family members.
The patient never had any previous clinical manifestation of COVID-19 infection and, despite multiple consecutive RT-PCR tests, never tested positive. He was treated with an NSAID and a broad-spectrum antibiotic, with little to no effect. Imaging was performed via cervical ultrasound, followed by a needle biopsy for histologic analysis. Serologic testing for COVID-19 immunity revealed a positive anti-SARS-CoV-2 IgG (Spike S1) antibody, suggesting previous infection given the unvaccinated status of our patient. Conclusion: In patients with a positive epidemiologic context and cervical lymphadenopathy, physicians should still consider COVID-19 infection as a differential diagnosis despite negative PCR testing. This case also raises a potential link between anabolic steroid use and atypical COVID-19 onset, not previously reported in the scientific literature.

Keywords: COVID-19, cervical lymphadenopathy, anabolic steroids, primary care

Procedia PDF Downloads 93
137 Notes on Matter: Ibn Arabi, Bernard Silvestris, and Other Ghosts

Authors: Brad Fox

Abstract:

Between something and nothing, a bit of both, neither/nor, a figment of the imagination, the womb of the universe - questions of what matter is, where it exists, and what it means continue to surge up from the bottom of our concepts and theories. This paper looks at divergences and convergences, intimations and mistranslations, in a lineage of thought that begins with Plato’s Timaeus, travels through Arabic Spain and Syria, and finally ends up in the language of science. Up to the 13th century, philosophers in Christian France based such inquiries on a questionable, fragmented translation of the Timaeus by Calcidius, with a commentary that conflated the Platonic concept of khora (‘space’ or ‘void’) with Aristotle’s hyle (‘primal matter’, derived from the word for wood as a building material). Both terms were translated by Calcidius as silva. For 700 years, this was the only source for philosophers of matter in the Latin-speaking world. Bernard Silvestris, in his Cosmographia, exemplifies the concepts developed before new translations from Arabic began to pour into the Latin world from centers such as the court of Toledo. Unlike their counterparts across the Pyrenees, 13th-century philosophers in Muslim Spain had access to a broad vocabulary for notions of primal matter. The prolific and visionary theologian, philosopher, and poet Muhyiddin Ibn Arabi could draw on the Ikhwan Al-Safa’s 10th-century renderings of Aristotle, which translated the Greek hyle as the everyday Arabic word maddah, still used for building materials today. He also often used the simple transliteration of hyle as hayula, probably taken from Ibn Sina. The Prophet’s son-in-law Ali spoke of dust in the air, invisible until struck by sunlight. Ibn Arabi adopted this dust - haba - as an expression for an original metaphysical substance, nonexistent but susceptible to manifesting forms.
Ibn Arabi compares the dust to a phoenix: we have heard about it and can conceive of it, but it has no existence unto itself and can be described only in similes. Elsewhere he refers to it as quwwa wa salahiyya - pure potentiality and readiness. The final portion of the paper compares Bernard’s and Ibn Arabi’s notions of matter to the ontology recently developed by theoretical physicist and philosopher Karen Barad. Reading Barad alongside the work of Niels Bohr, it argues that there is a rich resonance between Ibn Arabi’s paradoxical conceptions of matter and the quantum vacuum fluctuations verified by recent laboratory experiments. The inseparability of matter and meaning in Barad recalls Ibn Arabi’s original response to Ibn Rushd’s question of whether revelation offers the same knowledge as rationality: ‘Yes and No,’ Ibn Arabi said, ‘and between the yes and no spirit is divided from matter and heads are separated from bodies.’ Ibn Arabi’s double affirmation continues to offer insight into our relationship to momentary experience at its most fundamental level.

Keywords: Karen Barad, Muhyiddin Ibn Arabi, primal matter, Bernard Silvestris

Procedia PDF Downloads 409
136 Ecological and Historical Components of the Cultural Code of the City of Florence as Part of the Edutainment Project Velonotte International

Authors: Natalia Zhabo, Sergey Nikitin, Marina Avdonina, Mariya Nikitina

Abstract:

The paper analyzes one of the events of the international educational and entertainment project Velonotte: an evening bicycle tour with children around Florence. The aim of the project is to develop methods and techniques for increasing the sensitivity of the cycling participants and of listeners of the radio broadcasts to the treasures of the national heritage, in this case the historical layers of the city and the ecology of the Renaissance epoch. The block of educational tasks is considered, and issues of preserving the identity of the city are discussed. Methods: The Florentine event was prepared over more than a year. First, the creative team selected events from the history of the city that seemed important for revealing its specifics and spirit - from antiquity to our days - drawing also on Internet forums reflecting broad public opinion. Then a seven-kilometer route was developed and proposed to the authorities and organizations of the city. Speakers were selected according to several criteria: they should be authors of books, famous scientists, or connoisseurs in a certain sphere (toponymy, history of urban gardens, art history), capable of and willing to talk with participants directly at the stopping points, so that a dialogue could take place and performances could be organized with their participation. Music was chosen for each part of the itinerary to prepare the audience emotionally. Coloring cards with images of the main content of each stop were created for children. A website was created to inform the participants and afterwards to host the photos, videos, and audio recordings of the speakers. Results: Held in April 2017, the event was dedicated to the 640th anniversary of Filippo Brunelleschi, the Florentine architect, and to the 190th anniversary of the publication of Stendhal’s guide to Florence. It was supported by the City of Florence and the Florence Bike Festival.
Florence was explored to transfer traditional, sometimes unfairly forgotten elements of culture - from ancient times through Brunelleschi and Michelangelo to Tchaikovsky and David Bowie - with lectures by university professors. Memorable art boards were installed in public spaces. Elements of the cultural code are deeply internalized in the minds of the townspeople; the perception of the city in everyday life and human communication is comparable to such fundamental concepts of the townspeople’s self-awareness as mental comfort and the level of happiness. The format of a fun and playful walk with ICT support gives new opportunities for enriching each citizen’s cultural code of the city with new components, associations, and connotations.

Keywords: edutainment, cultural code, cycling, sensitization, Florence

Procedia PDF Downloads 194
135 Narcissism in the Life of Howard Hughes: A Psychobiographical Exploration

Authors: Alida Sandison, Louise A. Stroud

Abstract:

Narcissism is a personality configuration with both normal and pathological expressions. It is highly complex and linked to a broad field of research. There are both dimensional and categorical conceptualisations of narcissism, and a variety of theoretical formulations have been put forward to understand the narcissistic personality configuration. Currently, Kernberg’s Object Relations theory is well supported for this purpose. The complexity of, and particular defense mechanisms at play in, the narcissistic personality make it a difficult configuration to study, yet one worth further research. Psychobiography as a methodology allows for the exploration of a lived life and is thus useful for surmounting these inherent challenges. Although narcissism has long been a focus of academic interest and much research has been done in this area, to the researchers' knowledge narcissistic dynamics have never been explored in a psychobiographical format. Thus, the primary aim of the research was to explore and describe narcissism in the life of Howard Hughes, with the objective of gaining further insight into narcissism through this unconventional research approach. Hughes was chosen as the subject of the study because he is renowned as an eccentric billionaire who had a revolutionary effect on the world while concurrently being disturbed by his personal pathologies. Hughes was dynamic in three different sectors, namely motion pictures, aviation, and gambling. He became increasingly reclusive as he entered middle age. From his early fifties he was agoraphobic, and the social network of connectivity that could reasonably be expected from someone at the top of their field was notably distorted. Due to his strong narcissistic personality configuration and the interpersonal difficulties he experienced, Hughes represents an ideal figure through whom to explore narcissism.
The study used a single case study design and purposive sampling to select Hughes. Qualitative data were sampled using secondary data sources. Given that Hughes was a famous figure, there is a plethora of information on his life, primarily biographical: books written about his life, and archival material in the form of newspaper articles, interviews, and movies. Gathered data were triangulated to avoid author bias and to increase the credibility of the data used, and were collected following Yin’s guidelines for data collection. Data were analysed using Miles and Huberman’s strategy of data analysis, which consists of three steps: data reduction, data display, and conclusion drawing and verification. Patterns that emerged in the data highlighted the defense mechanisms used by Hughes in defending his sense of self, in particular splitting and projection. These defense mechanisms help us to understand the high levels of entitlement and paranoia experienced by Hughes. Findings provide further insight into his sense of isolation and difference and the consequent difficulty he experienced in maintaining connections with others. Findings furthermore confirm the effectiveness of Kernberg’s theory in understanding narcissism through observing an individual life.

Keywords: Howard Hughes, narcissism, narcissistic defenses, object relations

Procedia PDF Downloads 332
134 An Assessment of Suitable Alternative Public Transport System in Mid-Sized City of India

Authors: Sanjeev Sinha, Samir Saurav

Abstract:

The rapid growth of urban areas in India has led to transportation challenges like traffic congestion and an increase in accidents. Despite efforts by state governments and local administrations to improve urban transport, the surge in private vehicles has worsened the situation. Patna, located in Bihar State, is an example of the trend of increasing reliance on private motor vehicles, resulting in vehicular congestion and emissions. The existing transportation infrastructure is inadequate to meet future travel demands, and there has been a notable increase in the share of private vehicles in the city. Additionally, there has been a surge in economic activities in the region, which has increased the demand for improved travel convenience and connectivity. To address these challenges, a study was conducted to assess the most suitable transit mode for the proposed transit corridor outlined in the Comprehensive Mobility Plan (CMP) for Patna. The study covered four stages: developing screening criteria, evaluating parameters for various alternatives, qualitative and quantitative evaluations of alternatives, and implementation options for the most viable alternative. The study suggests that a mass transit system such as a metro rail is necessary to enhance Patna's urban public transport system. The New Metro Policy 2017 outlines specific prerequisites for submitting a Metro Rail Project Proposal to the Ministry of Housing and Urban Affairs (MoHUA), including the preparation of a CMP, the formation of an Urban Metropolitan Transport Authority (UMTA), the creation of an Alternative Analysis Report, the development of a Detailed Project Report, a Multi-Modal Integration Plan, and a Transit-Oriented Development (TOD) Plan. In 2018, the Comprehensive Mobility Plan for Patna was prepared, setting the stage for the subsequent steps in the metro rail project proposal. 
Screening and analysis of qualitative parameters for the different alternative modes in Patna showed that Metro Rail and Monorail score 82.25 and 70.50, respectively, on a scale of 100. In the subsequent quantitative evaluation, the Metro Rail system significantly outperformed the Monorail system: Metro Rail has a positive Economic Net Present Value (ENPV) at a 14% discount rate, while the Monorail’s is negative. Moreover, the lack of broad-based technical expertise for monorail may result in implementation delays and increased costs. In conclusion, the study recommends choosing metro rail over monorail for the proposed transit corridor in Patna.
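The ENPV comparison behind this recommendation reduces to a discounted-cash-flow calculation. A minimal sketch, with invented cash flows rather than figures from the Patna study:

```python
# Illustrative only: the cash flows below are placeholders, not data
# from the Comprehensive Mobility Plan or the alternatives analysis.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, cash_flows[0] at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical streams (costs negative, benefits positive), discounted at 14%
metro    = [-1000, 150, 200, 250, 300, 350, 400, 450]
monorail = [-800,   60,  80, 100, 110, 120, 130, 140]

print(npv(0.14, metro) > 0)     # -> True  (positive ENPV favours the mode)
print(npv(0.14, monorail) > 0)  # -> False
```

An appraisal of this kind compares each alternative's discounted benefits against its discounted costs over the project horizon; the mode with the higher (positive) ENPV is preferred.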

Keywords: comprehensive mobility plan, alternative analysis, mobility corridors, mass transit system

Procedia PDF Downloads 87
133 A Survey of Digital Health Companies: Opportunities and Business Model Challenges

Authors: Iris Xiaohong Quan

Abstract:

The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to deliver clinical health services to patients remotely. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been undergoing in the 21st century, because digital health technologies provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definition confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” returned 427.
We also searched the CBInsights database for similar information. After merging the two result sets, removing duplicates, and cleaning up the database, we came up with a list of 1464 digital health companies. A qualitative method is used to complement the quantitative analysis: an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010; 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” of patient, physician, and payer, digital health companies extend this to “5P” by adding patents, a result of technology requirements (such as the development of artificial intelligence models), and platform, an effective value-creation approach that brings the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry.
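The merge-and-deduplicate step described above can be sketched as follows; the company names and the normalization rule are invented for illustration (real Crunchbase/CBInsights records carry many more fields and need fuzzier matching):

```python
# Hypothetical sketch of merging two company lists and collapsing
# trivial name variants into a single deduplicated list.

def normalize(name):
    """Crude normalization so trivial variants map to one key."""
    return name.lower().replace(",", "").replace(".", "").replace(" inc", "").strip()

def merge_companies(*sources):
    seen, merged = set(), []
    for source in sources:
        for name in source:
            key = normalize(name)
            if key not in seen:       # keep the first spelling encountered
                seen.add(key)
                merged.append(name)
    return merged

crunchbase = ["Acme Health, Inc.", "TeleCare", "WellData"]
cbinsights = ["acme health inc", "WellData", "MedSignal"]
print(merge_companies(crunchbase, cbinsights))
# -> ['Acme Health, Inc.', 'TeleCare', 'WellData', 'MedSignal']
```

In practice, exact-key deduplication like this is only a first pass; near-duplicate names usually require fuzzy string matching or manual review.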

Keywords: digital health, business models, entrepreneurship opportunities, healthcare

Procedia PDF Downloads 166
132 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows and are associated with core-collapse supernovae. The delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While there is strong diversity among the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After the initial flash of gamma rays, a longer-lived afterglow is emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the afterglow fades quickly at first and then transitions to a less steep decline (its later behavior is beyond the scope of this discussion). During these early phases, the X-ray afterglow has a power-law spectrum: flux F ∝ E^β, where E is the photon energy and β is the spectral index. Such a spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be misleading: this shock is still moving outward at relativistic velocity in the rest frame of the star, but it ploughs backward through the ejecta in their frame and slows the expansion. The reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, with the cooling break lying between the optical and the X-ray. Numerous observations, such as the spectral energy distributions of the afterglows of very bright GRBs, support this broad picture. The bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). Further research in GRBs and particle physics is needed to unfold the mysteries of the afterglow.
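The power-law spectrum F ∝ E^β makes the spectral index straightforward to estimate: a power law is a straight line in log-log space, so β is the slope of a linear fit. A toy sketch on synthetic values (not GRB data):

```python
import numpy as np

# Recover the spectral index beta of F = k * E**beta by linear
# regression in log-log space, as one would for an X-ray afterglow
# spectrum. The energy grid and normalization are invented.

def spectral_index(energies, fluxes):
    slope, _ = np.polyfit(np.log(energies), np.log(fluxes), 1)
    return slope

E = np.logspace(0, 2, 20)   # photon energies (arbitrary units)
F = 3.0 * E ** -1.2         # synthetic spectrum with beta = -1.2
print(round(spectral_index(E, F), 3))  # -> -1.2
```

Real spectral fits are done on binned counts with instrument responses and absorption models; the log-log slope is only the idealized core of the method.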

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 72
131 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains

Authors: Neha Farid

Abstract:

Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is a growing public health concern worldwide. MDR pathogens can penetrate the food chain, posing a serious risk to both consumers and animals. Food pathogens are biological agents that tend to cause disease in the host upon ingestion. The major reservoirs of foodborne pathogens include food-producing animals such as cows, pigs, goats, sheep, and deer, whose intestines harbor many different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the food illness cases reported each year are caused by bacterial food pathogens. When ingested, these pathogens reproduce and survive, or form different kinds of toxins inside host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis, a gastroenteritis that induces fever, vomiting, and severe diarrhea. Campylobacter jejuni is a gram-negative, curved rod-shaped bacterium causing foodborne illness; its major sources are livestock and poultry, chicken in particular being heavily colonized with Campylobacter jejuni. The widespread growth of antibiotic-resistant bacteria, together with the slowing discovery of new classes of medicines, is a serious public health concern. The objective of this study is to evaluate the antibacterial activity of certain broad-spectrum antibiotics and of bacteriocins from specific Enterococcus faecium strains against these pathogens, with a view to blocking microbial contamination pathways and thereby safeguarding food by lowering food deterioration, contamination, and foodborne illness.
The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by gram staining and biochemical testing, and were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. The Campylobacter strains showed resistance against all the antibiotics, whereas Listeria was resistant only to nalidixic acid and erythromycin. The strains were then tested against the two bacteriocins isolated from Enterococcus faecium, which showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. The study thus concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problems of food spoilage and severe foodborne disease.

Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins

Procedia PDF Downloads 46
130 Changing the Landscape of Fungal Genomics: New Trends

Authors: Igor V. Grigoriev

Abstract:

Understanding of the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool to understand these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes’, aims to explore the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of ‘macro’ fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early-diverging fungi and resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. The genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. 
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool to understand gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated in the JGI MycoCosm portal and equipped with comparative genomics tools that enable researchers to address a broad spectrum of biological questions and applications in bioenergy and biotechnology.

Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics

Procedia PDF Downloads 185
129 The Lonely Entrepreneur: Antecedents and Effects of Social Isolation on Entrepreneurial Intention and Output

Authors: Susie Pryor, Palak Sadhwani

Abstract:

The purpose of this research is to provide the foundations for a broad research agenda examining the role loneliness plays in entrepreneurship. While qualitative research in entrepreneurship incidentally captures the existence of loneliness as a part of the lived reality of entrepreneurs, to the authors’ knowledge, no academic work has to date explored this construct in this context. Moreover, many individuals reporting high levels of loneliness (women, ethnic minorities, immigrants, the low-income, the low-educated) reflect those who are currently driving small business growth in the United States. Loneliness is a persistent state of emotional distress which results from feelings of estrangement and rejection or develops in the absence of social relationships and interactions. Empirical work finds links between loneliness and depression, suicide and suicide ideation, anxiety, hostility and passiveness, lack of communication and adaptability, shyness, poor social skills and unrealistic social perceptions, self-doubt, fear of rejection, and negative self-evaluation. Lonely individuals have been found to exhibit lower levels of self-esteem, higher levels of introversion, lower affiliative tendencies, less assertiveness, higher sensitivity to rejection, a heightened external locus of control, intensified feelings of regret and guilt over past events, and rigid, overly idealistic goals concerning the future. These characteristics are likely to impact entrepreneurs and their work. Research identifies some key dangers of loneliness. Loneliness damages human love and intimacy, can disturb and distract individuals from channeling creative and effective energies in a meaningful way, may lead to premature, poorly thought-out, and at times even irresponsible decisions, and can produce hardened, desensitized individuals with compromised health and quality of life. 
The current study utilizes meta-analysis and text analytics to distinguish loneliness from other related constructs (e.g., social isolation) and to categorize antecedents and effects of loneliness across subpopulations. This work has the potential to materially contribute to the field of entrepreneurship by cleanly defining constructs and providing foundational background for future research. It offers a richer understanding of the evolution of loneliness and related constructs over the life cycle of entrepreneurial start-up and development. Further, it suggests preliminary avenues for exploration and methods of discovery that will result in knowledge useful to the field of entrepreneurship. It is useful both to entrepreneurs and those who work with them, and to academics interested in the topics of loneliness and entrepreneurship. The study adopts a grounded theory approach.

Keywords: entrepreneurship, grounded theory, loneliness, meta-analysis

Procedia PDF Downloads 97
128 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints

Authors: G. Swati, D. Haranath

Abstract:

Commercially available long-afterglow/persistent phosphors are proprietary materials, and hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, Kiton, and methylene blue. Blending phosphors with organic dyes gives complete color coverage of the visible spectrum; however, with time such phosphors undergo thermal and photo-bleaching, which results in the loss of their true emission color. Hence, the current work is dedicated to studies of inorganic, thermally and chemically stable, primary-color-emitting nanophosphors, namely SrAl2O4:Eu2+, Dy3+, (CaZn)TiO3:Pr3+, and Sr2MgSi2O7:Eu2+, Dy3+. The SrAl2O4:Eu2+, Dy3+ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f65d1→4f7 transition of Eu2+ (8S7/2→2D5/2). Sunlight-excitable Sr2MgSi2O7:Eu2+, Dy3+ nanophosphors emit blue light (464 nm) with Commission Internationale de l'Eclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO3:Pr3+ phosphor system possesses high color purity (98%) and emits an intense, stable, and narrow red emission at 612 nm due to intra-4f transitions (1D2 → 3H4), with an afterglow time of 0.5 hour. The unusual persistent-luminescence property of these nanophosphors suppresses background effects without losing sensitive information. These nanophosphors offer the advantages of visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and non-toxicity, revealing fine ridge details of fingerprints. 
Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison, and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates using a chemically inert, visible-light-excitable, nanosized phosphorescent label in the dark has been presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and sweat residue in fingerprints has been discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color-emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.

Keywords: fingerprints, luminescence, persistent phosphors, rare earth

Procedia PDF Downloads 184
127 Chemistry and Biological Activity of Feed Additive for Poultry Farming

Authors: Malkhaz Jokhadze, Vakhtang Mshvildadze, Levan Makaradze, Ekaterine Mosidze, Salome Barbaqadze, Mariam Murtazashvili, Dali Berashvili, Koba sivsivadze, Lasha Bakuridze, Aliosha Bakuridze

Abstract:

Essential oils are one of the most important groups of biologically active substances present in plants. Due to the chemical diversity of their components, essential oils and their preparations have a wide spectrum of pharmacological action: they have bactericidal, antiviral, fungicidal, antiprotozoal, anti-inflammatory, spasmolytic, sedative and other activities, and serve as expectorant, hypotensive, secretion-enhancing and antioxidant remedies. Based on preliminary pharmacological studies, we have developed a formulation called “Phytobiotic” containing essential oils, a feed additive for poultry intended as an alternative to antibiotics. Phytobiotic is a water-soluble powder containing a composition of essential oils of thyme, clary, and monarda, and the auxiliary substances dry extract of liquorice and inhalation lactose. At this stage of the research, the goals were to study the chemical composition of the phytobiotic, identify the main substances and determine their quantities, and investigate its biological activity through in vitro and in vivo studies. Using gas chromatography-mass spectrometry, 38 components were identified in the phytobiotic, representing acyclic, monocyclic, bicyclic, and sesquiterpenes. Together with identification of the main active substances, their quantitative content was determined, including the acyclic terpene alcohol β-linalool, the acyclic terpene ester linalyl acetate, the monocyclic terpenes D-limonene and γ-terpinene, and the monocyclic aromatic terpene thymol. The phytobiotic has a pronounced and at the same time broad spectrum of antibacterial activity. In the cell model, the phytobiotic showed weak antioxidant activity, which was stronger in the ORAC (chemical model) tests. Anti-inflammatory activity was also observed. 
When fowls were supplied feed enriched with the phytobiotic, the weight gained by the chickens in the experimental group exceeded that of the control group during the entire period of the experiment. The survival rate of broilers in the experimental group during the growth period was 98%, compared to 94% in the control group. The research identified four probable mechanisms important for the action of phytobiotics: sensory, metabolic, antioxidant, and antibacterial. The general toxicity and the possible local irritant and allergenic effects of the phytobiotic were also investigated. The performed assays proved that the formulation is safe.

Keywords: clary, essential oils, monarda, poultry, phytobiotics, thyme

Procedia PDF Downloads 150
126 Identifying the Effects of the Rural Demographic Changes in the Northern Netherlands: A Holistic Approach to Create Healthier Environment

Authors: A. R. Shokoohi, E. A. M. Bulder, C. Th. van Alphen, D. F. den Hertog, E. J. Hin

Abstract:

The Northern region of the Netherlands has beautiful landscapes, a nice diversity of green and blue areas, and dispersed settlements. However, some recent population changes can become threats to health and wellbeing in these areas. The rural areas in the three northern provinces - Groningen, Friesland, and Drenthe - have seen young people leave the region, for which reason they are aging faster than other regions in the Netherlands. As a result, some villages have faced major population decline that is leading to a loss of facilities and amenities and a decrease in accessibility and social cohesion. Those who still live in these villages are relatively old, low-educated, and have low incomes. To develop a deeper understanding of the health status of the people living in these areas, and to help them improve their living environment, the GO!-method is being applied in this study. This method was developed by the National Institute for Public Health and the Environment (RIVM) of the Netherlands and is inspired by Machteld Huber's broad definition of health: the ability to adapt and to self-manage in the face of the physical, emotional and social challenges of life, while paying extra attention to vulnerable groups. A healthy living environment is defined as an environment that residents find pleasant and that encourages and supports healthy behavior. The GO!-method integrates six domains that constitute a healthy living environment: health and lifestyle, facilities and development, safety and hygiene, social cohesion and active citizens, green areas, and air and noise pollution. First of all, this method will identify opportunities for a healthier living environment using existing information and the perceptions of residents and other local stakeholders, in order to strengthen social participation and quality of life in these rural areas. 
Second, this approach will connect the identified opportunities with available, effective, evidence-based interventions in order to develop an action plan from the residents' and local authorities' perspective, which will help them make their municipalities healthier and more resilient. To the best of our knowledge, this method is being used for the first time in rural areas, in close collaboration with the residents and local authorities of the three provinces, to create a sustainable process and stimulate social participation. Our paper will present the outcomes of the first phase of this project in collaboration with the municipality of Westerkwartier, located in the northwest of the province of Groningen. It will describe the current situation and identify local assets, opportunities, and policies relating to a healthier environment, as well as the needs and challenges involved in achieving these goals. The preliminary results show that rural demographic changes in the northern Netherlands have negative impacts on service provision and social cohesion, and there is a need to understand this complicated situation and improve the quality of life in those areas.

Keywords: population decline, rural areas, healthy environment, Netherlands

Procedia PDF Downloads 78
125 Electroactive Fluorene-Based Polymer Films Obtained by Electropolymerization

Authors: Mariana-Dana Damaceanu

Abstract:

Electrochemical oxidation is one of the most convenient ways to obtain conjugated polymer films such as polypyrrole, polyaniline, polythiophene or polycarbazole. Research in the field has mainly been directed to the study of the electrical conduction properties of the materials obtained by electropolymerization, often with their use as electroconducting electrodes as the main aim, and very little attention has been paid to the morphological and optical quality of the films electrodeposited on flat surfaces. Electropolymerization of a monomer solution was rarely used in the past to manufacture polymer-based light-emitting diodes (PLEDs), most probably due to the difficulty of obtaining defect-free polymer films with good mechanical and optical properties, or conductive polymers with well-controlled molecular weights. Here we report our attempts to use electrochemical deposition as an appropriate method for preparing ultrathin films of fluorene-based polymers for PLED applications. The properties of these films were evaluated in terms of structural morphology, optical properties, and electrochemical conduction. Thus, electropolymerization of 4,4'-(9-fluorenylidene)-dianiline was performed in dichloromethane solution, at a concentration of 10^-2 M, using 0.1 M tetrabutylammonium tetrafluoroborate as the electrolyte salt. The potential was scanned between 0 and 1.3 V on the one hand, and 0 and 2 V on the other, yielding polymer films with different structures and properties. Indium tin oxide-coated glass substrates of different sizes were used as the working electrode, a platinum wire as the counter electrode, and a calomel electrode as the reference. For each potential range, 100 cycles were recorded at a scan rate of 100 mV/s. The film obtained in the potential range from 0 to 1.3 V, namely poly(FDA-NH), is visible to the naked eye, being light brown, transparent and fluorescent, and displays an amorphous morphology. 
In contrast, the poly(FDA) film electrogrown in the potential range of 0 - 2 V is yellowish-brown and opaque, presenting a self-assembled structure of aggregates of irregular shape and size. The polymers' structures were identified by FTIR spectroscopy, which shows the presence of broad bands specific to a polymer, the band centered at approx. 3443 cm^-1 being ascribed to the secondary amine. The two polymer films display absorption maxima at 434-436 nm, assigned to π-π* transitions of the polymers, and at 832 and 880 nm, assigned to polaron transitions. The fluorescence spectra indicated the presence of emission bands in the blue domain, with two peaks at 422 and 488 nm for poly(FDA-NH), and four narrow peaks at 422, 447, 460 and 484 nm for poly(FDA), the peaks originating from fluorene-containing segments of varying degrees of conjugation. Poly(FDA-NH) exhibited two oxidation peaks in the anodic region and a HOMO energy of 5.41 eV, whereas poly(FDA) showed only one oxidation peak and a HOMO level localized at 5.29 eV. The electrochemical data are discussed in close correlation with the proposed chemical structures of the electrogrown films. Further research will be carried out to study their use and performance in light-emitting devices.

Keywords: electrogrowth polymer films, fluorene, morphology, optical properties

Procedia PDF Downloads 324
124 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test

Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi

Abstract:

Supported by neuroscientific findings, the so-called hub-and-spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined on the basis of the latest census data to ensure representativeness. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We have measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we could determine the frequency of correct and incorrect responses, and with this knowledge, we could later adjust or remove items of questionable reliability. 
The reliability was tested using Cronbach’s α, and all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann–Whitney U test; however, we found no difference between the two genders (p > 0.05). Likewise, age had no effect on the results in a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with a nonparametric Spearman’s rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a monotonic relationship between the measured semantic functions. A significance level of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
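The nonparametric pipeline described above (normality check, Mann–Whitney U test, Cronbach's α, Spearman correlation) can be sketched in Python with SciPy. This is a minimal illustration on synthetic data: the study's raw scores are not published, so the sample below (five subtests scored 0-20, a 0/1 gender code) is an assumption, not the actual dataset.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
n = 480  # sample size reported in the abstract

# Hypothetical subtest scores: a shared ability factor plus noise,
# clipped to a 0-20 range (five semantic subtests, illustrative only).
base = rng.normal(15, 2, size=(n, 1))
scores = np.clip(base + rng.normal(0, 1.5, size=(n, 5)), 0, 20)
gender = rng.integers(0, 2, size=n)  # hypothetical 0/1 coding
total = scores.sum(axis=1)

# 1) Normality check; p < 0.05 would justify nonparametric tests.
_, p_norm = stats.shapiro(total)

# 2) Gender comparison with the Mann-Whitney U test.
_, p_gender = stats.mannwhitneyu(total[gender == 0], total[gender == 1])

# 3) Internal consistency; the abstract reports alpha in the 0.6-0.8 range.
alpha = cronbach_alpha(scores)

# 4) Inter-correlation of two subtests with Spearman's rho.
rho, p_rho = stats.spearmanr(scores[:, 0], scores[:, 1])
```

With real data the same calls apply unchanged; only the score matrix and grouping variables differ.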

Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge

Procedia PDF Downloads 78
123 Impact of Chess Intervention on Cognitive Functioning of Children

Authors: Ebenezer Joseph

Abstract:

Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors like age and gender of the child on the effectiveness of the chess intervention. This research study used an experimental design to study the impact of training in chess on the intelligence of children. The pre-test post-test control group design was utilized. The research design involved two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extra-curricular activities in school. The main independent variable was training in chess. Other independent variables were gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual motor skills and creativity). The sample consisted of 200 children studying in Government and Private schools. Random sampling was utilized. The sample included both boys and girls in the age range of 6 to 16 years. The experimental group consisted of 100 children (50 from Government schools and 50 from Private schools) with an equal representation of boys and girls. The control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India) and the Wallach-Kogan Creativity Test. 
The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1–22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics) and working with chess software. Further, students' games were mapped using chess software so that the thought patterns of the child could be studied. They were taught the ideas behind chess openings, and exposure to classical games was also given. The children participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group showed significant increases in composite scores such as working memory and perceptual reasoning. Chess training significantly enhanced the total creativity scores and the line drawing and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
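The pre-test/post-test control group comparison via independent t-tests can be illustrated as follows. All scores are simulated under an assumed effect size (a ~5-point IQ gain for the intervention group), so this is a sketch of the analysis design, not a reproduction of the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 50  # per-group size used in the preliminary analysis

# Hypothetical IQ scores before and after the one-year intervention.
pre_exp = rng.normal(100, 12, n)
post_exp = pre_exp + rng.normal(5, 4, n)   # assumed ~5-point gain
pre_ctl = rng.normal(100, 12, n)
post_ctl = pre_ctl + rng.normal(0, 4, n)   # no systematic gain

# Gain scores isolate the change attributable to the intervention.
gain_exp = post_exp - pre_exp
gain_ctl = post_ctl - pre_ctl

# Independent-samples t-test on the gains across the two groups.
t_stat, p = stats.ttest_ind(gain_exp, gain_ctl)

# Cohen's d as a standardized effect size (not reported in the abstract).
pooled_sd = np.sqrt((gain_exp.var(ddof=1) + gain_ctl.var(ddof=1)) / 2)
d = (gain_exp.mean() - gain_ctl.mean()) / pooled_sd
```

Comparing gain scores rather than raw post-test scores controls for baseline differences between the groups, which is the point of the pre-test post-test control group design.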

Keywords: chess, intelligence, creativity, children

Procedia PDF Downloads 238
122 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control

Authors: Mohamed Elmeshad

Abstract:

During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector, and a contraction of government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism led to increased pressure to open the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from financial and security perspectives, as well as operational capacities of distribution and access to the means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were the power dynamics within media establishments. The changes in the country's political economy itself mirrored these developments. This paper will examine these dynamics and shed light on the political economy of Egypt's newly privatized mass media, especially in the early 2000s. Methodology: This study will rely on semi-structured interviews with individuals involved in these changes from the perspective of the media organizations. It will also map out the process of media privatization by looking at the administrative, operative and legislative institutions and contexts, in order to draw conclusions on the methods of control and the role of the state during the process of privatization. 
Finally, a brief discourse analysis will be necessary in order to aptly convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egyptian private, “independent” media mirrored the trajectory of transitions in the country’s political economy. Liberalization of the economy meant that a growing class of business owners would explore the opportunities such new markets offered. However, the regime’s attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation, much of the regime’s manoeuvring with regard to the privatization of media was haphazardly used to indirectly expand the regime’s and its ruling party’s ability to retain influence, while creating a believable façade of openness. In this paper, we will attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.

Keywords: business, mass media, political economy, power, privatisation

Procedia PDF Downloads 210
121 An Audit of Climate Change and Sustainability Teaching in Medical School

Authors: Karolina Wieczorek, Zofia Przypaśniak

Abstract:

Climate change is a rapidly growing threat to global health, and part of the responsibility to combat it lies within the healthcare sector itself, including adequate education of future medical professionals. To mitigate the consequences, the General Medical Council (GMC) has equipped medical schools with a list of outcomes regarding sustainability teaching. Students are expected to be able to analyze the impact of the healthcare sector’s emissions on climate change. The delivery of the related teaching content is, however, often inadequate, and insufficient time is devoted to exploration of the topics; teaching curricula lack in-depth coverage of the learning objectives. This study aims to assess the extent and characteristics of climate change and sustainability teaching in the curriculum of a chosen UK medical school (Barts and The London School of Medicine and Dentistry). It compares the data to the national average scores from the Climate Change and Sustainability Teaching (C.A.S.T.) in Medical Education Audit to draw conclusions about teaching on a regional level. This is a single-center audit of the timetabled teaching sessions of the medical course. The study looked at the academic year 2020/2021 and included a review of all non-elective, core-curriculum teaching materials, including tutorials, lectures, written resources, and assignments, in all five years of the undergraduate and graduate degrees, focusing only on mandatory teaching attended by all students (excluding elective modules). The topics covered were cross-checked against the GMC outcomes for graduates, “Educating for Sustainable Healthcare – Priority Learning Outcomes”, as the gold standard, looking for coverage of the outcomes and gaps in teaching. Quantitative data were collected in the form of time allocated for teaching, as a proxy for the time spent per individual outcome. 
The data were collected independently by two students (KW and ZP), who received prior training and assessed two separate data sets to increase inter-rater reliability. In terms of coverage of the learning outcomes, 12 out of 13 were taught (against a national average of 9.7). The school ranked sixth in the UK for time spent per topic and second in terms of overall coverage, meaning the school teaches a broad range of topics, with some explored in more detail than others. For the first outcome, 4 out of 4 objectives were covered (average 3.5), with 47 minutes spent per outcome (average 84 min); for the second, 5 out of 5 were covered (average 3.5), with 46 minutes spent (average 20 min); for the third, 3 out of 4 (average 2.5), with 10 minutes spent (average 19 min). A disproportionately large amount of time is spent delivering teaching on air pollution (respiratory illnesses), with the result that sustainability in other specialties (musculoskeletal, ophthalmology, pediatrics, renal) is excluded from teaching. Conclusions: Currently, there is no coherent national strategy for teaching climate change topics; as a result, the time spent on teaching and the coverage of objectives are unstandardized.

Keywords: audit, climate change, sustainability, education

Procedia PDF Downloads 67
120 Make Populism Great Again: Identity Crisis in Western World with a Narrative Analysis of Donald Trump's Presidential Campaign Announcement Speech

Authors: Soumi Banerjee

Abstract:

In this research paper we will examine Benedict Anderson’s definition of the nation as an imagined community and analyze why and how national identities were created through long and complex processes, and how strong emotional bonds can exist between people within an imagined community, given that these people have never known each other personally but still feel some form of imagined unity. Such identity construction, on the part of an individual or within societies, is always in some sense in a state of flux, as imagined communities are ever changing; this provides the ontological foundation for this paper. This sort of identity crisis among individuals living in the Western world, who are in search of psychological comfort and security, illustrates a possible need for spatially dislocated, ontologically insecure and vulnerable individuals to have a secure identity. To create such an identity there has to be something to build upon, which could be achieved through what may be termed ‘homesteading’. This could, in short, and in our interpretation of Kinnvall and Nesbitt’s concept, be described as a search for security that involves a search for ‘home’, where home acts as a secure place around which one can build an identity. The second half of the paper will then look into how populism and identity have played an increasingly important role in political elections in the so-called Western democracies, using the U.S. as an example. Notions of ‘us and them’, the people and the elites, will be examined and analyzed through a social constructivist theoretical lens. Here we will analyze how such narratives about identity and the nation state affect people, their personality development and identity in different ways by studying U.S. President Donald Trump’s speeches and analyzing if and how he used different identity-creating narratives to gain political and popular support.
Narrative analysis was chosen as the method for this research paper in order to use narratives as a device for understanding how perceived notions of 'us and them' can initiate a profound identity crisis within a community or a nation-state. This is a relevant subject, as developments such as rising populist right-wing movements are being felt in a number of European states, with the so-called Brexit vote in the U.K. and the election of Donald Trump as president being two of the prime examples. This paper will then attempt to argue that these mechanisms are strengthened and gain significance in situations where people in an economically, socially or ontologically vulnerable position, imagined or otherwise, perceive themselves in a general and broad sense to be under pressure, and a sense of insecurity is rising. These insecurities and this sense of being under threat have been on the rise in many of the Western states that are otherwise usually perceived to be among the safest, most democratically stable and most prosperous states in the world, which makes it of interest to study what has changed, and helps provide some part of the explanation as to how creating a ‘them’ in the discourse of national identity can cause a massive security crisis.

Keywords: identity crisis, migration, ontological (in)security, nation-states

Procedia PDF Downloads 230
119 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells

Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska

Abstract:

Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum active ingredient of herbicides (e.g., Roundup) used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was long believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, which results in its occurrence in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can therefore be an important element in determining the toxic effect of glyphosate. Under Regulation (EC) No 1107/2009 of the European Parliament, it is important to assess genotoxicity and cytotoxicity not only for the parent substance but also for its impurities, which are formed at different stages of production of the main substance, glyphosate, and to verify which of these compounds are more toxic. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. However, studies recently performed around the world have produced results that contest the decision taken by the committee of the European Union. The WHO International Agency for Research on Cancer (IARC) decided in March 2015 to change the classification of glyphosate to category 2A, which means that the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals.
That is why we investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) was assessed using single cell gel electrophoresis (the comet assay), along with ATP level. Cells were incubated with glyphosate and its impurities PMIDA and bis-(phosphonomethyl)amine at concentrations from 0.01 to 10 mM for 24 hours. Evaluating genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level was decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentration to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibit genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract-2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.

Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells

Procedia PDF Downloads 463
118 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites

Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu

Abstract:

Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications comes not only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties, but also from its role as a building block and important component in many composite devices, where its addition brings new functionalities to the final composite or strengthens those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of the graphene oxide contributes to the properties of the final product, improving the hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties comparable to the starting material or even worse: the graphene oxide agglomerates become tearing points in the final material if the amount added is too high (greater than 3% of the parent material, measured in mass percentages). Two different types of tests were performed on the obtained materials, the standard hardness test and the standard tensile strength test, both before and after the aging process. For the aging process, accelerated aging was used in order to simulate the effect of natural aging over a long period of time; the accelerated aging was performed in extreme heat.
FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison due to the very small amounts introduced in the final composite, along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength test and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.

Keywords: graphene, graphene oxide, mechanical properties, doping effect

Procedia PDF Downloads 290
117 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil

Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado

Abstract:

Despite the significant advances made by the construction industry in recent years, the entrenched absence of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and integration of information throughout the building lifecycle. This scenario also represents the industry’s reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and postgraduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs are being required to review their traditional design pedagogical processes to promote a comprehensive notion of integration and simultaneity between the phases of the project. In this context, the coherent inclusion of computational design in all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at the Schools of Architecture and Civil Engineering, and the courses offered by well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge among students at the postgraduate level.
The qualitative research methodology was developed based on the analysis of the programs and activities proposed by the two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation and pedagogical practices. The results revealed broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological project development approach relevant to the whole building life cycle. The present research concludes with an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools.

Keywords: building information modeling (BIM), BIM education, BIM process, design teaching

Procedia PDF Downloads 133
116 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
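The corrector pipeline described above (centering, Kaiser-rule reduction, whitening, and a hyperplane test) can be sketched in a few lines. This is a simplified, single-hyperplane illustration on synthetic data under our own assumptions, not the authors' implementation; in particular, the Kaiser rule is applied here as "keep components whose eigenvalue exceeds the mean eigenvalue", and the error cluster is planted artificially.

```python
import numpy as np

def build_corrector(M, Y):
    """Sketch of a shallow corrector: centre the measurements, reduce the
    covariance eigen-spectrum with a Kaiser-style rule, whiten, then fit a
    single hyperplane that separates the error set Y from the correct set M."""
    M, Y = np.asarray(M, float), np.asarray(Y, float)
    X = np.vstack([M, Y])
    mean = X.mean(axis=0)
    Xc = X - mean                                   # centering step

    # Eigen-decomposition of the covariance; keep components whose
    # eigenvalue exceeds the mean eigenvalue (Kaiser-Guttman style rule).
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    keep = evals > evals.mean()
    W = evecs[:, keep] / np.sqrt(evals[keep])       # projection + whitening

    Mw, Yw = (M - mean) @ W, (Y - mean) @ W
    w = Yw.mean(axis=0) - Mw.mean(axis=0)           # hyperplane normal
    w /= np.linalg.norm(w)
    theta = 0.5 * (Yw @ w).mean() + 0.5 * (Mw @ w).mean()  # midpoint threshold

    def is_error(x):
        """True when x falls on the error side of the hyperplane."""
        return float((np.asarray(x, float) - mean) @ W @ w) > theta
    return is_error

# Synthetic demo: correct predictions near the origin, errors shifted away.
rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, size=(200, 5))             # "correct" measurements
Y = rng.normal(0.0, 1.0, size=(20, 5)) + np.array([6.0, 0, 0, 0, 0])
is_error = build_corrector(M, Y)
```

On this synthetic example the flagged set coincides almost exactly with the planted error cluster; the system described in the abstract builds one hyperplane per positively correlated cluster rather than a single discriminant.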

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 83
115 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations

Authors: Ayesha Anjum, Arbaz Basha

Abstract:

Active Galactic Nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the emission region. Some of the major classifications of AGN are (a) blazars, with featureless spectra, subclassified as BL Lacertae objects, Flat Spectrum Radio Quasars (FSRQs), and others; (b) Seyferts, with prominent emission line features, classified into Broad Line and Narrow Line Seyferts of Type 1 and Type 2; and (c) quasars and other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects using automated photometric and spectroscopic methods. A sample of blazars is obtained from the third Fermi catalog. For variability analysis, we searched for light curves for these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 Narrow Line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work is focused on studying the final sample of 256 objects in general and the Seyferts in particular. Owing to the automated classification, SDSS has misclassified some of these objects as quasars, galaxies, and stars; reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both W1 and W2 bands. PMN J0948+0022 is observed between MJD 57154.79 and 58810.57.
PKS 1502+036 is observed between MJD 57232.42 and 58517.11, which amounts to a period of over six years. The data is divided into different epochs, each spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both W1 and W2 bands. This confirms that the objects are variable at mid-infrared wavelengths on both long and short timescales. The sources are also examined for color variability: objects either show a bluer-when-brighter (BWB) trend or a redder-when-brighter (RWB) trend. A possible explanation for the BWB trend shown by the present objects is that the longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission source is one light-day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
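The excess-variance measure used above has a standard form in AGN variability studies: the sample variance of the fluxes minus the mean squared measurement error, normalized by the squared mean flux. A minimal sketch (the flux values below are illustrative, not the paper's data):

```python
import numpy as np

def excess_variance(flux, flux_err):
    """Normalized excess variance: intrinsic variability of a light curve
    beyond what measurement noise alone would produce."""
    flux = np.asarray(flux, float)
    flux_err = np.asarray(flux_err, float)
    mean = flux.mean()
    s2 = flux.var(ddof=1)          # sample variance of the fluxes
    mse = np.mean(flux_err ** 2)   # mean square measurement error
    return (s2 - mse) / mean ** 2  # > 0 means intrinsic variability

def fractional_variability(flux, flux_err):
    """F_var: square root of the normalized excess variance (when positive)."""
    nxs = excess_variance(flux, flux_err)
    return np.sqrt(nxs) if nxs > 0 else 0.0

# Illustrative five-point light curve with constant 1% errors.
nxs = excess_variance([10, 12, 8, 11, 9], [0.1] * 5)
```

A positive value indicates variability in excess of measurement noise; F_var is its square root and is often quoted as a percentage.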

Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE

Procedia PDF Downloads 108
114 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance

Authors: Xiaoyong He

Abstract:

The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Metamaterials (MMs), designed by tailoring the characteristics of their unit cells (meta-molecules), may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). In contrast to the symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it remains difficult to tailor the Fano resonance conveniently once the structural parameters are fixed. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called “trapped modes,” with the merits of a simple structure and high-quality resonances in thin structures.
By depositing graphene circular DR on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level changes over the range 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak blue-shifts. These results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
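The asymmetric line shape that distinguishes a Fano resonance from a Lorentzian follows the standard Fano formula σ(ε) = (q + ε)² / (1 + ε²), where ε is the reduced detuning and q the asymmetry parameter. A short illustrative sketch (the parameter values are ours, not taken from the paper):

```python
import numpy as np

def fano_profile(freq, f0, gamma, q):
    """Classic Fano line shape sigma(e) = (q + e)^2 / (1 + e^2),
    with reduced detuning e = 2 (f - f0) / gamma. The asymmetry
    parameter q controls the departure from a Lorentzian."""
    e = 2.0 * (freq - f0) / gamma
    return (q + e) ** 2 / (1.0 + e ** 2)

# Illustrative values: a resonance at 1.0 THz with 0.05 THz linewidth
# and moderate asymmetry q = 2.
f = np.linspace(0.8, 1.2, 2001)
sigma = fano_profile(f, f0=1.0, gamma=0.05, q=2.0)
```

The profile peaks at ε = 1/q with height 1 + q² and vanishes at ε = −q, producing the characteristic steep swing between maximum and antiresonance that such devices exploit for sensing and modulation.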

Keywords: graphene, metamaterials, terahertz, tunable

Procedia PDF Downloads 327
113 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which can be associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term used for minute solid particles typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, while the smaller and lighter solid particles remain dispersed in the air for a long period and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles can penetrate deep into the alveoli, beyond the body’s natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study by a newly developed nanoparticle monitor, which was used for nanoparticle monitoring at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch cutting, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch cutting and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1054/cm³; 15.4 nm: 170 – 1690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 405
112 Cross Cultural Adaptation and Content Validation of the Assessment Instrument Preschooler Awareness of Stuttering Survey

Authors: Catarina Belchior, Catarina Martins, Sara Mendes, Ana Rita S. Valente, Elsa Marta Soares

Abstract:

Introduction: The negative feelings and attitudes that a person who stutters can develop are extremely relevant when considering assessment and intervention in Speech and Language Therapy. This relates to the fact that the person who stutters can experience feelings such as shame, fear and negative beliefs when communicating. Considering the complexity and importance of integrating diverse aspects in stuttering intervention, it is central to identify those emotions as early as possible. Therefore, this research aimed to achieve the translation and adaptation to European Portuguese, and to analyze the content validation, of the Preschooler Awareness of Stuttering Survey (Abbiati, Guitar & Hutchins, 2015), an instrument that allows the assessment of the impact of stuttering on preschool children who stutter, considering feelings and attitudes. Methodology: Cross-sectional descriptive qualitative research. The following methodological procedures were followed: translation, back-translation, panel of experts and pilot study. This abstract describes the results of the first three phases of this process. The translation was accomplished by two Speech and Language Therapists (SLTs). Both professionals have more than five years of experience and are users of the English language. One of them has broad experience in the field of stuttering. Back-translation was conducted by two bilingual individuals without experience in health or any knowledge about the instrument. The panel of experts was composed of three SLTs, experts in the field of stuttering. Results and Discussion: In the translation and back-translation process, it was possible to verify differences in the semantic and idiomatic equivalences of several concepts and expressions, as well as the need to include new information to enhance understanding of the application of the instrument. The meeting between the two translators and the researchers allowed a consensus version to be reached, which was then used in the back-translation.
Considering adaptation and content validation, the main change made by the experts concerned the conceptual equivalence of the questions and answers on the instrument's sheets. Considering that in the translated consensus version the questions began with various words such as 'is' or 'the cow', and that the answers did not contain the adverb 'much' as in the original instrument, the panel agreed that it would be more appropriate if the questions all started with 'how' and all the answers included the adverb 'much'. This decision was made to ensure that the translated instrument would be similar to the original, so that results obtained with the original and the translated instrument could be compared. One semantic equivalence between concepts was also established. The panel of experts found all other items and specificities of the instrument adequate, concluding that the instrument is appropriate to its objectives and intended target population. Conclusion: This research aspires to diversify the existing validated resources in this scope, adding a new instrument that allows the assessment of preschool children who stutter. Consequently, it is hoped that this instrument will provide a real and reliable assessment that can lead to an appropriate therapeutic intervention according to the characteristics and needs of each child.

Keywords: stuttering, assessment, feelings and attitudes, speech language therapy

Procedia PDF Downloads 123
111 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can either serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be lurking behind OpenAI’s seemingly endless responses an inventive model yet to be uncovered. Some unforeseen reasoning may emerge from the interconnection of neural networks here. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means to solve societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the 40 inventive principles of TRIZ.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, namely that ChatGPT uses TRIZ, we will follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Problem solving remains, of course, the main focus of our endeavors.

Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 39