Search results for: operating point
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6949

1369 Exploring Emerging Viruses From a Protected Reserve

Authors: Nemat Sokhandan Bashir

Abstract:

Threats from viruses to agricultural crops can be even greater than the losses caused by other pathogens because, in many cases, viral infection is latent yet crucial from an epidemic point of view. Wild vegetation can be a source of many viruses that eventually find their way into crop plants. Although often asymptomatic in wild plants due to adaptation, these viruses can cause serious losses in crops. Therefore, exploring viruses in wild vegetation is very important. Recently, omics approaches have proven quite useful for exploring plant viruses from various plant sources, especially wild vegetation. For instance, we have discovered viruses such as Ambrosia asymptomatic virus 1 (AAV-1) through the application of metagenomics at the Oklahoma Prairie Reserve. In this approach, extracts from randomly sampled plants are subjected to high-speed and ultracentrifugation to separate virus-like particles (VLPs); nucleic acids, in the form of DNA or RNA, are then extracted from the VLPs by treatment with phenol-chloroform and subsequent precipitation with ethanol. The nucleic acid preparations are treated separately with RNase or DNase in order to determine the genome type of the VLPs. For RNA genomes, complementary DNAs (cDNAs) are synthesized before submission to DNA sequencing, whereas for VLPs with DNA contents the procedure is relatively straightforward, without cDNA synthesis. Because the length of the nucleic acid content of VLPs can differ, various strategies are employed to achieve sequencing; techniques similar to so-called "chromosome walking" may be used to obtain sequences of long segments. Once nucleotide sequence data were obtained, they were subjected to BLAST analysis to identify the most closely related previously reported virus sequences. In one case, we determined that the novel virus was AAV-1 because sequence comparison and analysis revealed that the reads were closest to Indian citrus ringspot virus (ICRSV).
AAV-1 had an RNA genome 7,408 nucleotides in length and contained six open reading frames (ORFs). Based on phylogenies inferred from the replicase and coat protein ORFs, the virus was placed in the genus Mandarivirus.
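The nuclease-digestion step that assigns a genome type to each VLP preparation can be sketched as a small decision routine (an illustrative sketch only; the function name and flags are hypothetical, not part of the authors' pipeline):

```python
def genome_type(rnase_degraded: bool, dnase_degraded: bool) -> str:
    """Infer the VLP genome type from nuclease-treatment results.

    A nucleic acid destroyed by RNase but not DNase is RNA, and vice
    versa; sensitivity to both (or neither) leaves the call ambiguous.
    """
    if rnase_degraded and not dnase_degraded:
        return "RNA"  # requires cDNA synthesis before sequencing
    if dnase_degraded and not rnase_degraded:
        return "DNA"  # can be sequenced directly, without cDNA
    return "ambiguous"

# An RNase-sensitive preparation, as for the 7,408-nt RNA genome of AAV-1
print(genome_type(rnase_degraded=True, dnase_degraded=False))  # RNA
```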

Keywords: wild, plant, novel, metagenomics

Procedia PDF Downloads 65
1368 Fischer-Tropsch Synthesis in Compressed Carbon Dioxide with Integrated Recycle

Authors: Kanchan Mondal, Adam Sims, Madhav Soti, Jitendra Gautam, David Carron

Abstract:

Fischer-Tropsch (FT) synthesis is a complex series of heterogeneous reactions between CO and H2 molecules (present in the syngas) on the surface of an active catalyst (Co, Fe, Ru, Ni, etc.) to produce gaseous, liquid, and waxy hydrocarbons. The product is composed of paraffins, olefins, and oxygenated compounds. The key challenge in applying the Fischer-Tropsch process to produce transportation fuels is to make the capital and production costs economically feasible relative to the cost of existing petroleum resources. To meet this challenge, it is imperative to enhance the CO conversion while maximizing carbon selectivity towards the desired liquid hydrocarbon ranges (i.e., reducing CH4 and CO2 selectivities) at high throughputs. At the same time, it is equally essential to increase catalyst robustness and longevity without sacrificing catalyst activity. This paper focuses on process development to achieve the above. It describes the influence of operating parameters on Fischer-Tropsch synthesis (FTS) from coal-derived syngas in supercritical carbon dioxide (ScCO2). In addition, recycle of the unreacted gas and solvent was incorporated, and the effect of unreacted feed recycle was evaluated. It was expected that, with the recycle, the feed rate could be increased. The increase in conversion and liquid selectivity, accompanied by a narrower carbon number distribution in the product, suggests that higher flow rates can and should be used when incorporating exit-gas recycle. The process was capable of enhancing the hydrocarbon selectivity (nearly 98% CO conversion), improving the carbon efficiency from 17% to 51% in a once-through process, further converting 16% of the CO2 to liquid with integrated recycle of the product gas stream, and increasing the life of the catalyst.
Catalyst robustness enhancement is attributed to the absorption of the heat of reaction by the compressed CO2, which reduced the formation of hotspots, and to the dissolution of waxes by the CO2 solvent, which reduced the blinding of active sites. In addition, recycling the product gas stream reduced the reactor footprint to one-fourth of the once-through size, and product fractionation utilizing the solvent effects of supercritical CO2 was realized. Along with the negative CO2 selectivities, methane production was also inhibited, limited to less than 1.5%. The effect of the process conditions on the life of the catalysts will also be presented. Fe-based catalysts are known to have a high proclivity for producing CO2 during FTS. Data on the product spectrum and selectivity of Co- and Fe-Co-based catalysts, as well as catalysts obtained from commercial sources, will also be presented. The measurable decision criteria were the increase in CO conversion at an H2:CO ratio of 1:1 (as commonly found in coal gasification product streams) in the supercritical phase as compared to the gas-phase reaction, the decrease in CO2 and CH4 selectivity, the overall liquid product distribution, and an increase in the life of the catalysts.
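As a back-of-the-envelope illustration of the carbon bookkeeping behind these figures, carbon efficiency can be approximated as CO conversion times carbon selectivity to liquids (a sketch under our own simplifying assumptions; the 52% selectivity figure is hypothetical, chosen only to reproduce the reported ~51% once-through efficiency):

```python
def carbon_efficiency(co_conversion: float,
                      liquid_selectivity: float) -> float:
    """Fraction of carbon fed as CO that ends up in liquid hydrocarbons.

    co_conversion      -- fraction of CO converted (0-1)
    liquid_selectivity -- carbon selectivity to liquids among the
                          converted CO (0-1)
    """
    return co_conversion * liquid_selectivity

# Hypothetical illustration: 98% CO conversion with ~52% carbon
# selectivity to liquids gives roughly 51% carbon efficiency.
print(round(carbon_efficiency(0.98, 0.52), 2))  # 0.51
```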

Keywords: carbon efficiency, Fischer-Tropsch synthesis, low GHG, pressure tunable fractionation

Procedia PDF Downloads 233
1367 Speaking of Genocide: Lithuanian ‘Occupation’ Museums and Foucault's Discursive Formation

Authors: Craig Wight

Abstract:

Tourism visits to sites associated to varying degrees with death and dying have for some time inspired academic debate and research into what has come to be popularly described as ‘dark tourism’. Research to date has been based on the mobilisation of various social scientific methodologies to understand issues such as the motivations of visitors to consume dark tourism experiences and visitor interpretations of the various narratives that are part of the consumption experience. This thesis offers an alternative conceptual perspective for carrying out research into dark tourism by presenting a discourse analysis of Lithuanian occupation-themed museums using Foucault’s concept of ‘discursive formation’ from ‘The Archaeology of Knowledge’. A constructivist methodology is therefore applied to locate the rhetorical representations of Lithuanian and Jewish subject positions and to identify the objects of discourse that are produced in five museums that interpret a historical era defined by occupation, the persecution of people, and genocide. The discourses and consequent cultural function of these museums are examined, and the key finding of the research proposes that they authorise a particular Lithuanian individualism which marginalises the Jewish subject position and its related objects of discourse into abstraction. The thesis suggests that these museums create the possibility of undermining the ontological stability of the Holocaust and of the Jewish-Lithuanian subject, which is produced as an anomalous, ‘non-Lithuanian’ cultural reference point. As with any Foucauldian archaeological research, it cannot be offered as something ‘complete’, since it captures only a partial field, or snapshot of knowledge, bound to a specific temporal and spatial context. The discourses that have been identified are perhaps part of a more elusive ‘positivity’ salient across a number of cultural and political surfaces that are ripe for a similar analytical approach in future.
It is hoped that the study will motivate others to follow a discourse-analytical approach to research in order to further understand the critical role of museums in public culture when it comes to shaping knowledge about ‘inconvenient’ pasts.

Keywords: genocide heritage, Foucault, Lithuanian tourism, discursive formation

Procedia PDF Downloads 225
1366 Metal Extraction into Ionic Liquids and Hydrophobic Deep Eutectic Mixtures

Authors: E. E. Tereshatov, M. Yu. Boltoeva, V. Mazan, M. F. Volia, C. M. Folden III

Abstract:

Room temperature ionic liquids (RTILs) are a class of liquid organic salts with melting points below 20 °C that are considered to be environmentally friendly ‘designer’ solvents. Pure hydrophobic ILs are known to extract metallic species from aqueous solutions. The closest analogues of ionic liquids are deep eutectic solvents (DESs), eutectic mixtures of at least two compounds with a melting point lower than that of each individual component. DESs are acknowledged to be attractive for organic synthesis and metal processing. Thus, these non-volatile and less toxic compounds are of interest for critical metal extraction. The US Department of Energy and the European Commission consider indium a key metal. Its chemical homologue, thallium, is also an important material for some applications and for environmental safety. The aim of this work is to systematically investigate In and Tl extraction from aqueous solutions into pure fluorinated ILs and hydrophobic DESs. The dependence of the Tl extraction efficiency on the structure and composition of the ionic liquid ions, the metal oxidation state, and the initial metal and aqueous acid concentrations has been studied. The extraction efficiency of the anionic TlXz(3−z) species (where X = Cl− and/or Br−) is greater for ionic liquids with more hydrophobic cations. Unexpectedly high distribution ratios (> 10³) of Tl(III) were determined even when a pure ionic liquid was applied as the receiving phase. An improved mathematical model based on ion exchange and ion pair formation mechanisms has been developed to describe the co-extraction of two different anionic species, and the relative contributions of each mechanism have been determined. The first evidence of indium extraction into new quaternary ammonium- and menthol-based hydrophobic DESs from hydrochloric and oxalic acid solutions, with distribution ratios up to 10³, will be provided.
Data obtained allow us to interpret the mechanism of thallium and indium extraction into ILs and DESs media. The understanding of Tl and In chemical behavior in these new media is imperative for the further improvement of separation and purification of these elements.
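For reference, the distribution ratios quoted above follow the standard solvent-extraction definitions; the sketch below (function names and the equal-phase-volume default are our illustrative assumptions) shows how a distribution ratio of 10³ translates into near-quantitative extraction:

```python
def distribution_ratio(c_org: float, c_aq: float) -> float:
    """D = equilibrium metal concentration in the organic (IL/DES)
    phase divided by that in the aqueous phase."""
    return c_org / c_aq

def extraction_percent(D: float, phase_ratio: float = 1.0) -> float:
    """%E = 100*D / (D + V_aq/V_org); phase_ratio is V_aq/V_org."""
    return 100.0 * D / (D + phase_ratio)

# A distribution ratio of 10^3 (as reported for Tl(III)) at equal
# phase volumes corresponds to >99.9% extraction in a single contact.
print(round(extraction_percent(1e3), 2))  # 99.9
```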

Keywords: deep eutectic solvents, indium, ionic liquids, thallium

Procedia PDF Downloads 233
1365 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model

Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski

Abstract:

Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide: some higher education institutions excel in these practices, while many others could be qualified as intermediate, or are situated at the very beginning of their knowledge transfer adventure. The latter face the imminent necessity to formally create a technology transfer unit and to draw up its roadmap. The complexity of this operation is due to various aspects that need to be aligned and coordinated, including a major change in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents five case studies of higher education institutions located in the Western Balkans (two in Albania, two in Bosnia and Herzegovina, one in Montenegro), all fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer units. The empirical evidence was developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving more experienced and more mature technology transfer models from European partners located in Spain, Finland, and Austria, a series of initial lessons learned are already available. The findings show that each unit developed a tailor-made approach to engage with internal and external stakeholders and to offer value to academic staff, students, and business partners.
Institutional commitment and the technology underpinning KnowHub services are found to be key success factors. Although specific strategies and plans differ, they are based on a general strategy jointly developed with common tools and methods of strategic planning and business modelling. The main output consists of good practices for the design, piloting, and initial operations of units aiming to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution lies in its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has relied on a single topic or dual approaches, most frequently in the business context and less frequently in higher education.

Keywords: business model, capacity building, entrepreneurial education, knowledge transfer

Procedia PDF Downloads 133
1364 Some Characteristics Based on Literature, for an Ideal Disinfectant

Authors: Saimir Heta, Ilma Robo, Rialda Xhizdari, Kers Kapaj

Abstract:

The stability of an ideal disinfectant should remain constant regardless of changes in the atmospheric conditions of the environment where it is kept. If conditions such as temperature or humidity change, corresponding changes may be needed in the holding materials, such as plastic or glass bottles, with the aim of protecting the disinfectant from, for example, excessive ambient lighting, which can also translate into an increase in the temperature of the disinfectant as a fluid. Material and Methods: In this study, an attempt was made to find the most recent published data on the best possible combination of disinfectants indicated for use after dental procedures. This purpose was realized by comparing the basic literature studied by dentistry students with the most recent data published on this topic. Each disinfectant is represented by a number, here called the disinfectant constant; different factors can increase or reduce the variables whose product remains a statistic specific to a given disinfectant. Results: Changes in the atmospheric conditions where the disinfectant is deposited and stored are known to affect its stability as a fluid; this fact is known and even cited in the leaflets accompanying manufactured boxes of disinfectants. These precautions, given in the form of advice, concern not only the preservation of the disinfectant but also its application, in order to obtain the desired clinical result. Aldehydes have the highest constant among the types of disinfectants, followed by acids. The lowest value of the constant belongs to the class of glycols, preceded by the halogens, a class with some representatives used in disinfection applications. The classes of phenols and acids have almost the same ranges of constants.
Conclusions: If the goal were to find the ideal disinfectant among the large variety of disinfectants produced, a good starting point would be a fixed, unchanging element on the basis of which the properties of different disinfectants can be compared. The results of this study highlight precisely the role of the constant specific to each disinfectant. Finding an ideal disinfectant, like finding the ideal medication or antibiotic, is an ongoing but unattainable goal.

Keywords: different disinfectants, ideal, specific constant, dental procedures

Procedia PDF Downloads 62
1363 Studies on Organic and Inorganic Micro/Nano Particle Reinforced Epoxy Composites

Authors: Daniel Karthik, Vijay Baheti, Jiri Militky, Sundaramurthy Palanisamy

Abstract:

Fibre-based nanoparticles are presently considered one of the potential filler materials for improving the mechanical and physical properties of polymer composites. Due to the high matrix-filler interfacial area, uniform and homogeneous dispersion of nanoparticles can be achieved. In micro/nano filler reinforced composites, the resin is usually tailored with organic or inorganic nanoparticles to improve matrix properties. The objective of this study was to compare the reinforcement potential of different organic and inorganic micro/nano fillers in epoxy composites. Industrial and agricultural waste fibres of Agave americana, cornhusk, jute, basalt, carbon, and glass, as well as fly ash, were utilized to prepare micro/nano particles. The particles were obtained by high-energy planetary ball milling in dry conditions; milling time and ball size were kept constant throughout the process. Composites were fabricated by the hand lay-up method, with filler loading kept constant at 3 wt.% for all composites. Dynamic mechanical properties of the nanocomposite films were measured in three-point bending mode with a gauge length of 50 mm and a sample width of 10 mm. The samples were subjected to oscillating frequencies of 1 Hz, 5 Hz, and 10 Hz at 100% oscillating amplitude over a temperature range of 30 °C to 150 °C at a heating rate of 3 °C/min. Damping was found to be highest in the jute composites. Among the organic fillers, the lowest damping factor was observed with Agave americana particles, indicating that Agave americana fibre particles have better interfacial adhesion with the epoxy resin. Basalt, fly ash, and glass particles had almost similar damping factors, confirming better interfacial adhesion with the epoxy.
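The damping comparison rests on the loss tangent obtained from DMA, tan δ = E″/E′ (loss modulus over storage modulus); a minimal sketch, with hypothetical moduli rather than values measured in the study:

```python
def damping_factor(storage_modulus: float, loss_modulus: float) -> float:
    """tan(delta) = E''/E' from dynamic mechanical analysis; a higher
    value means more viscous energy dissipation, often indicating
    weaker matrix-filler interfacial adhesion."""
    return loss_modulus / storage_modulus

# Hypothetical moduli (GPa) for two filled epoxies: weaker
# matrix-filler adhesion shows up as a larger tan(delta).
good_adhesion = damping_factor(storage_modulus=3.2, loss_modulus=0.16)
poor_adhesion = damping_factor(storage_modulus=2.8, loss_modulus=0.35)
print(good_adhesion < poor_adhesion)  # True
```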

Keywords: ball milling, damping factor, matrix-filler interface, particle reinforcements

Procedia PDF Downloads 259
1362 Small and Medium-Sized Enterprises, Flash Flooding and Organisational Resilience Capacity: Qualitative Findings on Implications of the Catastrophic 2017 Flash Flood Event in Mandra, Greece

Authors: Antonis Skouloudis, Georgios Deligiannakis, Panagiotis Vouros, Konstantinos Evangelinos, Ioannis Nikolaou

Abstract:

On November 15th, 2017, a catastrophic flash flood devastated the city of Mandra in Central Greece, resulting in 24 fatalities and extensive damage to the built environment and infrastructure. It was Greece's deadliest and most destructive flood event of the past 40 years. In this paper, we examine the consequences of this event for small and medium-sized enterprises (SMEs) operating in Mandra during the flood, which were affected by the floodwaters to varying extents. In this context, we conducted semi-structured interviews with the owners-managers of 45 SMEs located in the flood-inundated areas and still active today, based on an interview guide spanning 27 topics. The topics pertained to the disaster experience of the business and its owners-managers, knowledge of and attitudes towards climate change and extreme weather, and aspects of disaster preparedness and related assistance needs. Our findings reveal that the vast majority of the affected businesses experienced heavy damage to equipment and infrastructure, or total destruction, resulting in business interruption of several weeks up to several months. Assistance from relatives or friends helped with the damage repairs and business recovery, while state compensations were deemed insufficient compared to the extent of the damage. Most interviewees pinpoint flooding as one of the most critical risks, and many connect it with the climate crisis. However, they are either unwilling or unable to apply property-level prevention measures in their businesses due to cost considerations or complex and cumbersome bureaucratic processes. In all cases, the business owners are fully aware of the flood hazard implications, and since recovering from the event they have engaged in basic mitigation measures and contingency plans for future flood events.
Such plans include insurance contracts whenever possible (as the vast majority of the affected SMEs were uninsured at the time of the 2017 event) as well as simple relocations of critical equipment within their property. The study offers fruitful insights on latent drivers and barriers of SMEs' resilience capacity to flash flooding. In this respect, findings such as ours, highlighting tensions that underpin behavioral responses and experiences, can feed into a) bottom-up approaches for devising actionable and practical guidelines, manuals and/or standards on business preparedness to flooding, and, ultimately, b) policy-making for an enabling environment towards a flood-resilient SME sector.

Keywords: flash flood, small and medium-sized enterprises, organizational resilience capacity, disaster preparedness, qualitative study

Procedia PDF Downloads 124
1361 Student Participation in Higher Education Quality Assurance Processes

Authors: Tomasz Zarebski

Abstract:

A very important element of the education system is its evaluation procedure. Each education system should be systematically evaluated and improved. Among the criteria subject to evaluation, attention should be paid to the following: the structure and implementation of the study programme; admission to studies; verification of the achievement of learning outcomes by students, the awarding of credit for individual semesters and years, and the awarding of diplomas; the competence, experience, qualifications, and number of staff providing education; staff development and in-service training; education infrastructure; cooperation with social and economic stakeholders on programme development; the conditions for and methods of improving the internationalisation of education provided as part of the degree programme; support for the learning, social, academic, or professional development of students and their entry onto the labour market; and public access to information about the study programme and the quality assurance policy. Concerning the assessment process and the individual assessment indicators, the participation of students in these processes is essential. The purpose of this paper is to analyse the rules of student participation in accreditation processes using the example of individual countries in Europe. The rules of students' participation in the work of accreditation committees and their influence on the final grade of the committee were analysed. Most higher education institutions follow similar rules for accreditation. The general model gives the individual institution freedom to organize its own quality assurance, as long as the system lives up to the criteria for quality and relevance laid down in the relevant provisions. This point also applies to students.
The regulations of the following countries were examined in the legal-comparative aspect: Poland (Polish Accreditation Committee), Denmark (The Danish Accreditation Institution), France (High Council for the Evaluation of Research and Higher Education), Germany (Agency for Quality Assurance through Accreditation of Study Programmes) and Italy (National Agency for the Evaluation of Universities and Research Institutes).

Keywords: accreditation, student, study programme, quality assurance in higher education

Procedia PDF Downloads 155
1360 Climbing up to Safety and Security: The Facilitation of an NGO Awareness Culture

Authors: Mirad Böhm, Diede De Kok

Abstract:

It goes without saying that for many NGOs safety and security are crucial issues, which often necessitate the support of military personnel to varying degrees. The relationship between military and NGO personnel is usually a difficult one, and while there has been progress, clashes naturally still occur, owing to different interpretations of mission objectives among many other challenges. NGOs tend to view safety and security as necessary steps towards their goal instead of fundamental pillars of their core ‘business’. The military perspective, however, considers them primary objectives, thus frequently creating a different vision of how joint operations should be conducted. This paper will argue that internalizing safety and security into the NGO organizational culture is compelling in order to ensure more effective cooperation with military partners and, ultimately, to achieve their goals. This can be accomplished through a change in perception that treats safety and security as a fixed and major point on the everyday agenda. Nowadays, several training programmes on offer address such issues, but they primarily focus on the individual level. True internalization of these concepts should reach further by encompassing a wide range of NGO activities, beginning with daily proceedings in office facilities far from conflict zones, including logistical and administrative tasks such as budgeting, and leading all the way to actual and potentially hazardous missions in the field. In order to effectuate this change, a tool is required to help NGOs realize, firstly, how they perceive and define safety and security, and secondly, how they can adjust this perception to their benefit. The ‘safety culture ladder’ is a concept that suggests what organizations can and should do to advance their safety.
While usually applied to private industrial scenarios, this work will present the concept as a useful instrument to visualize and facilitate the internalization process NGOs ought to go through. The ‘ladder’ allows them to become more aware of the level of their safety and security measures, and moreover, cautions them to take these measures proactively rather than reactively. This in turn will contribute to a rapprochement between military and NGO priority setting in regard to what constitutes a safe working environment.

Keywords: NGO-military cooperation, organisational culture, safety and security awareness, safety culture ladder

Procedia PDF Downloads 316
1359 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation

Authors: Alaa Hamed Salama, Rehab Nabil Shamma

Abstract:

Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have recently attracted much scientific attention. The aim of this study was to review the aqueous-based formulation of drug-loaded polymeric micelles that hold significant promise for ophthalmic drug delivery. This study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for the more effective encapsulation of lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization process of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 and the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: The results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, a greater than nine-fold increase. An optimized formulation was selected to achieve maximum drug solubilizing power and clarity with the lowest possible particle size. The optimized formulation was characterized by ¹H NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated, allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.

Keywords: confocal laser scanning microscopy, Histopathological studies, Lornoxicam, micellar solubilization

Procedia PDF Downloads 443
1358 Treatment of Non-Small Cell Lung Cancer (NSCLC) With Activating Mutations Considering ctDNA Fluctuations

Authors: Moiseenko F. V., Volkov N. M., Zhabina A. S., Stepanova E. O., Kirillov A. V., Myslik A. V., Artemieva E. V., Agranov I. R., Oganesyan A. P., Egorenkov V. V., Abduloeva N. H., Aleksakhina S. Yu., Ivantsov A. O., Kuligina E. S., Imyanitov E. N., Moiseyenko V. M.

Abstract:

ctDNA is an emerging biomarker in patients with NSCLC. Multiple research efforts involving quantitative, or at least qualitative, analysis before and during the first period of treatment with TKIs have shown the prognostic value of ctDNA clearance. Still, these important results are not yet incorporated into clinical standards. We evaluated the role of ctDNA in EGFR-mutated NSCLC receiving first-line TKIs. Firstly, we analyzed sequential plasma samples from 30 patients, collected before intake of the first tablet (at baseline) and at 6, 12, 24, 36, and 48 hours after this starting point. The EGFR-M+ allele was measured by ddPCR. Afterward, we included sequential qualitative analysis of ctDNA with the cobas® EGFR Mutation Test v2 in 99 NSCLC patients before the first dose, after 2 and 4 months of treatment, and on progression. Early response analysis showed a decline of the EGFR-M+ level in plasma within the first 48 hours of treatment in 11 subjects, all of whom showed an objective tumor response. Ten patients showed either elevation of the EGFR-M+ plasma concentration (n = 5) or stable content of circulating EGFR-M+ after the start of therapy (n = 5); only 3 of these patients achieved an objective response (p = 0.026 compared to the former group). The rapid decline of plasma EGFR-M+ DNA concentration also predicted longer PFS (13.7 vs. 11.4 months, p = 0.030). Long-term ctDNA monitoring showed clinically significant heterogeneity of EGFR-mutated NSCLC treated with 1st-line TKIs in terms of progression-free and overall survival. Patients without detectable ctDNA at baseline (N = 32) had the best prognosis (PFS: 24.07 [16.8-31.3] and OS: 56.2 [21.8-90.7] months). Those who achieved clearance after two months of TKI treatment (N = 42) had indistinguishably good PFS (19.0 [13.7-24.2] months). Individuals who retained ctDNA after 2 months (N = 25) had the worst prognosis (PFS: 10.3 [7.0-13.5] months, p < 0.001).
9/25 patients did not develop ctDNA clearance at 4 months with no statistical difference in PFS from those without clearance at 2 months. Prognostic heterogeneity of EGFR-mutated NSCLC should be taken into consideration in planning further clinical trials and optimizing the outcomes of patients.
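The ddPCR quantification used here conventionally relies on Poisson statistics over droplet partitions; a minimal sketch of that calculation (the droplet counts and the ~0.85 nL droplet volume below are illustrative assumptions, not data from this study):

```python
import math

def ddpcr_copies_per_partition(n_positive: int, n_total: int) -> float:
    """Mean target copies per droplet from a ddPCR run, using the
    Poisson correction lambda = -ln(fraction of negative droplets)."""
    negative_fraction = (n_total - n_positive) / n_total
    return -math.log(negative_fraction)

# Hypothetical run: 1,200 positive droplets out of 18,000 total.
lam = ddpcr_copies_per_partition(1200, 18000)
# Convert to copies/uL of reaction, assuming ~0.85 nL per droplet.
copies_per_ul = lam * 1000 / 0.85
print(round(lam, 4))  # 0.069
```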

Keywords: NSCLC, EGFR, targeted therapy, ctDNA, prognosis

Procedia PDF Downloads 43
1357 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation

Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov

Abstract:

The problem of ensuring the integrity of VVER-type reactor equipment is now highly topical in connection with the safety justification of NPP units and the extension of their service life to 60 years and more. First of all, this concerns older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimate of the degree of equipment damage. One of the mandatory requirements providing the reliability of such estimates, and also of evaluations of VVER equipment lifetime, is the monitoring of equipment radiation loading parameters. In this connection, there is a problem of justifying the normative parameters used to estimate pressure vessel metal embrittlement, namely the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacements per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, the neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, DPA is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on VVER equipment should be under control, and to give reasonable estimates of these parameters over the whole volume of the equipment. The second task was to give a conservative estimate of each parameter, including its uncertainty.
Results of recently obtained investigations allow the conservatism of calculational predictions to be tested, and, as shown in the paper, combining ex-vessel measured data with calculated data makes it possible to assess unpredicted uncertainties arising from the specific, unique features of the individual equipment of a VVER reactor. Some results of these calculational-experimental investigations are presented in this paper.
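As a rough illustration of how DPA relates to fluence, an NRT-style group-wise estimate folds the neutron flux spectrum with a displacement cross-section. The sketch below is a toy calculation with illustrative numbers, not VVER data or the authors' method:

```python
# Displacement-per-atom (DPA) estimate from a multigroup neutron flux:
#   dpa = t * sum_g( phi_g * sigma_d_g )
# where phi_g is the group flux and sigma_d_g a group displacement
# cross-section. All numeric values below are illustrative only.

def dpa_estimate(flux_groups, sigma_d_groups, seconds):
    """flux_groups: group fluxes [n/cm^2/s];
    sigma_d_groups: displacement cross-sections [cm^2] per group;
    seconds: irradiation time."""
    rate = sum(phi * sig for phi, sig in zip(flux_groups, sigma_d_groups))
    return rate * seconds

# Two-group toy spectrum: fast and intermediate neutrons
flux = [1.0e11, 5.0e10]        # n/cm^2/s (toy values)
sigma_d = [2.0e-21, 5.0e-22]   # cm^2 (toy displacement cross-sections)
year = 3.15e7                  # seconds in one year (approx.)
print(f"dpa per year ~ {dpa_estimate(flux, sigma_d, year):.3e}")
```

In practice the fast-fluence criterion F(E > 0.5 MeV) discards the spectrum shape that the DPA integral retains, which is why the two parameters can rank the same locations differently.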

Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations

Procedia PDF Downloads 150
1356 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and to lead to system compromise, data leakage, or denial of service. C and C++ open-source code are now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as features, but they require higher execution time, as the word embedding algorithm adds complexity to the overall system.
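The "minimal intermediate representation" step can be sketched as stripping comments and normalizing user-defined identifiers so the model sees code structure rather than arbitrary names. This is a hypothetical simplification for illustration, not the authors' pipeline:

```python
import re

# Toy intermediate representation for C-like source: remove comments,
# map user identifiers to VAR0, VAR1, ..., and abstract numeric literals.
C_KEYWORDS = {"int", "char", "if", "else", "for", "while", "return",
              "void", "sizeof", "struct", "unsigned"}

def to_ir(source: str) -> list:
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.S)  # block comments
    source = re.sub(r"//[^\n]*", " ", source)               # line comments
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
    ir, names = [], {}
    for tok in tokens:
        if re.match(r"[A-Za-z_]\w*$", tok) and tok not in C_KEYWORDS:
            names.setdefault(tok, f"VAR{len(names)}")  # normalize identifiers
            ir.append(names[tok])
        elif tok.isdigit():
            ir.append("NUM")                           # abstract literals
        else:
            ir.append(tok)
    return ir

print(to_ir("int copy(char *dst) { strcpy(dst, input); // risky\n }"))
```

The resulting token sequence would then be mapped to embedding vectors (GloVe, fastText) before being fed to the sequence model.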

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 75
1355 Analysis of Radiation-Induced Liver Disease (RILD) and Evaluation of Relationship between Therapeutic Activity and Liver Clearance Rate with Tc-99m-Mebrofenin in Yttrium-90 Microspheres Treatment

Authors: H. Tanyildizi, M. Abuqebitah, I. Cavdar, M. Demir, L. Kabasakal

Abstract:

Aim: Whole-liver radiation has a modest benefit in the treatment of unresectable hepatic metastases, but the radiation doses must be kept under control; otherwise, RILD complications may arise. In this study, we aimed to calculate the maximum permissible activity (MPA) and critical organ absorbed doses with MIRD methodology, to evaluate tumour doses for treatment response and whole-liver doses for RILD, and additionally to find the optimal liver function test. Materials and Methods: This study includes 29 patients who attended our nuclear medicine department for Y-90 microsphere treatment. 10 mCi of Tc-99m MAA was administered to the patients intravenously for dosimetry. After the injection, whole-body SPECT/CT images were taken within one hour. Taking the minimum therapeutic tumour dose to be 120 Gy, the activities were calculated with MIRD methodology considering the volumetric tumour/liver rate. A sub-working group of 11 randomly selected patients was created, and the liver clearance rate with Tc-99m-mebrofenin was calculated according to the Ekman formalism. Results: The volumetric tumour/liver rate was between 33-66% (Maximum Tolerable Dose (MTD) 48-52 Gy) for 4 patients and less than 33% (MTD 72 Gy) for 25 patients. According to these results, the average amount of activity, mean liver dose, and mean tumour dose were 1793.9±1.46 MBq, 32.86±0.19 Gy, and 138.26±0.40 Gy, respectively. RILD was not observed in any patient. In the sub-working group, the correlations between the calculated activity amounts and bilirubin, albumin, INR (which show the presence of liver disease and its degree), and liver clearance with Tc-99m-mebrofenin were r = 0.49, r = 0.27, r = 0.43, and r = 0.57, respectively. Discussion: The minimum tumour dose was taken as 120 Gy for a positive dose-response relation. If the volumetric tumour/liver rate was > 66%, the dose was 30 Gy; if 33-66%, the dose was escalated to 48 Gy; if < 33%, to 72 Gy.
These dose limitations did not cause RILD. Clearance measurement with mebrofenin was concluded to be the best method to determine liver function. Therefore, the liver clearance rate with Tc-99m-mebrofenin should be considered in Y-90 microsphere dosimetry calculations.
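The MIRD-style activity calculation can be sketched using the commonly cited Y-90 relation D[Gy] ~ 49.67 x A[GBq] / m[kg], which assumes complete local absorption of the beta energy. The constant and the example numbers below are illustrative, not the study's patient data:

```python
# Sketch of a MIRD-style activity/dose calculation for Y-90 microspheres.
E_PER_GBQ = 49.67  # J deposited per GBq of Y-90 (commonly cited value)

def activity_for_dose(target_dose_gy, mass_kg):
    """Activity (GBq) needed to deliver target_dose_gy to a compartment
    of mass_kg, assuming all decay energy is absorbed locally."""
    return target_dose_gy * mass_kg / E_PER_GBQ

def dose_from_activity(activity_gbq, mass_kg):
    """Mean absorbed dose (Gy) in a compartment of mass_kg."""
    return E_PER_GBQ * activity_gbq / mass_kg

# Example: 120 Gy to a hypothetical 0.5 kg tumour compartment
a = activity_for_dose(120.0, 0.5)
print(f"required activity ~ {a:.2f} GBq")
print(f"check: dose = {dose_from_activity(a, 0.5):.1f} Gy")
```

A full partition-model calculation would additionally split the activity between tumour, normal liver, and lung according to the Tc-99m MAA distribution; this sketch shows only the single-compartment step.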

Keywords: clearance, dosimetry, liver, RILD

Procedia PDF Downloads 427
1354 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act

Authors: Sevcan Ozturk

Abstract:

The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism,' and Muhammad Iqbal, 'the spiritual father of Pakistan,' present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming genuine religious personalities, and develop a notion of the self. While doing this, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions, mainly because, although he writes most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of Iqbal's way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard's method to Iqbal's use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will include an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication,' a crucial term of his existentialist philosophy.
Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication,' and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.

Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard

Procedia PDF Downloads 318
1353 Evaluation of Teaching Performance in Higher Education: From the Students' Responsibility to Their Evaluative Competence

Authors: Natacha Jesus-Silva, Carla S. Pereira, Natercia Durao, Maria Das Dores Formosinho, Cristina Costa-Lobo

Abstract:

Any assessment process, by its very nature, raises a wide range of doubts, uncertainties, and insecurities of all kinds. The evaluation process should be ethically irreproachable, treating each and every one of those evaluated according to a conduct that ensures the process is fair, helping everyone recognize and feel comfortable with the processes and results of the evaluation. This is a very important starting point and implies that positive and constructive conceptions and attitudes are developed regarding the evaluation of teaching performance, where students' responsibility is desired. It is not uncommon to find teachers feeling threatened at various levels, in particular as regards their autonomy and their professional dignity. Evaluation must be useful in that it should enable decisions to be taken to improve teacher performance, the quality of teaching, or the learning climate of the school. This study is part of a research project whose main objective is to identify, select, evaluate, and synthesize the available evidence on quality indicators in higher education. In this work, the parameters resulting from pedagogical surveys in a Portuguese higher education institution in the north of the country will be presented: surveys for the 2015/2016 school year, administered to 1751 students across 11 degrees and 18 master's degrees. The evaluation made by students of the performance of a group of 68 full-time teachers was analyzed. This paper presents the lessons learned in the last three academic years, allowing for the identification of effects in the following areas: teaching strategies and methodologies, capacity for systematization, learning climate, and the creation of conditions for active student participation.
This paper describes the procedures resulting from the descriptive analysis (frequency analysis, descriptive measures, and association measures) and the inferential analysis (one-way ANOVA, one-way MANOVA, two-way MANOVA, and correlation analysis).
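The one-way ANOVA named above compares mean ratings across groups via an F statistic. A minimal stdlib-only sketch with illustrative data (not the survey results) shows the computation:

```python
# One-way ANOVA: F = (between-group mean square) / (within-group mean square)
def one_way_anova_F(groups):
    """groups: list of lists of observations, one inner list per group."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative student ratings of three teachers on a 5-point scale
scores = [[4.1, 4.3, 4.0], [3.2, 3.5, 3.4], [4.6, 4.4, 4.7]]
print(f"F = {one_way_anova_F(scores):.2f}")
```

A large F relative to the F(k-1, n-k) distribution indicates that at least one group mean differs; MANOVA extends the same idea to several dependent variables at once.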

Keywords: teaching performance, higher education, students responsibility, indicators of teaching management

Procedia PDF Downloads 265
1352 Alternative Housing Systems: Influence on Blood Profile of Egg-Type Chickens in Humid Tropics

Authors: Olufemi M. Alabi, Foluke A. Aderemi, Adebayo A. Adewumi, Banwo O. Alabi

Abstract:

The general well-being of animals is of paramount interest in some developed countries and of global importance, hence the shift to alternative housing systems for egg-type chickens as a replacement for the conventional battery cage system. However, there is a paucity of information on the effect of this shift on the physiological status of the hens as judged by their blood profile. Therefore, an investigation was carried out on two strains of hen kept in three different housing systems in the humid tropics to evaluate changes in their blood parameters. 108 17-week-old super black (SBL) hens and 108 17-week-old super brown (SBR) hens were randomly allotted to three different intensive systems, Partitioned Conventional Cage (PCC), Extended Conventional Cage (ECC), and Deep Litter System (DLS), in a randomized complete block design with 36 hens per housing system, each with three replicates. The experiment lasted 37 weeks, during which blood samples were collected at the 18th week of age and bi-weekly thereafter for analyses. The parameters measured were packed cell volume (PCV), hemoglobin concentration (Hb), red blood cell count (RBC), white blood cell count (WBC), and serum metabolites such as total protein (TP), albumin (Alb), globulin (Glb), glucose, cholesterol, urea, bilirubin, and serum cortisol, while blood indices such as mean corpuscular hemoglobin (MCH), mean cell volume (MCV), and mean corpuscular hemoglobin concentration (MCHC) were calculated. The hematological values of the hens were not significantly (p > 0.05) affected by housing system or strain, nor were the serum metabolites, except for serum cortisol, which was significantly (p < 0.05) affected by the housing system only. Hens housed on PCC had higher values (20.05 ng/ml for SBL and 20.55 ng/ml for SBR), followed by hens on ECC (18.15 ng/ml for SBL and 18.38 ng/ml for SBR), while hens on DLS had the lowest values (16.50 ng/ml for SBL and 16.00 ng/ml for SBR), confirming an indication of stress in conventionally caged birds.
From a welfare point of view, alternative housing systems can therefore also be adopted for egg-type chickens in the humid tropics, with the results of this work confirming stress among caged hens.
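The derived red-cell indices (MCV, MCH, MCHC) follow the standard hematological formulas; a quick sketch with illustrative values (not the study's measurements):

```python
# Standard red-cell index formulas:
#   MCV  (fL)   = PCV (%) * 10 / RBC (millions/uL)
#   MCH  (pg)   = Hb (g/dL) * 10 / RBC (millions/uL)
#   MCHC (g/dL) = Hb (g/dL) * 100 / PCV (%)
def rbc_indices(pcv_pct, hb_g_dl, rbc_millions_per_ul):
    mcv = pcv_pct * 10 / rbc_millions_per_ul   # femtolitres
    mch = hb_g_dl * 10 / rbc_millions_per_ul   # picograms
    mchc = hb_g_dl * 100 / pcv_pct             # g/dL
    return mcv, mch, mchc

# Illustrative avian-range values
mcv, mch, mchc = rbc_indices(pcv_pct=30.0, hb_g_dl=10.0, rbc_millions_per_ul=2.5)
print(f"MCV = {mcv:.1f} fL, MCH = {mch:.1f} pg, MCHC = {mchc:.1f} g/dL")
```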

Keywords: blood, housing, humid-tropics, layers

Procedia PDF Downloads 455
1351 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Following this, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively compared to the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
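The classical statistical basis mentioned above can be illustrated by the two textbook stack-up rules for a chain of toleranced dimensions: worst-case summation versus the statistical root-sum-square (RSS). The values below are illustrative, not measurement data from the study:

```python
import math

def worst_case(tolerances):
    """Worst-case stack-up: all deviations add at their extremes."""
    return sum(abs(t) for t in tolerances)

def rss(tolerances):
    """Root-sum-square stack-up: assumes independent, normally
    distributed deviations, giving a tighter (statistical) bound."""
    return math.sqrt(sum(t * t for t in tolerances))

# Tolerances (mm) of four stacked panel features in a hypothetical chain
chain = [0.3, 0.2, 0.2, 0.1]
print(f"worst case: +/-{worst_case(chain):.2f} mm")
print(f"RSS:        +/-{rss(chain):.2f} mm")
```

The gap between the two results is exactly where data-driven models can help: historical measurement data reveal which features actually deviate together, so tolerances need not be allocated as pessimistically as the worst case nor as optimistically as pure independence assumes.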

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 100
1350 Comparative Settlement Analysis under Embankments with Empirical Formulas and Settlement Plate Measurements for Reducing Building Cracks around Embankments

Authors: Safitri Nur Wulandari, M. Ivan Adi Perdana, Prathisto L. Panuntun Unggul, R. Dary Wira Mahadika

Abstract:

In road construction on soft soil, a soil improvement method is needed to increase the bearing capacity of the subgrade so that the soil can withstand traffic loads. Most of the land in Indonesia has soft soil, where soft soil is a type of clay with a consistency of very soft to medium stiff, an undrained shear strength Cu < 0.25 kg/cm2, or an estimated NSPT value < 5 blows/ft. This study focuses on the analysis of the effect of the preloading load (embankment) on the settlement ratio under the embankment, which can lead to cracks in buildings around the embankment. The method used in this research is a superposition method for the embankment distribution at 27 locations, with undisturbed soil samples from several borehole points in Java and Kalimantan, Indonesia; the results of settlement plate monitoring in the field are then correlated using the Asaoka method. The settlement plate monitoring results were taken from an embankment at Ahmad Yani airport in Semarang at 32 points. The Cc (compression index) values were based on laboratory test results where available, while untested Cc values were obtained from the empirical formula of Ardhana and Mochtar, 1999. From this research, the field monitoring showed almost the same results as the empirical formulation, with a standard deviation of 4%, where the empirical result of this analysis is given by a linear formula. The empirical linear formula determining the compression effect of an embankment 4.25 m high is 3.1209x + y = 0.0026 for an embankment slope of 1:8, for the same analysis with the initial embankment height in the field. The settlement at the edge of the embankment is not equal to 0: at the quarter point of the embankment the settlement ratio averages 0.951, while at the edge the settlement ratio is 0.049. The influence area around the embankment is approximately 1 meter for slope 1:8 and 7 meters for slope 1:2.
Settlement within this influence area can therefore cause building cracks, which should be taken into account in sustainable development.
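The Asaoka method used to interpret the settlement plate readings regresses successive equal-interval settlements s_i against s_(i-1); the fitted line s_i = b0 + b1*s_(i-1) gives the predicted final settlement s_f = b0 / (1 - b1). The readings below are synthetic, not the Ahmad Yani data:

```python
def asaoka_final_settlement(settlements):
    """Predict final settlement from equal-interval settlement readings
    using the Asaoka construction (least-squares fit of s_i vs s_{i-1})."""
    x = settlements[:-1]   # s_{i-1}
    y = settlements[1:]    # s_i
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0 / (1.0 - b1)

# Synthetic readings (cm) converging geometrically toward a final value
readings = [10.0, 16.0, 19.6, 21.76, 23.056]
print(f"predicted final settlement ~ {asaoka_final_settlement(readings):.2f} cm")
```

Because consolidation settlement decays roughly geometrically, the s_i vs s_(i-1) plot is close to linear, and the intersection with the 45-degree line (the fixed point of the fit) is the final settlement.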

Keywords: building cracks, influence area, settlement plate, soft soil, empirical formula, embankment

Procedia PDF Downloads 336
1349 An Integrated Approach to Cultural Heritage Management in the Indian Context

Authors: T. Lakshmi Priya

Abstract:

With the widening definition of heritage, the challenges of heritage management have become more complex. Today heritage includes not only significant monuments but also historic areas/sites, historic cities, cultural landscapes, and living heritage sites. There is a need for a comprehensive understanding of the values associated with these heritage resources, which will enable their protection and management. These diverse cultural resources are managed by multiple agencies, each with its own way of operating in the heritage sites. An integrated approach to the management of these cultural resources ensures their sustainability for future generations. This paper outlines the importance of an integrated approach to the management and protection of complex heritage sites in India by examining four case studies. The methodology for this study is based on secondary research and primary surveys conducted during the preparation of conservation management plans for the various sites. The primary surveys included basic documentation, inventorying, and community surveys. Red Fort, located in the city of Delhi, is one of the most significant forts, built in 1639 by the Mughal Emperor Shahjahan. This fort is a national icon and stands testimony to various historical events. It was on the ramparts of Red Fort that the national flag was unfurled on 15th August 1947, when India became independent, a practice which continues even today. Management of this complex fort necessitated an integrated approach, wherein the needs of official and non-official stakeholders were addressed. The understanding of the inherent values and significance of this site was arrived at through a systematic methodology of inventorying and mapping of information. Hampi, located in the southern part of India, is a living heritage site inscribed on the World Heritage List in 1986.
The site comprises settlements, built heritage structures, traditional water systems, forest, agricultural fields, and the remains of the metropolis of the 16th-century Vijayanagar empire. As Hampi is a living heritage site with traditional systems of management and practices, the aim has been to include these practices in the current management so that there is continuity in belief, thought, and practice. The existing national, regional, and local planning instruments have been examined, and local concerns have been addressed. A comprehensive understanding of the site, achieved through an integrated model, is being translated into an action plan which safeguards the inherent values of the site. This paper also examines the case of the 20th-century heritage building of the National Archives of India, Delhi, and the protection of the 12th-century Tomb of Sultan Ghari located in south Delhi. A comprehensive understanding of the site led to the delineation of the Archaeological Park of Sultan Ghari in the current Master Plan for Delhi for the protection of the tomb and the settlement around it. Through this study, it is concluded that the approach of integrated conservation has enabled decision-making that sustains the values of these complex heritage sites in the Indian context.

Keywords: conservation, integrated, management, approach

Procedia PDF Downloads 73
1348 The Main Characteristics of Destructive Motivation

Authors: Elen Gasparyan, Naira Hakobyan

Abstract:

One of the leading factors determining the effectiveness of work in a modern organization is the motivation of its employees. In the scientific psychological literature, this phenomenon is understood mainly in terms of constructive forms of motivation and the search for ways to increase it. At the same time, the motivation of employees can sometimes lead to a decrease in the productivity of the organization; such destructive motivation is usually not considered from the point of view of the various motivational theories. This article provides an analysis of various forms of destructive motivation of employees. These forms include formalism in labor behavior, inadequate assessment of the work done, and an imbalance of personal and organizational interests. The destructive motivation of personnel has certain negative consequences both for the employees themselves and for the entire organization: it leads to a decrease in the rate of production and the quality of products or services, increased conflict in the behavior of employees, etc. Currently, there is increasing scientific interest in the study of destructive motivation. The subject of psychological research is not only modern socio-psychological processes but also the achievements of scientific thought in the field of theories of motivation and management. This article examines the theoretical approaches of J. S. Adams and Porter-Lawler, provides an analysis of theoretical concepts, and emphasizes the main characteristics of the destructiveness of motivation. Destructive work motivation is presented at the macro, meso, and micro levels. These levels express various directions of development of motivational stimuli: social, organizational, and personal.
At the macro level, the most important characteristics of destructive motivation are the high income gap between employers and employees, a high degree of unemployment, weak social protection of workers, non-compliance by employers with labor legislation, and emergencies. At the organizational level, the main characteristics are decreased diversity of work and insufficient working conditions. At the personal level, the main characteristic of destructive motivation is a discrepancy between personal and organizational interests. A comparative analysis of the theoretical and methodological foundations of the study of motivation makes it possible not only to identify the main characteristics of destructive motivation but also to determine the contours of psychological counseling to reduce destructiveness in the behavior of employees.

Keywords: destructive, motivation, organization, behavior

Procedia PDF Downloads 28
1347 Implementation of A Treatment Escalation Plan During The Covid 19 Outbreak in Aneurin Bevan University Health Board

Authors: Peter Collett, Mike Pynn, Haseeb Ur Rahman

Abstract:

For the last few years across the UK, there has been a push towards implementing treatment escalation plans (TEP) for every patient admitted to hospital. This is a paper form which is completed by a junior doctor and then countersigned by the consultant responsible for the patient's care. It is designed to address what level of care is appropriate for the patient in question at the point of entry to hospital. It helps decide whether the patient would benefit from ward-based, high dependency, or intensive care. The forms are completed to ensure the patient's best interests are maintained and aim to facilitate difficult decisions which may be required at a later date. For example, when a frail patient with significant co-morbidities, unlikely to survive a pathology requiring an intensive care admission, is admitted to hospital, the decision can be made early that the patient would not benefit from an ICU admission. This decision can be reversed depending on the clinical course of the patient's admission. It promotes discussions with the patient regarding their wishes to receive certain levels of healthcare. This poster describes the steps taken in the Aneurin Bevan University Health Board (ABUHB) when implementing the TEP form. The team implementing the TEP form campaigned for its use to the board of directors. The directors were eager to hear of the experiences of other health boards which had implemented the TEP form. The team presented the data produced in a number of health boards and demonstrated the proposed form. Concern was raised regarding the legalities of the form and that it could upset patients and relatives if the form was not explained properly. This delayed the implementation of the TEP form, and further research and discussion were required. When COVID-19 reached the UK, the National Institute for Health and Clinical Excellence issued guidance stating that every patient admitted to hospital should be issued a TEP form.
The TEP form was accelerated through the vetting process and was approved with immediate effect. The TEP form in ABUHB has now been in circulation for a month. An audit investigating its uptake and a survey gathering opinions have been conducted.

Keywords: acute medicine, clinical governance, intensive care, patient centered decision making

Procedia PDF Downloads 168
1346 Epistemological and Ethical Dimensions of Current Concepts of Human Resilience in the Neurosciences

Authors: Norbert W. Paul

Abstract:

For a number of years, scientific interest in human resilience has been rapidly increasing, especially in psychology and, more recently and highly visibly, in neurobiological research. Concepts of resilience are regularly discussed in the light of liminal experiences and existential challenges in human life. Resilience research provides both explanatory models and strategies to promote or foster human resilience. Surprisingly, these approaches have attracted little attention so far in philosophy in general and in ethics in particular. This is even more astonishing given the fact that the neurosciences as such have been, and still are, of major interest to philosophy and ethics and even brought about the specialized field of neuroethics, which, however, has not so far been concerned with concepts of resilience. As a result of the little attention given to the topic, the whole concept of resilience has to date been philosophically under-theorized. This abstinence of ethics and philosophy from resilience research is lamentable because resilience as a concept, as well as resilience interventions based on neurobiological findings, undoubtedly poses philosophical, social, and ethical questions. In this paper, we will argue that particular notions of resilience cross the sometimes fine line between maintaining a person's mental health despite the impact of severe psychological or physical adverse events and the ethically more debatable discourses of enhancement. While we neither argue for nor against enhancement, nor re-interpret resilience research and interventions by subsuming them under strategies of psychological and/or neuro-enhancement, we suggest that those who see social or ethical problems with enhancement technologies should also take a closer look at resilience and the related neurobiological concepts. We will proceed in three steps. In our first step, we will describe the concept of resilience in general and its neurobiological study in particular.
Here, we will point out some important differences in the way 'resilience' is conceptualized and how neurobiological research understands resilience. In what follows, we will try to show that a one-sided concept of resilience, as it is often presented in neurobiological research, poses social and ethical problems. Secondly, we will identify and explore the social and ethical challenges of (neurobiological) enhancement. In the last and final step of this paper, we will argue that a one-sided reading of resilience can be understood as a latent form of enhancement in transition and poses ethical questions similar to those discussed in relation to other approaches to the biomedical enhancement of humans.

Keywords: resilience, neurosciences, epistemology, bioethics

Procedia PDF Downloads 151
1345 Augmented Reality Enhanced Order Picking: The Potential for Gamification

Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis

Abstract:

Augmented Reality (AR) can be defined as a technology that uses computer-generated display, sound, text and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. In doing so, AR can provide a vast array of work-support tools that significantly increase employee productivity, enhance existing job-training programs by making them more realistic and, in some cases, introduce completely new forms of work and task execution. One of the most promising industrial AR applications, as the literature shows, is the use of Head-Worn Displays (HWD), monocular or binocular, to support logistics and production operations such as order picking, part assembly and maintenance. This paper presents the initial results of an ongoing research project on introducing a dedicated AR-HWD solution into the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated into the functional requirements of the AR solution, such as awarding points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, the impact of gamification on logistics operations has remained ambiguous, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications in domains such as tourism and health. By contrast, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR into COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain.
This paper aims to provide insights into the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it utilizes a review of the current state of the art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support logisticians in deciding whether to introduce gamification into their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.
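As a purely illustrative sketch of the gamification elements under consideration (points for reaching objectives, leaderboards, badges), a minimal data model might look as follows; all names and scoring rules here are hypothetical and are not part of the study's actual AR-HWD solution:

```python
# Hypothetical gamification model for order pickers: points per objective,
# a badge for an error-free order, and a simple leaderboard.
from dataclasses import dataclass, field

@dataclass
class Picker:
    name: str
    points: int = 0
    badges: set = field(default_factory=set)

    def complete_order(self, lines_picked: int, error_free: bool) -> None:
        # Points for reaching objectives: here, one point per order line,
        # with a bonus and a badge for an error-free order (illustrative rules).
        self.points += lines_picked
        if error_free:
            self.points += 5
            self.badges.add("zero_error")

def leaderboard(pickers):
    # Rank pickers by accumulated points, highest first.
    return sorted(pickers, key=lambda p: p.points, reverse=True)

a, b = Picker("A"), Picker("B")
a.complete_order(12, error_free=True)   # 12 + 5 = 17 points
b.complete_order(20, error_free=False)  # 20 points
print([p.name for p in leaderboard([a, b])])  # ['B', 'A']
```

Whether such mechanics help or distract in a highly standardized picking workflow is exactly the open question the interviews are meant to answer.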

Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification

Procedia PDF Downloads 140
1344 Research on Transverse Ecological Compensation Mechanism in Yangtze River Economic Belt Based on Evolutionary Game Theory

Authors: Tingyu Zhang

Abstract:

The cross-basin ecological compensation mechanism is key to stimulating active participation in ecological protection across an entire basin. This study constructs an evolutionary game model of cross-basin ecological compensation in the Yangtze River Economic Belt (YREB), introducing a central government constraint and incentive mechanism (CGCIM) to explore the conditions under which the protection and compensation strategies that meet societal expectations are achieved. Furthermore, the amount of ecological compensation is calculated using a water quality-water quantity model combined with factual data from the YREB in 2020. The results indicate that the stability of the evolutionary game model of the upstream and downstream governments in the YREB is closely related to the CGCIM. When the sum of the central government's reward to the upstream government and its penalty on both parties simultaneously exceeds 39.948 billion yuan, and the sum of the reward to the downstream government and the penalty on the lower reaches alone exceeds 1.567 billion yuan; or when the sum of the reward to the downstream government and the penalty on both parties simultaneously exceeds 1.567 billion yuan, and the sum of the reward to the upstream government and the penalty on the upstream government alone exceeds 399.48 billion yuan, then (protection, compensation) becomes the only evolutionarily stable strategy of the evolutionary game system composed of the upstream and downstream governments in the YREB. At this point, the total ecological compensation that the downstream governments of the YREB should pay to the upstream governments is 1.567 billion yuan, with Hunan paying 0.03 billion yuan, Hubei 2.53 billion yuan, Jiangxi 0.18 billion yuan, Anhui 1.68 billion yuan, Zhejiang 0.75 billion yuan, Jiangsu 6.57 billion yuan, and Shanghai 3.93 billion yuan.
The research results can serve as a reference for improving and refining the cross-basin ecological compensation system in the YREB.
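The upstream-downstream interaction described above can be illustrated with a minimal two-population replicator-dynamics sketch. The payoff numbers below are hypothetical placeholders, chosen so that the reward/penalty (CGCIM-like) terms make (protect, compensate) the stable outcome; they are not the YREB values from the study:

```python
# Two-population replicator dynamics: x = share of upstream governments
# playing "protect", y = share of downstream governments playing "compensate".
# All payoff parameters are illustrative placeholders.

def replicator_step(x, y, dt=0.01):
    # Hypothetical upstream payoffs: central reward R_u if protecting,
    # penalty P_u if not; protection cost C_prot; compensation B_comp
    # received with probability y (downstream cooperating).
    R_u, P_u, C_prot, B_comp = 2.0, 3.0, 4.0, 3.0
    u_protect = -C_prot + y * B_comp + R_u
    u_defect = -P_u
    # Hypothetical downstream payoffs: ecological benefit B_eco scales with
    # upstream protection x; compensation cost C_comp; reward/penalty R_d, P_d.
    R_d, P_d, B_eco, C_comp = 1.0, 2.5, 5.0, 3.0
    d_compensate = x * B_eco - C_comp + R_d
    d_defect = x * B_eco * 0.5 - P_d
    # Replicator equations: strategies above the population mean grow.
    dx = x * (1 - x) * (u_protect - u_defect)
    dy = y * (1 - y) * (d_compensate - d_defect)
    return x + dt * dx, y + dt * dy

x, y = 0.3, 0.3
for _ in range(10_000):
    x, y = replicator_step(x, y)
# With these incentives the system converges to (protect, compensate) = (1, 1).
print(round(x, 3), round(y, 3))
```

With the chosen parameters the incentive terms dominate, so (1, 1) is the unique evolutionarily stable state; weakening the rewards/penalties below the analogous thresholds would instead let the defection strategies persist.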

Keywords: ecological compensation, evolutionary game model, central government constraint and incentive mechanism, Yangtze river economic belt

Procedia PDF Downloads 52
1343 CO2 Utilization by Reverse Water-Gas Shift and Fischer-Tropsch Synthesis for Production of Heavier Fraction Hydrocarbons in a Container-Sized Mobile Unit

Authors: Francisco Vidal Vázquez, Pekka Simell, Christian Frilund, Matti Reinikainen, Ilkka Hiltunen, Tim Böltken, Benjamin Andris, Paolo Piermartini

Abstract:

Carbon capture and utilization (CCU) is one of the key topics in the mitigation of CO2 emissions. Many different technologies are applied to produce diverse chemicals from CO2, such as synthetic natural gas, Fischer-Tropsch products, methanol and polymers. Power-to-Gas and Power-to-Liquids concepts arise as a synergetic solution for storing energy and producing value-added products from intermittent renewable energy sources and CCU. VTT is a research and technology development company with energy in transition as one of its key focus areas. VTT has extensive experience in the piloting and upscaling of new energy and chemical processes. Recently, VTT developed and commissioned a Mobile Synthesis Unit (MOBSU) in close collaboration with INERATEC, a spin-off company of the Karlsruhe Institute of Technology (KIT, Germany). The MOBSU is a multipurpose synthesis unit for upgrading CO2 to energy carriers and chemicals, which can be transported on-site wherever CO2 emissions and renewable energy are available. The MOBSU is initially used for the production of fuel compounds and chemical intermediates by the combination of two consecutive processes: reverse Water-Gas Shift (rWGS) and Fischer-Tropsch synthesis (FT). First, CO2 is converted to CO by high-pressure rWGS; then, the CO- and H2-rich effluent is used as feed for FT in an intensified reactor technology developed and designed by INERATEC. The chemical equilibrium of the rWGS reaction is not affected by pressure, because the reaction conserves the total number of moles. Nevertheless, compression would be required between rWGS and FT if rWGS were operated at atmospheric pressure, which would also require cooling of the rWGS effluent, water removal and reheating. For that reason, rWGS is operated in the MOBSU over a precious-metal catalyst at a pressure similar to that of FT, to simplify the process. However, operating rWGS at high pressure also has some disadvantages, such as methane and carbon formation and more demanding material specifications.
The main parts of the FT module are an intensified reactor, a hot trap to condense the FT wax products and a cold trap to condense the FT liquid products. The FT synthesis is performed over a cobalt catalyst in a novel compact reactor technology with an integrated, highly efficient water-evaporation cooling cycle. The MOBSU started operation in November 2016. First, the FT module is tested using H2 and CO as feedstock. Subsequently, the rWGS and FT modules are operated together using CO2 and H2 as feedstock at a total flow rate of ca. 5 Nm3/h. In spring 2017, the MOBSU will be integrated with a direct air capture (DAC) unit for CO2 and a PEM electrolyser unit at the Lappeenranta University of Technology (LUT) premises for a demonstration of the SoletAir concept. This will be the first time that synthetic fuels are produced by the combination of a DAC unit and an electrolyser unit that uses solar power for H2 production.
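The pressure-independence of the rWGS equilibrium follows from mole conservation: in CO2 + H2 ⇌ CO + H2O, two moles react to form two moles, so total pressure cancels out of the mole-fraction equilibrium expression. A minimal sketch for an equimolar CO2/H2 feed, with an illustrative Kp value:

```python
# Why rWGS equilibrium conversion does not depend on pressure: moles are
# conserved (2 -> 2), so p_total cancels. The Kp value is illustrative only.

def rwgs_conversion(Kp, p_total=1.0):
    # Equimolar CO2/H2 feed; xi = equilibrium extent of reaction per mole CO2.
    # Mole fractions: x_CO2 = x_H2 = (1 - xi)/2, x_CO = x_H2O = xi/2, so
    # Kp = (x_CO * x_H2O) / (x_CO2 * x_H2) = xi**2 / (1 - xi)**2
    # and p_total drops out entirely:
    #   xi = sqrt(Kp) / (1 + sqrt(Kp))
    s = Kp ** 0.5
    return s / (1.0 + s)

# Same conversion at 1 bar and at a pressurized-rWGS-like 20 bar:
xi_low = rwgs_conversion(Kp=1.0, p_total=1.0)
xi_high = rwgs_conversion(Kp=1.0, p_total=20.0)
print(xi_low, xi_high)  # identical: 0.5 at Kp = 1
```

This is why pressurizing the rWGS stage to match FT costs nothing in equilibrium conversion, while sparing the inter-stage compression, cooling, water removal and reheating described above; the penalty shows up instead as methanation, coking and material demands.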

Keywords: CO2 utilization, demonstration, Fischer-Tropsch synthesis, intensified reactors, reverse water-gas shift

Procedia PDF Downloads 280
1342 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in image analysis. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that appear as linear features. However, the characteristic linear features of the fissures are often subtle due to high intensity variability, pathological deformation or image noise introduced by the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. It is therefore desirable to enhance the linear features present in chest CT images so that the lobes can be delineated more distinctly. We propose a recursive diffusion process that prefers coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant feature structure, degrading its discriminative power in the feature space. It is thus necessary to take the local scale and direction of the features into account when computing the structure tensor. We therefore apply an anisotropic diffusion, adapted to the scale and direction of the features, in the computation of the structure tensor; the eigenanalysis of the resulting tensor provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety of the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false-positive and true-positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
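As a simplified illustration of the eigenanalysis step, the following numpy-only sketch computes a structure tensor on a synthetic image containing a line-like feature and derives a coherence measure from its eigenvalues. The box smoothing stands in for the diffusion-based regularization; it is not the authors' anisotropic scheme, and all function names are hypothetical:

```python
import numpy as np

def smooth(a, w=5):
    # Naive box filter: a crude stand-in for the diffusion-based
    # regularization of the tensor components.
    pad = w // 2
    ap = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a)
    for i in range(w):
        for j in range(w):
            out += ap[i:i + a.shape[0], j:j + a.shape[1]]
    return out / (w * w)

def coherence_map(img, w=5):
    gy, gx = np.gradient(img.astype(float))  # image gradients
    # Regularized structure-tensor components J = smooth(grad grad^T).
    Jxx, Jxy, Jyy = smooth(gx * gx, w), smooth(gx * gy, w), smooth(gy * gy, w)
    # Closed-form eigenvalues of the 2x2 tensor at every pixel.
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    # Coherence ~1 along line-like structure, ~0 in flat/isotropic regions.
    return np.where(l1 + l2 > 1e-12, (l1 - l2) / (l1 + l2), 0.0)

img = np.zeros((32, 32))
img[15:17, :] = 1.0            # a horizontal, fissure-like line
coh = coherence_map(img)
print(coh[13, 16], coh[5, 16])  # high near the line, zero in the flat region
```

In the paper's scheme, the eigenvectors and eigenvalues computed this way would additionally steer the shape of the diffusion kernel applied in the next recursion, rather than feeding a fixed isotropic smoother.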

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 222
1341 Exploring the Influence of Climate Change on Food Behavior in Medieval France: A Multi-Method Analysis of Human-Animal Interactions

Authors: Unsain Dianne, Roussel Audrey, Goude Gwenaëlle, Magniez Pierre, Storå Jan

Abstract:

This paper aims to investigate changes in husbandry practices and meat consumption during the transition from the Medieval Climate Anomaly to the Little Ice Age in the South of France. More precisely, we will investigate breeding strategies, animal size and health status, carcass-exploitation strategies, and the impact of socioeconomic status on human-environment interactions. For that purpose, we will analyze faunal remains from ten sites equally distributed between the two periods. These include consumers from different socio-economic backgrounds (peasants, city dwellers, soldiers, lords and the Popes). The research will employ a range of zooarchaeological methods: comparative anatomy, biometry, pathology analysis, traceology and utility indices, as well as experimental archaeology, to reconstruct and understand changes in animal breeding and consumption practices. Their analysis will allow the determination of modifications in the animal production chain, including the composition of the flocks (species, size), their management (age, sex, health status), culinary practices (carcass-exploitation strategies, cooking, tastes) and the importance of trade (butchers, sales of processed animal products). The focus will also be on the social background of consumers. The aim will be to determine whether climate change had a greater impact on the most modest groups (such as peasants), whether the consequences were global and also affected the highest levels of society, or whether social and economic factors were sufficient to balance out the climatic hazards, leading to no significant changes. This study will contribute to our understanding of the impact of climate change on breeding and consumption strategies in medieval society from a historical and social point of view. It combines various research methods to provide a comprehensive analysis of the changes in human-animal interactions across different climatic periods.

Keywords: archaeology, animal economy, cooking, husbandry practices, climate change, France

Procedia PDF Downloads 49
1340 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) at high Mach numbers. The idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers and then apply an adapted optical flow algorithm, based on the Horn-Schunck method, to determine the associated flow fields. The proposed method can offer full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals the flow and shock structures typically associated with supersonic jets, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experiments, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus is on validating the methodology with synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined to improve robustness and accuracy. Details of the methodology employed and the challenges faced will be elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
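The Horn-Schunck method on which the adapted algorithm is based can be sketched in its textbook form as follows; this is the classical iteration on a toy image pair, not the authors' Schlieren-specific adaptation, and the parameter values are illustrative:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    # Classical Horn-Schunck optical flow between two frames I1, I2.
    I1, I2 = I1.astype(float), I2.astype(float)
    Iy, Ix = np.gradient(I1)   # spatial derivatives (rows = y, cols = x)
    It = I2 - I1               # temporal derivative
    u = np.zeros_like(I1)      # horizontal flow
    v = np.zeros_like(I1)      # vertical flow

    def local_mean(f):
        # 4-neighbour average approximating the smoothness (Laplacian) term.
        fp = np.pad(f, 1, mode="edge")
        return (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:]) / 4

    for _ in range(n_iter):
        u_bar, v_bar = local_mean(u), local_mean(v)
        # Project the locally averaged flow onto the brightness-constancy
        # constraint, weighted by the regularization parameter alpha.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy test: a bright blob shifted one pixel to the right between frames.
I1 = np.zeros((20, 20)); I1[8:12, 8:12] = 1.0
I2 = np.zeros((20, 20)); I2[8:12, 9:13] = 1.0
u, v = horn_schunck(I1, I2)
print(u[9:11, 8:13].mean() > 0)  # mean horizontal flow near the blob is rightward
```

The smoothness weight alpha is the knob the adaptation would need to tune for Schlieren data, where intensity encodes density gradients rather than the brightness of tracer particles.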

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 306