Search results for: opportunity construction
462 The Paradox of Design Aesthetics and the Sustainable Design
Authors: Asena Demirci, Gozen Guner Aktaş, Nur Ayalp
Abstract:
Nature provides a living space for humans, yet humans destroy it for their personal needs and ambitions. To reduce this damage, solutions have begun to be generated and developed, and precautions implemented. After the 1960s, especially once the ozone layer was harmed and thinned by toxic substances released from man-made structures, environmental problems began to affect human daily life. Environmental solutions and precautions have thus become a priority issue for scientists. Most environmental problems are caused by buildings and factories built without any concern for protecting nature. This situation has created awareness of environmental issues, and terms such as sustainability and renewable energy have appeared in the building, construction, and architecture sectors to promote environmental protection. From this perspective, the design disciplines should also be respectful of nature and sustainability. Designs that are sustainable, renewable, and ecological are less detrimental to the environment than designs that are not. Furthermore, such designs produce their own energy for consumption, so they do not deplete natural resources; they contain no harmful substances and are made of recyclable materials, making them environmentally friendly structures. There is, however, a common concern among designers about sustainable design: many believe that the idea of sustainability inhibits creativity, so that works of design come to resemble each other aesthetically and technologically. There is also a concern in design ethics that aesthetic design cannot be accepted as a priority. For these reasons, few designs around the world are at once eco-friendly and well designed.
As in other design disciplines, the concept of sustainability is becoming more important each day in interior architecture and interior design. Since human beings spend 90% of their lives in interior spaces, the importance of sustainability there is obvious. Aesthetics is another vital concern in interior space design, and sustainable materials and sustainable interior design applications often conflict with personal aesthetic parameters. This study aims to discuss the paradox between design aesthetics and sustainable design. Does a sustainable approach in interior design disturb design aesthetics? This question, widely discussed in recent years, is evaluated here through a case study that analyzes the aesthetic perceptions and preferences of users and designers in sustainable interior spaces.
Keywords: aesthetics, interior design, sustainable design, sustainability
Procedia PDF Downloads 291
461 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, smooth extremes of the system response are obtained with a relatively small number of simulations compared to fully probabilistic methods; the approach has therefore been proposed for reliability analysis of geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during excavation. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and thereby reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The probability share of each finite element calculation is determined from the probabilities assigned to the input variable ranges present in that combination. The horizontal displacement of the top point of the excavation is taken as the main system response. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations.
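The bound-combination and probability-bookkeeping steps described above can be sketched in a few lines. The two focal elements per variable, their probability assignments, the threshold, and the placeholder response function below are all hypothetical; a real application would evaluate a finite element model at each corner of the input box instead:

```python
from itertools import product

# Hypothetical two-variable example: each input has two expert-given
# ranges (focal elements) with basic probability assignments.
focal = {
    "E_soil": [((20e6, 35e6), 0.6), ((30e6, 50e6), 0.4)],   # Pa, assumed
    "phi":    [((28.0, 33.0), 0.5), ((30.0, 36.0), 0.5)],   # deg, assumed
}

def response(E, phi):
    """Placeholder for the finite element model: horizontal displacement
    of the excavation top (mm), decreasing with stiffness and friction."""
    return 1.0e9 / E + 100.0 / phi

results = []  # (response interval, joint probability)
for combo in product(*focal.values()):
    ranges, probs = zip(*combo)
    p = 1.0
    for q in probs:
        p *= q
    # Evaluate the model at every corner of the input box; for a monotone
    # model the response interval is bounded by the extreme corner values.
    corners = [response(E, phi) for E in ranges[0] for phi in ranges[1]]
    results.append(((min(corners), max(corners)), p))

# Belief and plausibility that displacement stays below a threshold
threshold = 50.0  # mm, assumed serviceability limit
belief = sum(p for (lo, hi), p in results if hi <= threshold)
plaus = sum(p for (lo, hi), p in results if lo <= threshold)
print(f"Bel = {belief:.2f}, Pl = {plaus:.2f}")
```

The gap between belief and plausibility reflects the imprecision of the input information, which is exactly what the random set method is meant to expose.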
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared with the in situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
Procedia PDF Downloads 268
460 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis
Authors: Inigo Beckett
Abstract:
In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g., a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e., features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position can be constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, another independent constraint can be enforced on the possible camera positions, narrowing them even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall.
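The workflow above rests on the projective geometry of collinear points; the classical invariant behind ratio-based measurements of this kind is the cross-ratio, which is preserved by any pinhole projection. A minimal numerical check, with an arbitrary 3D line and a hypothetical camera at the origin (not the paper's actual data), illustrates the invariance:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear points given as scalar
    positions along their common line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Four collinear 3D points along a hypothetical ladder rail,
# parameterised by t (arbitrary units along the line).
t = [0.0, 1.0, 2.0, 4.0]
line_origin, line_dir = (0.0, 0.0, 5.0), (1.0, 0.0, 1.0)
points = [tuple(o + ti * d for o, d in zip(line_origin, line_dir)) for ti in t]

# Pinhole projection (camera at origin, unit focal length): x_img = X / Z
x_img = [X / Z for (X, Y, Z) in points]

cr_world = cross_ratio(*t)
cr_image = cross_ratio(*x_img)
print(cr_world, cr_image)  # both ~1.5: the cross-ratio survives projection
```

Simple length ratios of the same points are not preserved by projection, which is why projective rather than Euclidean reasoning is needed to relate pixel measurements to the scene.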
The real-world data was captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, camera positions and ladder angles were calculated using this method and cross-compared against their respective 'true' values.
Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear points, cameras, photographs
Procedia PDF Downloads 52
459 PsyVBot: Chatbot for Accurate Depression Diagnosis Using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues such as depression and suicidal ideation is a matter of significant global concern. A variety of factors, such as life events, social isolation, and preexisting physiological or psychological conditions, can instigate or exacerbate these conditions. Traditional approaches to diagnosing depression take considerable time and require skilled practitioners, which underscores the need for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs natural language processing and machine learning methodologies, using the NLTK toolkit for dataset preprocessing and a Long Short-Term Memory (LSTM) model for classification. PsyVBot diagnoses depression with a 94% accuracy rate through the analysis of user input, making it an effective resource for individuals, particularly those enrolled in academic institutions, who may encounter challenges to their psychological well-being. The LSTM model comprises three layers: an embedding layer, an LSTM layer, and a dense layer. This layering enables a precise examination of linguistic patterns associated with depression, allowing PsyVBot to assess an individual's level of depression from linguistic and contextual cues. The model is trained on a dataset comprising information sourced from the subreddit r/SuicideWatch; the diversity of this data supports precise and sensitive identification of symptoms linked with depression.
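As a toy illustration of the preprocessing stage, the sketch below stands in for the NLTK step: it lowercases, tokenises, and maps words to the integer indices consumed by an embedding layer (index 0 reserved for padding). The tokeniser, vocabulary scheme, sentence length, and example corpus are assumptions for illustration, not the paper's actual pipeline:

```python
import re

def build_vocab(texts, min_count=1):
    """Toy stand-in for the NLTK preprocessing step: lowercase,
    tokenise, and index words. Index 0 is reserved for padding."""
    counts = {}
    for text in texts:
        for tok in re.findall(r"[a-z']+", text.lower()):
            counts[tok] = counts.get(tok, 0) + 1
    vocab = {"<pad>": 0}
    for tok in sorted(t for t, c in counts.items() if c >= min_count):
        vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab, maxlen=8):
    """Map a sentence to a fixed-length sequence of word indices,
    padded/truncated to maxlen, as an embedding layer expects."""
    ids = [vocab.get(t, 0) for t in re.findall(r"[a-z']+", text.lower())]
    return (ids + [0] * maxlen)[:maxlen]

corpus = ["I feel hopeless and tired", "Today was a good day"]
vocab = build_vocab(corpus)
x = encode("I feel tired today", vocab)
print(x)
```

Sequences of this form would then pass through the embedding, LSTM, and dense layers described above to produce a depression-severity score.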
Beyond diagnosis, PsyVBot enhances the user experience with audio output, enabling more engaging and interactive conversations. The platform offers individuals the opportunity to conveniently assess mental health challenges through a confidential and user-friendly interface. In developing PsyVBot, user confidentiality and ethical principles are of paramount importance: diligent efforts are made to adhere to ethical standards, safeguarding the confidentiality and security of user information. Moreover, the chatbot fosters a supportive and compassionate atmosphere that promotes psychological welfare. In brief, PsyVBot is an automated conversational agent that uses an LSTM model to assess the level of depression indicated by user input. The demonstrated 94% accuracy rate is a promising indication of the potential of natural language processing and machine learning techniques for tackling mental health challenges, and the use of the Reddit dataset together with Natural Language Toolkit (NLTK) preprocessing further improves reliability. PsyVBot is a pioneering, user-centric solution that furnishes an accessible and confidential medium for seeking assistance, offered as one way to tackle the pervasive issues of depression and suicidal ideation.
Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
Procedia PDF Downloads 69
458 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia
Authors: Subhanan Mandal, Bidisha Hore
Abstract:
Reduplication is the linguistic process whereby all or part of a base word is repeated, with or without internal change, before or after the base itself. The reduplicated morphological construction carries with it a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages of the states of West Bengal and Odisha, i.e., Bengali and Odia respectively. Bengali, an Indo-Aryan language within the Indo-European family, is one of the most widely spoken languages in India and the national language of Bangladesh. Despite this classification, Bengali shows certain influences in vocabulary and grammar owing to its geographical proximity to Tibeto-Burman- and Austro-Asiatic-speaking communities. Bengali and Odia once belonged to a single linguistic branch, but with time and gradual linguistic change due to various factors, Odia was the first to break away and develop as a separate, distinct language. Nevertheless, setting the script aside, the two languages still show more similarities than contrasts linguistically. This paper deals with the procedure of echo word formation in Bengali and Odia. Morphological research on reduplication in the two languages reveals several linguistic processes, based on information elicited from native speakers and on the analysis of echo words found in discourse and conversational patterns. For the analysis of partial reduplication, prefixed-class and suffixed-class word formations are considered, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVx → tVx, ii) CVCV → CVCi.
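The suffixed-class rule i) above (CVx → tVx) can be sketched as a simple string operation on romanised forms. The example words and glosses are illustrative assumptions; actual Bengali and Odia echo formation involves script, phonology, and exceptions that the sketch ignores:

```python
VOWELS = set("aeiou")

def echo(word):
    """Apply the suffixed-class rule CVx -> tVx: the echo word replaces
    the initial consonant (cluster) of the base with 't', and the pair
    'base-echo' conveys 'X and the like'. Romanised toy examples only."""
    i = 0
    while i < len(word) and word[i] not in VOWELS:
        i += 1
    return word + "-" + "t" + word[i:]

print(echo("jol"))   # 'jol-tol'  (assumed gloss: "water and the like")
print(echo("bhat"))  # 'bhat-tat' (assumed gloss: "rice and the like")
```

Rule ii) (CVCV → CVCi) would instead alter the final vowel of the copy, which could be sketched the same way from the word's end.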
Further classification emerged from sentential studies of both languages, which revealed complete-reduplication complexities in echo word formation where the head word loses its original meaning. Complexities based on onomatopoetic (phonetic) imitation of natural phenomena, which follow no rule-based pattern, were also found. Taking these aspects, prevalent in both languages, into consideration, the study draws inferences that bring out many similarities between the two languages in this area, despite their having branched apart long ago.
Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration
Procedia PDF Downloads 242
457 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years, the world population has enormously increased its use of plastic products for daily needs, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry, in the construction of electronic equipment, and in clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of that quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made of mixtures of polymers that are incompatible with each other and contain different additives. The recycled products are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO, and CO₂ suitable for the production of chemicals, with consequent savings of fossil resources. A hydrogen-rich syngas can be obtained by gasification in a fluidized bed reactor, with steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and accepts different plastics with different compositions and grain sizes. Furthermore, the use of steam during gasification increases the conversion of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
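As a rough upper bound on what steam gasification of PP can deliver, one can treat the polymer as its repeat unit C₃H₆ and assume complete steam reforming with no CH₄, tar, or residual char (C₃H₆ + 3 H₂O → 3 CO + 6 H₂). This idealised stoichiometric sketch ignores the water-gas shift and the real product distribution at 840-860 °C, so it overstates actual yields:

```python
# Idealised upper bound on H2 yield from steam gasification of PP,
# treating the polymer as its repeat unit C3H6 and assuming complete
# reforming:  C3H6 + 3 H2O -> 3 CO + 6 H2  (C, H, O all balance)
M_C3H6 = 3 * 12.011 + 6 * 1.008   # g/mol, ~42.08
M_H2 = 2 * 1.008                  # g/mol

mol_pp_per_kg = 1000.0 / M_C3H6        # mol repeat units per kg PP
mol_h2 = 6 * mol_pp_per_kg             # stoichiometric H2
kg_h2 = mol_h2 * M_H2 / 1000.0
m3_h2_stp = mol_h2 * 22.414 / 1000.0   # ideal-gas molar volume, 0 degC / 1 atm

print(f"{kg_h2:.3f} kg H2 / kg PP, {m3_h2_stp:.1f} Nm3 H2 / kg PP")
```

Measured yields fall below this ceiling because part of the carbon leaves as CH₄ and tars, which is precisely why the catalytic olivine bed discussed next matters.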
The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gas produced. The plant comprises a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as the bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot product gases exiting the reactor are cooled and then quantified using a mass flow meter, and gas analyzers measure the volumetric composition of H₂, CO, CO₂, CH₄, and NH₃ in real time. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.
Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
Procedia PDF Downloads 27
456 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling
Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal
Abstract:
From the viewpoint of structural engineering, monitoring structural responses over time is of great importance with respect to recent developments in construction technologies. Advances in computing tools have enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures; typically, building information can be stored and exchanged via model files based on the Industry Foundation Classes (IFC) standard. In this study, an approach for the semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to the monitoring and control of a prototype SHM and control system installed on the test structure. The SHM and control system is described by a semantic model expressed in the Unified Modeling Language (UML), and the semantic model is subsequently mapped into the IFC schema. The test structure is composed of four aluminum slabs with fully fixed plate-to-column connections. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed; the TLCD is used to reduce structural responses caused by dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing, acceleration response is automatically recorded by sensor nodes equipped with accelerometers and analyzed using embedded computing.
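Purely as an illustration of what such a semantic model carries, the sketch below mirrors, in plain Python, the idea of attaching monitoring information to a building element through a property set. The class name, property names, and GlobalId value are hypothetical and are not part of the IFC standard or of the paper's actual schema extension:

```python
from dataclasses import dataclass

@dataclass
class SensorNode:
    """Illustrative-only semantic model of one wireless monitoring node,
    loosely mirroring how an IFC schema extension might relate monitoring
    data to a building element. All names here are hypothetical."""
    node_id: str
    sensor_type: str          # e.g. "accelerometer"
    sampling_rate_hz: float
    host_element: str         # GlobalId of the hosting building element

    def to_pset(self):
        """Flatten to a Pset-style dict, as exchanged in a model file."""
        return {
            "Pset_MonitoringNode": {
                "NodeId": self.node_id,
                "SensorType": self.sensor_type,
                "SamplingRate": self.sampling_rate_hz,
            },
            "RelatedElement": self.host_element,
        }

node = SensorNode("SN-01", "accelerometer", 100.0, "2O2Fr$t4X7Zf8NOew3FLOH")
pset = node.to_pset()
print(pset["Pset_MonitoringNode"]["SensorType"])
```

The point of mapping such a model into IFC rather than keeping it proprietary is that any IFC-aware tool can then read, document, and exchange the monitoring information.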
As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing
Procedia PDF Downloads 151
455 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives
Authors: Tayyab Ahmad, Gerard Healey
Abstract:
Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under differing deliverable requirements, which makes each project unique. For an owner organization with a significant building stock, such as a higher-education institution, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools such as Green Star, LEED, and BREEAM are increasingly popular and are well acknowledged worldwide for verifying sustainable development. It is therefore worthwhile to synthesize a multi-criteria decision-making tool that capitalizes on the point-based methodology of rating systems while customizing the sustainable development of building projects to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool is developed for the University of Melbourne that builds on the action learning and experience of implementing green buildings at the university. The tool evaluates sustainable building initiatives within the framework of the Green Star rating tool of the Green Building Council of Australia. For each initiative, the tool makes an assessment against at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the university's reputation, and increase confidence in construction quality. The weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing green building initiatives in terms of organizational priorities.
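A minimal sketch of the weighted aggregation model described above: each initiative is scored against the five performance criteria and indexed by a weighted sum. The criterion weights, the 1-5 scores, and the example initiative are invented for illustration; in the actual tool they would reflect the organization's priorities:

```python
# Hypothetical criterion weights (must sum to 1); scores are on a 1-5 scale.
weights = {
    "ease_of_achievement": 0.15,
    "project_objectives": 0.25,
    "life_cycle_cost_reduction": 0.25,
    "reputation": 0.15,
    "construction_quality_confidence": 0.20,
}

def initiative_index(scores, weights):
    """Weighted-sum index used to rank green building initiatives."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

# Invented example initiative scored against the five criteria
rainwater = {"ease_of_achievement": 4, "project_objectives": 3,
             "life_cycle_cost_reduction": 5, "reputation": 3,
             "construction_quality_confidence": 2}
print(round(initiative_index(rainwater, weights), 2))  # 3.45
```

Ranking the candidate initiatives by this index is what lets the tool translate organizational priorities into a defensible ordering of Green Star credits to pursue.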
The index value of each initiative is based on its alignment with the key performance criteria. The usefulness of the decision-making tool is validated through structured interviews with key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue within limited resources.
Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model
Procedia PDF Downloads 234
454 Gilgel Gibe III: Dam-Induced Displacement in Ethiopia and Kenya
Authors: Jonny Beirne
Abstract:
Hydropower developments have come to assume an important role within the Ethiopian government's overall development strategy during the last ten years. The Gilgel Gibe III dam on the Omo River, due to become operational in September 2014, represents the most ambitious, and controversial, of these projects to date. Further aspects of the government's national development strategy include leasing vast areas of designated 'unused' land for large-scale commercial agriculture and 'voluntarily' villagizing scattered, semi-nomadic agro-pastoralist groups into centralized settlements, so as to use land and water more efficiently and to better provide essential social services such as education and healthcare. The Lower Omo Valley, along the Omo River, is one of the sites of this villagization programme as well as of the large-scale commercial agricultural projects made possible by the regulation of the river's flow by Gibe III. Though the Ethiopian government cites many positive aspects of these agricultural and hydropower developments, serious regional and transnational effects are still expected, including on migration flows, in an area already characterized by increasing climatic vulnerability with attendant population movements and conflicts over scarce resources. The following paper is an attempt to track actual and anticipated migration flows resulting from the construction of Gibe III in the immediate vicinity of the dam, downstream in the Lower Omo Valley, and across the border in Kenya around Lake Turkana. For those displaced in the Lower Omo Valley, this is considered in view of the distinction between voluntary villagization and forced resettlement. The research presented is not primary-source material; it is drawn from reports and assessments by the Ethiopian government, rights-based groups, and academic researchers, as well as media articles.
It is hoped that this will draw greater attention to the issue and encourage further methodological research on the effects of dam construction (and associated large-scale irrigation schemes) on migration flows, and on the ultimate experience of displacement and resettlement for environmental migrants in the region.
Keywords: forced displacement, voluntary resettlement, migration, human rights, human security, land grabs, dams, commercial agriculture, pastoralism, ecosystem modification, natural resource conflict, livelihoods, development
Procedia PDF Downloads 381
453 Structural Performance of Mechanically Connected Stone Panels under Cyclic Loading: Application to Aesthetic and Environmental Building Skin Design
Authors: Michel Soto Chalhoub
Abstract:
Building designers in the Mediterranean region and other parts of the world utilize natural stone panels as skin cover on exterior façades. This type of finishing is intended not only for aesthetic reasons but also for environmental ones. Stone has been used in construction since the earliest ages of civilization, and to date some of the most appealing buildings owe their beauty to stone finishing. Stone also provides warmth in winter and freshness in summer, as it moderates heat transfer and absorbs radiation. However, as structural codes became increasingly stringent about the dynamic performance of buildings, it became essential to study the performance of stone panels under cyclic loading, a condition that arises when the building is subjected to wind or earthquakes. The present paper studies the performance of mechanically connected stone panels subjected to load reversal. We present a theoretical model that addresses modes of failure in the steel connectors, by yield, and modes of failure in the stone, by fracture, and then provide an experimental set-up and test results for rectangular stone panels of varying thickness. When the building is subjected to an earthquake, the rectangular panels within the structural system undergo shear deformations, which in turn impart stress to the stone cover. Rectangular stone panels, which typically range from 40 cm × 80 cm to 60 cm × 120 cm, need to be designed to withstand transverse loading from the direct application of lateral loads and, simultaneously, in-plane (membrane) loading caused by inter-story drift and overall building lateral deflection. Results show correlation between the theoretical model, derived from solid mechanics fundamentals, and the experimental results, and they lead to practical design recommendations.
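A back-of-the-envelope calculation, not the paper's model, illustrates the scale of the drift-induced membrane stresses just described: if a panel were forced to follow the full inter-story drift, its in-plane shear stress tau = G * gamma would far exceed any plausible stone strength. All material values below are assumed, order-of-magnitude figures:

```python
# Order-of-magnitude check of drift-induced in-plane shear in a rigidly
# attached facade panel. A panel following the inter-story drift develops
# a shear strain roughly equal to the drift ratio, so tau = G * gamma.
G_stone = 10e9          # Pa, assumed shear modulus for a dense stone
drift_ratio = 0.005     # 0.5% inter-story drift, a common design level
tau = G_stone * drift_ratio          # demand if the panel deforms fully
tau_capacity = 10e6                  # Pa, assumed stone shear strength

print(tau / 1e6, "MPa demand vs", tau_capacity / 1e6, "MPa capacity")
```

The large mismatch between demand and capacity is why the connection, by slipping, yielding, or adhesive flexibility, must isolate the stone from the full drift rather than transmit it.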
We find that for panel thicknesses below a certain threshold, it is more advantageous to utilize structural adhesive materials to connect stone panels to the main structural system of the building. For larger panel thicknesses, it is recommended to utilize mechanical connectors with special detailing to ensure a minimum level of ductility and energy dissipation.
Keywords: solid mechanics, cyclic loading, mechanical connectors, natural stone, seismic, wind, building skin
Procedia PDF Downloads 255
452 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from such models is cancer research. Recent advances have demonstrated that decellularized tissue is an excellent scaffold for tissue engineering: decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging proof-of-concept results. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models brings the morphological characteristics of the cultured cells closer to their in vivo counterparts, allows more accurate simulation of the processes within a functioning tumor and its pathogenesis, and permits more reliable study of migration and cell proliferation. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are therefore not suitable for accurately predicting the responses of living organisms.
3D models can provide several levels of complexity, from simple monocultures of cancer cell lines in a liquid environment with oxygen and nutrient gradients and cell-cell interaction, to more advanced models that include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be used to seed the matrix rather than monolayer cells, furthering the progress towards personalized medicine. As an initial step towards a new ex vivo tissue engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and to evaluate their potential as tissue engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animal kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system, and immersion-agitation on an orbital shaker, using various detergents (SDS, Triton X-100) in different concentrations as well as freezing. Acellular scaffolds and tissue engineered constructs have been characterized and compared using morphological methods. Models using decellularized matrix have certain advantages: they maintain native extracellular matrix properties and a biomimetic microenvironment for cancer cells; they are compatible with multiple cell types for cell culture and drug screening; and they can be used to culture patient-derived cells in vitro to evaluate anticancer therapeutics for the development of personalized medicines.
Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
Procedia PDF Downloads 301
451 The Importance of Clinical Pharmacy and Computer Aided Drug Design
Authors: Peter Edwar Mortada Nasif
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Using existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli, were collected to serve as a case study on how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but such treatments have their side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific for HCV. Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structures of the proteins, and then identified pockets in the proteins.
Acceptable ligands for the identified pockets have been developed using the de novo drug design method. Pocket geometry is taken into account when designing ligands. Among the various ligands generated, a new ligand specific for each of the HCV protein domains has been proposed.
Keywords: drug design, anti-viral drug, in silico drug design, hepatitis C virus, computer aided design, CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication
Procedia PDF Downloads 234
450 Reconceptualizing “Best Practices” in the Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box yet to be investigated, given the trend of continuous change in public sector performance, as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practice when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers’ management of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol whose first part was theoretically informed to identify causal mechanisms suggested by previous research studies, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.
Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 127
449 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus are taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse and will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention is dedicated to the structural mark-up, parts-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence is given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication.
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis of the data highlights the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics
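The abstract does not reproduce the InterDiplo tag set, but the kind of speaker and question/answer mark-up it describes can be sketched in miniature. The element and attribute names below (`exchange`, `utterance`, `role`, `type`) are invented for illustration and are not the project's actual schema; the sketch uses Python's standard `xml.etree.ElementTree` to pair a tagged question with its answer:

```python
# Hypothetical mark-up for one question/answer exchange; tag and
# attribute names are invented for illustration, NOT the InterDiplo tag set.
import xml.etree.ElementTree as ET

sample = """
<exchange id="ex1" event="press-briefing">
  <utterance who="journalist" role="interviewer" type="question">
    What is your assessment of the current negotiations?
  </utterance>
  <utterance who="diplomat" role="interviewee" type="answer">
    We remain optimistic that an agreement can be reached.
  </utterance>
</exchange>
"""

root = ET.fromstring(sample)

# Pair each question with the answer that follows it, as one might do
# when studying how interviewers elicit information.
utterances = root.findall("utterance")
pairs = [
    (q.text.strip(), a.text.strip())
    for q, a in zip(utterances, utterances[1:])
    if q.get("type") == "question" and a.get("type") == "answer"
]
for question, answer in pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
```

With mark-up of this shape, question/answer exchanges become directly queryable, which is what makes the annotated corpus usable for the pragmatic analysis the abstract describes.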
Procedia PDF Downloads 185
448 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatically negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin, with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems.
This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support; detachable and deployable logistics services; and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered with valuable insight and traceability. This type of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. Finally, sponsors are able to validate product deliverables with efficiency and much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
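The cost-readiness trade described above can be illustrated with a toy calculation. Operational availability is commonly defined as uptime over total time, Ao = MTBM / (MTBM + MDT). The support options, downtimes, and costs below are invented for illustration and are not Systecon data or the authors' model; a minimal sketch:

```python
# Toy cost/readiness trade-off: pick the cheapest support option that
# still meets a target operational availability. All figures are
# illustrative assumptions, not data from the abstract.

def operational_availability(mtbm_hours: float, mdt_hours: float) -> float:
    """Ao = mean time between maintenance / (MTBM + mean downtime)."""
    return mtbm_hours / (mtbm_hours + mdt_hours)

# (name, MTBM hours, mean downtime hours, annual support cost in $M)
options = [
    ("lean sparing",       500.0, 48.0, 1.2),
    ("regional warehouse", 500.0, 12.0, 2.0),
    ("on-site spares",     500.0,  4.0, 3.5),
]

target_ao = 0.95
feasible = [
    (name, operational_availability(mtbm, mdt), cost)
    for name, mtbm, mdt, cost in options
    if operational_availability(mtbm, mdt) >= target_ao
]
# Cheapest option that still meets the readiness target.
best = min(feasible, key=lambda row: row[2])
print(best)
```

This is the shape of the "dollars to readiness" question: each support posture maps to an availability and a cost, and the decision-maker trades one against the other.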
Procedia PDF Downloads 160
447 Nanomaterials for Archaeological Stone Conservation: Re-Assembly of Archaeological Heavy Stones Using Epoxy Resin Modified with Clay Nanoparticles
Authors: Sayed Mansour, Mohammad Aldoasri, Nagib Elmarzugi, Nadia A. Al-Mouallimi
Abstract:
The archaeological large stones used in the construction of ancient Pharaonic tombs, temples, obelisks and other sculptures are always subject to physicomechanical deterioration and destructive forces, leading to partial or total breakage. The task of reassembling this type of artifact represents a big challenge for conservators. Recently, researchers have turned to new technologies to improve the properties of the traditional adhesive materials and techniques used in the re-assembly of broken large stones. Epoxy resins are used extensively in stone conservation and the re-assembly of broken stone because of their outstanding mechanical properties. The introduction of nanoparticles into polymeric adhesives at low percentages may lead to substantial improvements in their mechanical performance in structural joints and large objects. The aim of this study is to evaluate the effectiveness of clay nanoparticles in enhancing the performance of epoxy adhesives used in the re-assembly of archaeological massive stone by adding proper amounts of those nanoparticles. The nanoparticle-reinforced epoxy nanocomposite was prepared by direct melt mixing with a nanoparticle content of 3% (w/v), then mould-formed into rectangular samples and used as an adhesive for experimental stone samples. Scanning electron microscopy (SEM) was employed to investigate the morphology of the prepared nanocomposites and the distribution of nanoparticles inside the composites. The stability and efficiency of the prepared epoxy nanocomposites and of the stone block assemblies bonded with the newly formulated adhesives were tested by artificially aging the samples under different environmental conditions. The effect of incorporating clay nanoparticles on the mechanical properties of epoxy adhesives was evaluated comparatively, before and after aging, by measuring tensile, compressive, and elongation strength.
The morphological studies revealed that the mixing process between epoxy and nanoparticles succeeded, with a relatively homogeneous morphology and good dispersion obtained at low nanoparticle loadings in the epoxy matrix. The results show that the epoxy-clay nanocomposites exhibited superior tensile, compressive, and elongation strength. Moreover, a marked improvement in the mechanical properties of the stone joints was observed in all states by adding nano-clay to the epoxy, in comparison with pure epoxy resin.
Keywords: epoxy resins, nanocomposites, clay nanoparticles, re-assembly, archaeological massive stones, mechanical properties
Procedia PDF Downloads 113
446 Bond Strength of Nano Silica Concrete Subjected to Corrosive Environments
Authors: Muhammad S. El-Feky, Mohamed I. Serag, Ahmed M. Yasien, Hala Elkady
Abstract:
Reinforced concrete requires steel bars in order to provide the tensile strength that is needed in structural concrete. However, when steel bars corrode, a loss of bond between the concrete and the steel bars occurs due to the formation of rust on the bar surface. Permeability is a fundamental property with respect to the durability of concrete, as it represents the ease with which water or other fluids can move through concrete, subsequently transporting corrosive agents. Nanotechnology is one of the most active research areas, encompassing various disciplines including construction materials. The application of nanotechnology to the corrosion protection of metals has lately gained momentum, as nano-scale particles have exceptional physical, chemical and physicochemical properties that may enhance corrosion protection in comparison to larger-size materials. The presented research aims to study the bond performance of concrete containing a relatively high volume of nano silica (up to 4.5%) exposed to corrosive conditions. This was extensively studied through tensile and bond strength tests as well as permeability measurements of the nano silica concrete. In addition, micro-structural analysis was performed in order to evaluate the effect of nano silica on the properties of concrete at both the micro and nano levels. The results revealed that, with the addition of nano silica, the permeability of the concrete mixes decreased significantly, reaching about 50% of the control mix with the addition of 4.5% nano silica. As for corrosion resistance, the nano silica concrete was comparatively more resistant than ordinary concrete. Increasing the nano silica percentage significantly increased the critical time corresponding to a metal loss of 50 µm, which usually corresponds to the first concrete cracking due to reinforcement corrosion, to reach about 49 years instead of 40 years for normal concrete.
Finally, increasing the nano silica percentage significantly increased the residual bond strength of concrete after being subjected to a corrosive environment. After exposure to the corrosive environment, pullout behavior was observed for the bars embedded in all of the mixes, instead of the splitting behavior observed before corrosion. Adding 4.5% nano silica to the concrete increased the residual bond strength to reach 79%, compared with only 27% for the control mix (0% nano silica), relative to values before exposure to the corrosive environment. From the conducted study, we can conclude that nano silica proved to be a significant pore-blocking material.
Keywords: bond strength, concrete, corrosion resistance, nano silica, permeability
Procedia PDF Downloads 309
445 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermally deficient buildings, i.e., those with Energy Performance Certificates F and G). This differs from traditional approaches that focus only on a few buildings or archetypes. The model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets, or between the financial support currently available to households and the remaining costs.
In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
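The kind of constrained selection an MILP formulation handles can be sketched in miniature. The buildings, savings figures, and yearly renovation cap below are invented; the actual model described above would run a MILP solver over the full national stock, but an exhaustive search over a toy stock shows the structure of the decision:

```python
# Toy version of the renovation-selection problem: choose which
# buildings to renovate so that energy savings are maximized without
# exceeding the yearly renovation cap. Figures are illustrative
# assumptions, not data from the study.
from itertools import combinations

# (building id, annual energy savings if renovated in MWh, EPC label)
stock = [
    ("b1", 12.0, "F"),
    ("b2",  7.5, "G"),
    ("b3",  5.0, "D"),
    ("b4",  9.0, "F"),
]
max_renovations_per_year = 2  # territorial constraint

# In MILP terms: maximize sum(s_i * x_i) subject to sum(x_i) <= cap,
# x_i in {0, 1}; the toy stock is small enough to enumerate instead.
best_plan, best_savings = (), 0.0
for k in range(max_renovations_per_year + 1):
    for plan in combinations(stock, k):
        savings = sum(s for _, s, _ in plan)
        if savings > best_savings:
            best_plan, best_savings = plan, savings

chosen = [bid for bid, _, _ in best_plan]
print(chosen, best_savings)
```

A real formulation would add further constraints (budgets, EPC F/G priority, territory-level quotas), but each is just another linear restriction on the same binary decision variables.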
Procedia PDF Downloads 68
444 Properties of Sustainable Artificial Lightweight Aggregate
Authors: Wasan Ismail Khalil, Hisham Khalid Ahmed, Zainab Ali
Abstract:
Structural Lightweight Aggregate Concrete (SLWAC) has been developed in recent years because it reduces the dead load, cost, thermal conductivity and coefficient of thermal expansion of the structure. SLWAC therefore has the advantage of being a relatively green building material. Lightweight Aggregate (LWA) either occurs as a natural material, such as pumice or scoria, or is produced artificially from raw materials such as expanded shale, clay or slate. The use of SLWAC in Iraq is limited due to the lack of natural LWA. The existence of Iraqi clay deposits of different types and characteristics led to the idea of producing artificial expanded clay aggregate. The main aim of this work is to present the properties of an artificial LWA produced in the laboratory. Locally available bentonite clay, which occurs in the western region of Iraq, was used as the raw material to produce the LWA. Sodium silicate, a liquid industrial waste material from a glass plant, was mixed with the bentonite clay in a mix proportion of 1:1 by weight. The manufacturing method of the lightweight aggregate includes preparation and mixing of the clay and sodium silicate, burning of the mixture in a furnace at temperatures between 750 and 800 ˚C for two hours, and finally a gradual cooling process. The produced LWA was then crushed into small pieces, screened on a standard sieve series, and prepared with a grading that conforms to the specifications for LWA. The maximum aggregate size used in this investigation is 10 mm. The chemical composition and the physical properties of the produced LWA were investigated. The results indicate that the specific gravity of the produced LWA is 1.5, with a density of 543 kg/m3 and water absorption of 20.7%, which is in conformity with the international standard for LWA. Many trial mixes were carried out in order to produce LWAC containing the artificial LWA produced in this research.
The selected mix proportion is 1:1.5:2 (cement:sand:aggregate) by weight, with a water-to-cement ratio of 0.45. The experimental results show that the LWAC has an oven-dry density of 1720 kg/m3, water absorption of 8.5%, thermal conductivity of 0.723 W/m.K and compressive strength of 23 N/mm2. The SLWAC produced in this research can be used in the construction of thermally insulated buildings and masonry units. It can be concluded that the SLWAC produced in this study contributes to sustainable development by using industrial waste materials, conserving energy, and enhancing the thermal and structural efficiency of concrete.
Keywords: expanded clay, lightweight aggregate, structural lightweight aggregate concrete, sustainable
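The stated proportions translate directly into batch quantities. As a small worked example (the 100 kg cement batch size is an arbitrary assumption; the 1:1.5:2 ratio and w/c = 0.45 are from the abstract):

```python
# Batch quantities from the mix proportion 1:1.5:2 (cement:sand:aggregate)
# by weight with w/c = 0.45, as reported in the abstract. The 100 kg
# cement batch size is an arbitrary illustrative choice.
cement = 100.0            # kg, chosen batch size
sand = 1.5 * cement       # kg, from the 1:1.5 cement:sand ratio
aggregate = 2.0 * cement  # kg, from the 1:2 cement:aggregate ratio
water = 0.45 * cement     # kg, from the water-to-cement ratio of 0.45

total = cement + sand + aggregate + water
print(f"cement={cement} kg, sand={sand} kg, "
      f"aggregate={aggregate} kg, water={water} kg, total={total} kg")
```

Any other batch size simply scales all four masses proportionally.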
Procedia PDF Downloads 328
443 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264
Authors: V. Ziegler, F. Schneider, M. Pesch
Abstract:
With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations on a near-reference-quality basis. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection
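How an optical particle counter's size-bin counts become a mass fraction such as PM2.5 can be sketched generically. This is a textbook calculation assuming spherical particles of unit density, not GRIMM's proprietary enviro-algorithm; the bin edges and counts are invented for illustration, while the 1.2 L/min flow is taken from the abstract:

```python
# Generic sketch of converting OPC size-bin counts to a mass
# concentration (e.g. PM2.5), assuming spherical particles of density
# 1.0 g/cm^3. NOT GRIMM's enviro-algorithm; bin edges and counts are
# invented for illustration.
import math

DENSITY = 1.0e3    # particle density, kg/m^3 (assumption)
FLOW_M3 = 1.2e-3   # sample flow of 1.2 L/min expressed in m^3/min

# (bin lower edge in um, bin upper edge in um, particles counted per minute)
bins = [
    (0.3, 0.5, 120_000),
    (0.5, 1.0, 30_000),
    (1.0, 2.5, 4_000),
    (2.5, 5.0, 500),   # above the PM2.5 cut-off
]

def pm_mass_ug_m3(cutoff_um: float) -> float:
    """Mass concentration from all bins whose upper edge is <= cutoff."""
    mass_kg = 0.0
    for lo, hi, count in bins:
        if hi > cutoff_um:
            continue
        d_m = ((lo + hi) / 2) * 1e-6      # mean bin diameter, m
        volume = math.pi / 6 * d_m ** 3   # sphere volume, m^3
        mass_kg += count * volume * DENSITY
    return mass_kg / FLOW_M3 * 1e9        # kg/m^3 sampled -> ug/m^3

print(f"PM2.5 = {pm_mass_ug_m3(2.5):.1f} ug/m3")
```

A real instrument additionally applies empirical density and shape corrections per size channel, which is exactly what a calibrated algorithm like GRIMM's contributes over this naive sum.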
Procedia PDF Downloads 151
442 The Awareness of Sustainability Concerns in Design Studio Education Process: A Case from TOBB ETU University, Interior Architecture Department in Turkey
Authors: Pelin Atav, Gözen Güner Aktaş, Nur Ayalp
Abstract:
Today’s design world has begun to develop the design process within an interdisciplinary working environment, with the aim of creating designs that endure into the future. While satisfying people’s needs, the relationship between people and the environment should be considered. When this relationship is considered with the future in mind, the term sustainability comes to mind. The term has been adopted well by designers and architects and is one of the main and significant parts of the design process. As the education process cultivates future professionals, awareness of these concepts during education has vital importance. The question is thus stated: are the 3rd- and 4th-year design studio students familiar with, and sensitive to, the concept of sustainability in the TOBB ETU University interior design studio? Design studios and their instructors should be taken into consideration when sustainability is taught, because the term cannot be learned without application in the real world. While carrying out this study, students have the chance to explore the topic of sustainability step by step. Because of its broad scope, sustainability is quite a comprehensive issue. In order not to create negative consequences, designers and architects work by adopting this term: materials, construction processes, lighting, building services, furniture, the systems that are used, and energy consumption are all considered, with the aim of producing positive outcomes for the future. This research examines how university education shapes designers’ work in terms of sustainability. By being given a project whose main interest lies in the field of sustainability, students are expected to reach well-considered results and analyses. The project process was conducted with instructors and students working together; following critiques from their instructors, students try to produce well-designed results.
TOBB ETU University, situated in Ankara, Turkey, was chosen as the research setting. Third- and fourth-year students of the Interior Architecture Department, Faculty of Fine Arts, Design and Architecture, are the subject group selected for this study. The aim of this study is to demonstrate sustainability as a term that has application in the design studio. Thus, awareness of the term sustainability is evaluated, and its development during university education is observed. Consequently, the expected results concern how sustainability is handled in the projects, and whether awareness of sustainability in the design studios and their projects has been sufficient or not.
Keywords: design education, design process, interior design studios, sustainability
Procedia PDF Downloads 283
441 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System
Authors: Wei Ting Han, Shiann-Far Kung
Abstract:
In earlier times, most of the arable land in Tainan County was dry-farmed, using rainfall as the water source for irrigation. After the Chia-nan Irrigation System (CIS) was completed in 1930, the limited water resources of the Chia-nan Plain could be allocated more efficiently for irrigation, thanks to irrigation systems, drainage systems, and land improvement projects. The long-standing problems of drought, flooding and salt damage were also alleviated by the CIS. The canal greatly increased the paddy field area and agricultural output, and Tainan County became one of the important agricultural producing areas in Taiwan. With the development of water conservancy facilities, and affected by national policies and other factors, many agricultural communities and settlements formed indirectly, which also promoted changes in settlement patterns and internal structures. With the development of historical geographic information systems (HGIS), Academia Sinica developed a WebGIS theme with century-old maps of Taiwan, the most complete historical map database in Taiwan. It can be used to overlay historical maps of different periods, present the timeline of settlement change, grasp changes in the natural environment as well as in the social sciences and humanities, and visualize the changes in the settlements. This study explores the historical development and spatial characteristics of the settlements in various areas of Tainan County, using large-scale areas to explore the settlement changes and spatial patterns of the entire county through the dynamic spatio-temporal evolution from Japanese rule to the present day. Settlements from different periods were then digitized to perform overlay analysis using the Taiwan historical topographic maps of 1904, 1921, 1956 and 1989. Moreover, document analysis was used to analyze the temporal and spatial changes of the regional environment and settlement structure.
In addition, comparative analysis is used to classify the spatial characteristics of, and differences between, the settlements, exploring the influence of external environments in different temporal and spatial contexts, such as government policies, major construction, and industrial development. This paper helps to understand the evolution of settlement space and the internal structural changes in Tainan County.
Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County
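The overlay analysis described above can be sketched without GIS software by reducing each period's settled area to a set of occupied grid cells. The cell coordinates below are invented; a real HGIS workflow would rasterize the digitized 1904/1921/1956/1989 map layers instead:

```python
# Minimal sketch of overlay analysis between two survey years: each
# settlement layer is a set of occupied grid cells. Coordinates are
# invented for illustration, not digitized Tainan County data.
settled_1904 = {(0, 0), (0, 1), (1, 1)}
settled_1956 = {(0, 1), (1, 1), (1, 2), (2, 2)}

persisted = settled_1904 & settled_1956   # cells settled in both years
expanded  = settled_1956 - settled_1904   # newly settled cells
abandoned = settled_1904 - settled_1956   # cells no longer settled

print(f"persisted={sorted(persisted)}")
print(f"expanded={sorted(expanded)}")
print(f"abandoned={sorted(abandoned)}")
```

Chaining the same set operations across all four map years yields the timeline of settlement persistence, expansion, and abandonment that the study visualizes.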
Procedia PDF Downloads 128
440 Small Community’s Proactive Thinking to Move from Zero to 100 Percent Water Reuse
Authors: Raj Chavan
Abstract:
The City of Jal serves a population of approximately 3,500 people, including 2,100 permanent inhabitants and 1,400 oil and gas sector workers and RV park occupants. Over the past three years, Jal's population has increased by about 70 percent, mostly due to the oil and gas industry. The City anticipates that the population will exceed 4,200 by 2020, necessitating the construction of a new wastewater treatment plant (WWTP) because the old plant (aerated lagoon system) cannot accommodate such rapid population expansion without major renovations or replacement. Adhering to discharge permit restrictions has been challenging due to aging infrastructure and equipment replacement needs, as well as increasing nutrient loading to the wastewater collecting system from the additional oil and gas residents' recreational vehicles. The WWTP has not been able to maintain permit discharge standards for total nitrogen of less than 20 mg N/L and other characteristics in recent years. Based on discussions with the state's environmental department, it is likely that the future permit renewal would impose stricter conditions. Given its location in the dry, western part of the country, the City must rely on its meager groundwater supplies and scant annual precipitation. The city's groundwater supplies will be depleted sooner than predicted due to rising demand from the growing population for drinking, leisure, and other industrial uses (fracking). The sole type of reuse the city was engaging in (recreational reuse for a golf course) had to be put on hold because of an effluent water compliance issue. As of right now, all treated effluent is evaporated. The city's long-term goal is to become a zero-waste community that sends all of its treated wastewater effluent either to the golf course, Jal Lake, or the oil and gas industry for reuse. Hydraulic fracturing uses a lot of water, but if the oil and gas industry can use recycled water, it can reduce its impact on freshwater supplies. 
The City's goal of 100% reuse has been delayed by the difficulty of meeting the conditions of the regular discharge permit, given the large rise in influent loads and the aging infrastructure. The City of Jal plans to build a new WWTP that can keep up with the city's rapid population increase driven by the oil and gas industry. Several treatment methods were considered in light of the City's needs and its long-term goals, but a membrane bioreactor (MBR) was ultimately recommended since it meets all of the permit's requirements while also providing 100 percent beneficial reuse. This talk will lay out the plan for the city to reach its goal of 100 percent reuse, as well as the various avenues that have been considered for funding the small community.
Keywords: membrane bioreactor, nitrogen, reuse, small community
Procedia PDF Downloads 87
439 Rewriting the 'Sick Man' History: Imagining Chinese Masculinity in the Contemporary Military Action Genre
Authors: Yongde Dai
Abstract:
The recent Chinese military action blockbusters, notably Wolf Warrior/Zhan Lang (2015), Operation Mekong/Mei gong he xing dong (2016), Wolf Warrior 2/Zhan Lang 2 (2017) and Operation Red Sea/Hong hai xing dong (2018), have achieved phenomenal box-office success; Wolf Warrior 2 in particular became China's highest-grossing film of all time. Their yearly appearance suggests a paradigmatic shift from China's traditional primacy of wen (soft) manliness to wu (hard) masculinity. Alongside the increasing cinematic exposure of a more muscular image, manifested both in the Chinese heroic soldiers and in China itself as a rising global power, a public backlash against the proliferation of the feminized masculinity associated with 'pretty-boy' pop culture, together with China's harder line in the current Sino-US tensions, has emerged and continued to brew. The Chinese masculinity imagined in these films is one of the key factors enabling a gendered interpretation of the correlation between Chinese on-screen fantasy and off-screen reality, that is, between China's public and official discourse on hegemonic and non-hegemonic masculinity and China's international profile, both on screen and in today's Sino-US relations. By reading the four megahits closely as audio-visual texts through the Chinese masculinity studies of Kam Louie and Geng Song, this paper examines the Chinese construction of manliness against its historical background and argues why and how the recurrent emphasis on hard/military (wu) masculinity on screen can be viewed as China's contemporary rewriting of the 'sick-man' history in film form. Through this investigation, the paper finds that the cinematic rewriting of the 'sick-man' history through heroic, brawny soldiers resonates with a collective anxiety in China about countering the increasing feminization of masculinity in public life, particularly among male celebrities. 
In addition, the superpower fantasy about China illuminates a hypermasculine imaginary of China as a rising global power, which echoes China's current tougher diplomatic strategy in tackling the Sino-US trade war, the South China Sea dispute and the Huawei-US lawsuits.
Keywords: Chinese masculinity, Chinese military action film, feminized masculinity, manhood and nationhood, sick man of Asia
Procedia PDF Downloads 146
438 A Pilot Study on the Development and Validation of an Instrument to Evaluate Inpatient Beliefs, Expectations and Attitudes toward Reflexology (IBEAR)-16
Authors: Samuel Attias, Elad Schiff, Zahi Arnon, Eran Ben-Arye, Yael Keshet, Ibrahim Matter, Boker Lital Keinan
Abstract:
Background: Despite the extensive use of manual therapies, reflexology in particular, no validated tools have been developed to evaluate patients' beliefs, attitudes and expectations regarding reflexology. Such tools, however, are essential for improving the results of reflexology treatment by better adjusting it to patients' attitudes and expectations, and for assessing correlations with the clinical results of interventional studies using reflexology. Methods: The IBEAR (Inpatient Beliefs, Expectations and Attitudes toward Reflexology) tool contains 25 questions (8 demographic and 17 specifically addressing reflexology) and was constructed in several stages: brainstorming by a multidisciplinary team of experts; evaluation of each proposed question by the expert team; and assessment of the experts' degree of agreement on each question on a 1-7 Likert scale (1 – do not agree at all; 7 – agree completely). Cronbach's alpha was computed to evaluate the questionnaire's reliability, while factor analysis was used for further validation (228 patients). The questionnaire was tested and re-tested after 48 hours on a group of 199 patients to assure clarity and reliability, using the Pearson coefficient and the kappa test, and was modified into its final form based on these results. Results: After its construction, the IBEAR questionnaire passed the expert group's preliminary consensus, evaluation of the questions' clarity (from 5.1 to 7.0), inner validation (from 5.5 to 7) and structural validation (from 5.5 to 6.75). Factor analysis pointed to two content domains: four questions addressing attitudes and expectations versus five questions on beliefs and attitudes. For the 221 questionnaires collected, a Cronbach's alpha coefficient was calculated on the nine questions relating to beliefs, expectations, and attitudes regarding reflexology; this measure stood at 0.716 (satisfactory reliability). 
At the test-retest stage, 199 research participants filled in the questionnaire a second time. The Pearson coefficient for all questions ranged between 0.73 and 0.94 (good to excellent reliability). For dichotomous answers, kappa scores ranged between 0.66 and 1.0 (mediocre to high). One question was removed from the IBEAR following questionnaire validation. Conclusions: The present study provides evidence that the proposed IBEAR-16 questionnaire is a valid and reliable tool for the characterization of potential reflexology patients and may be effectively used in settings that include the evaluation of inpatients' beliefs, expectations, and attitudes toward reflexology.
Keywords: reflexology, attitude, expectation, belief, CAM, inpatient
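The internal-consistency measure used above, Cronbach's alpha, can be reproduced in a few lines. A minimal sketch; the respondent data below are invented for illustration and are not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items
    item_vars = scores.var(axis=0, ddof=1)   # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering 3 Likert-scale items
data = [[5, 4, 5],
        [3, 3, 4],
        [6, 5, 6],
        [2, 3, 2],
        [4, 4, 5],
        [5, 5, 6]]
print(round(cronbach_alpha(data), 3))
```

Values around 0.7 or above, as reported for the IBEAR, are conventionally read as satisfactory reliability.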
Procedia PDF Downloads 228
437 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task
Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli
Abstract:
Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented step by step, so that a specific sequence of actions must be performed in order to produce the expected outcome. But as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? And if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, by treating the modular architecture of stereotyped strategies as a mixture of experts, we can simultaneously query the experts for the user's most probable future actions. 
We show that for those participants who learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
Keywords: behavioral modeling, expertise acquisition, hidden Markov models, sequential decision-making
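The prediction step described in the abstract can be sketched as a forward pass over a small HMM whose hidden states stand in for the stereotyped strategies. All parameters below are illustrative placeholders, not the fitted model from the study:

```python
import numpy as np

# Hypothetical two-strategy HMM: observations are binary choices
# (0 = left branch, 1 = right branch of the decision tree).
pi = np.array([0.6, 0.4])              # initial strategy probabilities
A  = np.array([[0.8, 0.2],             # strategy transition matrix
               [0.3, 0.7]])
B  = np.array([[0.9, 0.1],             # P(choice | strategy): "exploit" state
               [0.5, 0.5]])            # "explore" state picks at random

def predict_next(obs):
    """Forward algorithm; returns P(next observation | history)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    alpha /= alpha.sum()               # posterior over the current strategy
    return (alpha @ A) @ B             # distribution over the next choice

history = [0, 0, 1, 0, 0]
p = predict_next(history)
print(p)                               # probabilities for choices 0 and 1
```

With a mostly left-branching history, the model assigns the higher probability to another left choice, which is the kind of above-chance next-decision prediction the abstract reports.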
Procedia PDF Downloads 252
436 Modification of Carbon-Based Gas Sensors for Boosting Selectivity
Authors: D. Zhao, Y. Wang, G. Chen
Abstract:
Gas sensors that utilize carbonaceous materials as sensing media offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), give these sensors high sensitivity. Carbon-based sensors also possess other advantageous properties that enhance their performance, including high stability, low power consumption, and cost-effective construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistance-type carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims to address the selectivity issue by modifying the carbon-based sensing materials and enhancing the sensors' specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO2), hydrogen (H2), nitric oxide (NO), and nitrogen dioxide (NO2). The concentrations of the sample gases were all set at 50 parts per million (ppm). Nitrogen (N2) was used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows. 
In Group 1, the sensors exhibited selectivity toward CO2, acetone, NO, and NO2, with NO2 showing the highest response. Group 2 responded primarily to NO2. Group 3 responded to the nitrogen oxides, i.e., both NO and NO2, with NO2 slightly surpassing NO in sensitivity. Group 4 demonstrated the highest sensitivity of all the groups toward NO and NO2, with NO2 again the more sensitive. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond well to NOx gases with great selectivity and without interference from other gases. Because the response levels to NO and NO2 differ between groups, the individual concentrations of NO and NO2 can be deduced.
Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials
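Deducing the individual NO and NO2 concentrations from two sensor groups with different sensitivities amounts to solving a small linear system. A minimal sketch with purely hypothetical calibration coefficients (the abstract does not report the actual sensitivities):

```python
import numpy as np

# Hypothetical calibration: response per ppm of NO and NO2 for two sensor
# groups with different relative sensitivities (illustrative values only).
S = np.array([[0.8, 1.4],    # group A: response = 0.8*[NO] + 1.4*[NO2]
              [1.1, 2.0]])   # group B: response = 1.1*[NO] + 2.0*[NO2]
measured = np.array([87.0, 123.0])   # observed responses of the two groups

conc = np.linalg.solve(S, measured)  # recovered [NO], [NO2] in ppm
print(conc)
```

The system is solvable precisely because the two groups' sensitivity ratios to NO and NO2 differ; identical ratios would make the matrix singular.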
Procedia PDF Downloads 127
435 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level
Authors: M. A. Spielmann, L. Schebek
Abstract:
In order to reach the German government's long-term national climate goals for the building sector, substantial energy measures have to be executed. Historically, those measures were primarily energy-efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for unrefurbished German residential buildings. Three distinct elements have to be described methodologically: i) the quarter approach, ii) the economic assessment, iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters are used, differentiated according to the parameters most significant for their heat demand; the core differentiation is the construction period of the buildings. The economic assessment, as the second crucial element, has the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled as debt-financed, assuming annuity loans. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment carries out a life cycle assessment for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other with respect to their long-term climatic impacts. 
Core results of the approach can be differentiated into an economic and an ecological dimension. With annual resolution, the investment and running costs of different energy technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energy refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level
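The annuity loans assumed for the investment costs follow the standard annuity formula, which converts a principal into a constant annual payment. A minimal sketch; the principal, interest rate and term below are illustrative, not figures from the study:

```python
def annuity_payment(principal, rate, years):
    """Constant annual payment of an annuity loan:
    a = P * r / (1 - (1 + r) ** -n)
    """
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical example: a 500,000 EUR investment financed at 3% over 20 years
pay = annuity_payment(500_000, 0.03, 20)
print(round(pay, 2))
```

By construction, the present value of all payments discounted at the loan rate equals the principal, which is what makes the annual cost figures of different technology combinations comparable on an annual basis.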
Procedia PDF Downloads 224
434 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation
Authors: Onur Ozveri, Korkut Karabag, Cagri Keles
Abstract:
Production downtime and repair costs incurred when machines fail are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machine should have priority for repair, how to determine the optimal repair time to allot to these machines, and how to plan the resources needed for the repairs. In recent years, the Business Process Management (BPM) technique has brought effective solutions to various problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair is carried out as a series of process steps. Since the maintenance main process and the repair sub-process can be evaluated with the process management technique, it was thought that this structure could provide a solution. For this reason, the issue was discussed in an international manufacturing company and a proposal for a solution was developed. The purpose of this study is the implementation of maintenance and repair work integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters such as quality, cost, time, safety and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, and safety and thermal systems for the world's leading light and heavy vehicle manufacturers. In the firm, a project team was first established. The team examined the current maintenance process and revised it using process management techniques; the repair process, a sub-process of the maintenance process, was reworked in the same way. 
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines will be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the data obtained were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, at lower cost and with a lower spare-parts inventory.
Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance
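The ABC classification used here can be sketched as a ranking by criticality score followed by a cumulative-share split. The machine names, scores and cut-off percentages below are hypothetical illustrations, not data from the firm:

```python
def abc_classify(scores, a_cut=0.8, b_cut=0.95):
    """Return {machine: 'A'|'B'|'C'} by cumulative share of the total score.

    Machines are ranked by a criticality score (e.g. annual downtime cost);
    the top ~80% of cumulative score is class A, the next ~15% class B,
    and the remainder class C.
    """
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    classes, cum = {}, 0.0
    for machine, score in ranked:
        cum += score / total
        classes[machine] = 'A' if cum <= a_cut else 'B' if cum <= b_cut else 'C'
    return classes

# Hypothetical criticality scores (e.g. annual downtime cost in EUR)
scores = {'press': 50_000, 'lathe': 20_000, 'mill': 12_000,
          'drill': 5_000, 'saw': 2_000, 'grinder': 1_000}
print(abc_classify(scores))
```

In a multi-failure situation, class A machines would then be repaired first, class B next, class C last.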
Procedia PDF Downloads 194
433 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications
Authors: Xianwei Zheng, Yuan Yan Tang
Abstract:
The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using translation and modulation operators for graph signals, following calculations similar to those of the classical windowed Fourier transform. Specifically, the translation and modulation operators of graph signals are defined using the Laplacian eigenvectors: whereas the classical translation operator can be expressed in terms of the Fourier atoms, graph signal translation is defined analogously using the Laplacian eigenvectors, and the modulation of a graph signal can likewise be established using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. In the existing definition, the modulation operator mimics classical modulation by multiplying a graph signal with the entries of each Laplacian eigenvector. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed. 
The relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed: necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g. the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis
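The duality described in the abstract can be illustrated numerically. A minimal sketch on a path graph, where a circular shift of the graph Fourier (GFT) coefficients stands in for the spectral translation; this shift is an assumption for illustration only, not the paper's exact translation operator:

```python
import numpy as np

n = 8
# Path-graph Laplacian: degree 2 inside, degree 1 at the two endpoints
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
lam, U = np.linalg.eigh(L)          # Laplacian eigenvectors form the GFT basis

def gft(f):
    return U.T @ f                  # graph Fourier transform

def igft(fh):
    return U @ fh                   # inverse graph Fourier transform

def modulate(f, k):
    """Modulation by k: translate (shift) the GFT coefficients, go back."""
    return igft(np.roll(gft(f), k))

f = np.random.default_rng(0).standard_normal(n)
g = modulate(f, 2)
# The GFT basis is orthonormal, so this modulation preserves the signal norm.
print(np.allclose(np.linalg.norm(g), np.linalg.norm(f)))
```

Because the eigenvector matrix is orthonormal, the operation is unitary, which is the kind of structural property one needs when building frames from such operators.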
Procedia PDF Downloads 342