255 Heat Loss Control in Stave Cooled Blast Furnace by Optimizing Gas Flow Pattern through Burden Distribution
Authors: Basant Kumar Singh, S. Subhachandhar, Vineet Ranjan Tripathi, Amit Kumar Singh, Uttam Singh, Santosh Kumar Lal
Abstract:
Productivity of a Blast Furnace is largely impacted by fuel efficiency, and controlling heat loss is one of the enabling parameters for achieving a lower fuel rate. 'I' Blast Furnace is the latest and largest Blast Furnace of Tata Steel Jamshedpur, with a working volume of 3230 m³ and a rated capacity of 3.055 million tons per annum. Optimizing heat losses in the Belly and Bosh zones remained a major challenge for blast furnace operators after its commissioning. 'I' Blast Furnace is fitted with cast iron and copper stave cooling members: copper staves are installed in the Belly, Bosh, and Lower Stack, whereas cast iron staves are installed in the upper stack area. Stave-cooled Blast Furnaces are prone to higher heat losses in the Belly and Bosh region with an increase in coal injection rate, as the Bosh gas volume increases. Under these conditions, managing the gas flow pattern through proper burden distribution, casting techniques, and maintenance of the desired raw material qualities is of utmost importance for sustaining high injection rates. This study details the burden distribution control by ore and coke ratio adjustment at the wall and center of the Blast Furnace as the coal injection rate increased from 140 kg/thm to 210 kg/thm. Control of blowing parameters, casting philosophy, specifications for raw materials, and the operational practices devised for controlling heat losses are also elaborated, together with the model used to visualize the heat loss pattern in different zones of the Blast Furnace.
Keywords: blast furnace, staves, gas flow pattern, belly/bosh heat losses, ore/coke ratio, blowing parameters, casting, operation practice
Procedia PDF Downloads 374
254 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the technique uses high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed by an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As application of the optical flow technique to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be refined for greater robustness and accuracy. Details of the methodology employed and challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
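The Horn-Schunck method referenced above can be sketched compactly: it alternates between a local average of the flow field (enforcing the smoothness term) and a pointwise correction from the brightness-constancy constraint. The sketch below is a generic NumPy implementation for illustration only, not the authors' adapted algorithm; the regularization weight `alpha` and iteration count are placeholder values.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate dense optical flow (u, v) between two grayscale frames
    with the classic Horn-Schunck iteration. Illustrative sketch."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Central-difference spatial gradients (averaged over both frames)
    # and the temporal gradient.
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # 4-neighbour average approximates the smoothness (Laplacian) term.
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # Brightness-constancy correction shared by both components.
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v
```

Applied to a pair of synthetic frames in which a feature shifts by one pixel, the recovered flow points in the direction of motion, which is exactly the kind of synthetic-image validation the abstract describes.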
Procedia PDF Downloads 312
253 Ethnobotanical Medicines for Treating Snakebites among the Indigenous Maya Populations of Belize
Authors: Kerry Hull, Mark Wright
Abstract:
This paper sheds light on ethnobotanical medicines used by the Maya of Belize to treat snake bites. The varying ecological zones of Belize boast over fifty species of snakes, nine of which are poisonous and dangerous to humans. Two distinct Maya groups occupy neighboring regions of Belize: the Q’eqchi’ and the Mopan. With Western medical care often far from their villages, what traditional methods are used to treat poisonous snake bites? Based primarily on data gathered with native consultants during the authors’ fieldwork with both groups, this paper details the ethnobotanical resources used by the Q’eqchi’ and Mopan traditional healers. The Q’eqchi’ and Mopan most commonly rely on traditional ‘bush doctors’ (ilmaj in Mopan), both male and female, and specialized ‘snake doctors’ to heal bites from venomous snakes. First, this paper presents each plant employed by healers for bites from the nine poisonous snakes in Belize, along with the specific botanical recipes and methods of application for each remedy. Individual chemical and therapeutic qualities of some of those plants are investigated in an effort to explain their possible medicinal value for different toxins or the symptoms caused by those toxins. In addition, this paper explores mythological associations with certain snakes that inform local understanding regarding which plants are considered efficacious in each case, arguing that numerous oral traditions (recorded by the authors) help to link botanical medicines to episodes within their mythic traditions. Finally, the use of plants to counteract snakebites brought about through sorcery is discussed inasmuch as some snakes are seen as ‘helpers’ of sorcerers. Snake bites given under these circumstances can only be cured by those who know both the proper corresponding plant(s) and ritual prayer(s). 
This paper provides detailed documentation of traditional ethnomedicines and practices from the dying art of traditional Maya healers and argues for multi-faceted diagnostic techniques to determine toxin severity, the presence or absence of sorcery, and the appropriate botanical remedy.
Keywords: ethnobotany, Maya, medicine, snake bites
Procedia PDF Downloads 237
252 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling
Authors: Aamna Lawrence, Ashutosh Mishra
Abstract:
Tremors occur in 60% of patients who have Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system, and are a primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern that typically manifests in MS in a nerve fibre, using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each having random length and myelin thickness. The arrival time of action potentials traveling along the demyelinated and the normally myelinated nerve fibres between two fixed points in space was noted, and its relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. Interestingly, there were no overlaps between the arrival times of action potentials traversing the demyelinated and normally myelinated nerve fibres, even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, to block only the delayed tremor-causing action potentials. 
The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.
Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor
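The Hodgkin-Huxley formalism underlying the model can be illustrated with a single-compartment simulation using the standard squid-axon parameters and forward-Euler integration. The study's high-density, myelin-modified multi-internode variant is considerably more elaborate, so the sketch below is only a minimal baseline, not the authors' model; the injected-current value is a placeholder.

```python
import math

def vtrap(x, y):
    # Handles the 0/0 singularity in the gate rate equations.
    if abs(x / y) < 1e-6:
        return y * (1.0 - x / y / 2.0)
    return x / (math.exp(x / y) - 1.0)

def simulate_hh(i_inj=10.0, t_max=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron, membrane potential in mV,
    time in ms. Returns the voltage trace under a constant current step."""
    c_m = 1.0                                # membrane capacitance, uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3        # peak conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387    # reversal potentials, mV
    v = -65.0                                # resting potential
    m, h, n = 0.0529, 0.5961, 0.3177         # steady-state gates at rest
    trace = []
    for _ in range(int(t_max / dt)):
        a_m = 0.1 * vtrap(-(v + 40.0), 10.0)
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * vtrap(-(v + 55.0), 10.0)
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        i_na = g_na * m**3 * h * (v - e_na)  # sodium current
        i_k = g_k * n**4 * (v - e_k)         # potassium current
        i_l = g_l * (v - e_l)                # leak current
        v += dt * (i_inj - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace
```

With a 10 µA/cm² current step the compartment fires action potentials (the voltage crosses 0 mV); with no input it stays at rest. Comparing spike arrival times between compartments with modified (demyelinated) parameters is the essence of the delay analysis described above.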
Procedia PDF Downloads 128
251 Role of Cognitive Flexibility and Employee Engagement in Determining Turnover Intentions of Employees
Authors: Prashant Das, Tushar Singh, Virendra Byadwal
Abstract:
The present study attempted to understand the role of cognitive flexibility and employee engagement in predicting employees’ turnover intentions. Employee turnover is a significant problem that many organizations are facing these days. It is not only extremely expensive for the employer but also results in poor production levels. In developing countries like India, organizations once believed to have the most stable employees are facing major turnover problems. Banking organizations are one such case. Due to globalization, banks are now changing their work scenarios, under which employees have many different roles to perform. Cognitive flexibility, which refers to an individual’s ability to shift cognitive sets and to adapt to one’s changing environment, thus seems to be an important factor in employee turnover in organizations. It is hypothesized that those with higher cognitive flexibility would be more able to adapt to the changing work demands of organizations and would thus show fewer turnover intentions. Another factor that seems to be important in predicting turnover is employee engagement. Kahn referred to engagement in terms of the harnessing of organization members’ selves to their work roles, [by which they] employ and express themselves physically, cognitively, and emotionally during role performances. Studies have shown a strong relationship between employee engagement and turnover intentions: those with higher engagement with their jobs have been found to show low turnover intentions. This study thus hypothesizes that employees with higher engagement will show lower levels of turnover intentions. A total of 150 bank employees (75 from private and 75 from public banks) participated in this study. They were administered the Cognitive Flexibility Scale, the Gallup Questionnaire, and the Intention to Stay Questionnaire, along with another questionnaire asking for their demographic details. 
Results of the study revealed that employees with higher levels of cognitive flexibility and employee engagement show lower levels of turnover intentions. However, the effect is more prominent among employees of private banks. Demographic characteristics such as the employee's level and years in the current job were also found to influence the relationship between cognitive flexibility, employee engagement, and turnover intentions. Results of the study are interpreted in accordance with the prevalent literature and theoretical positions.
Keywords: cognitive flexibility, employee engagement, organization, turnover intentions
Procedia PDF Downloads 423
250 An Inquiry on Imaging of Soft Tissues in Micro-Computed Tomography
Authors: Matej Patzelt, Jana Mrzilkova, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Zdenek Wurst, Petr Zach, Vladimir Musil
Abstract:
Introduction: Micro-CT is well established for the examination of bone structures and teeth. On the other hand, visualization of soft tissues is still limited. The goal of our study was to elaborate a methodology for imaging soft tissue samples in micro-CT. Methodology: We used organs of rats and mice. We either prepared the organs and fixed them in contrast solution, or cannulated and injected the blood vessels for imaging of the vascular system. First, we scanned native specimens; then we created corrosive specimens with resins. In the next step, we injected the vascular system with either the AuroVist or the Exitron contrast agent. We then focused on increasing soft tissue contrast, scanning samples fixed in Lugol solution, in pure ethanol, and in formaldehyde solution. All the methods used were afterwards compared. Results: Native specimens did not provide sufficient tissue contrast in any of the organs. Corrosive samples of the bloodstream provided great contrast and detail; on the other hand, it was necessary to destroy the organ. A further examined possibility was injection of the AuroVist contrast, which leads to great bloodstream contrast. Injection of the Exitron contrast agent did not provide as great a contrast as AuroVist. The soft tissues (kidney, heart, lungs, brain, and liver) were best visualized after fixation in ethanol; this type of fixation showed the best results in all studied tissues. Lugol solution gave great results in muscle tissue. Fixation in formaldehyde solution showed a quality of tissue contrast similar to ethanol. Conclusion: Before imaging, we first need to determine which structures of the soft tissues we want to visualize. In the case of the bloodstream, the best were AuroVist and corrosive specimens. Muscle tissue is best visualized with Lugol solution. 
In the case of the organs containing cavities, like kidneys or brain, the best way was ethanol fixation.
Keywords: experimental imaging, fixation, micro-CT, soft tissues
Procedia PDF Downloads 325
249 Worldbuilding as Critical Architectural Pedagogy
Authors: Jesse Rafeiro
Abstract:
This paper discusses worldbuilding as a pedagogical approach to the first-year architectural design studio. The studio ran for three consecutive terms from 2016 to 2018. Taking its departure from the fifty-five city narratives in Italo Calvino’s Invisible Cities, students collectively designed in a “nowhere” space where intersecting and diverging narratives could be played out. Along with Calvino, students navigated between three main exercises and their imposed limits to develop architectural insight at three scales simulating the considerations of architectural practice: detail, building, and city. The first exercise asked each student to design and model a ruin based on randomly assigned incongruent fragments. Each student was given one plan fragment and two section fragments from different Renaissance Treatises. The students were asked to translate these through alternating axonometric projection and model-making explorations. Although the fragments themselves were imposed, students were free to interpret how the drawings fit together by imagining new details and atypical placements. An undulating terrain model was introduced in the second exercise to ground the worldbuilding exercises. Here, students were required to negotiate with one another to design a city of ruins. Free to place their models anywhere on the site, the students were restricted by the negotiation of territories marked by other students and the requirement to provide thresholds, open spaces, and corridors. The third exercise introduced new life into the ruined city through a series of design interventions. Each student was assigned an atypical building program suggesting a place for an activity, human or nonhuman. The atypical nature of the programs challenged the triviality of functional planning through explorations in spatial narratives free from preconceived assumptions. 
By contesting, playing out, or dreaming responses to realities taught in other coursework, this third exercise actualized learnings that too often remain self-contained in the silos of differing course agendas. As such, the studio fostered an initial worldbuilding space within which to sharpen sensibility and criticality for subsequent years of education.
Keywords: architectural pedagogy, critical pedagogy, Italo Calvino, worldbuilding
Procedia PDF Downloads 131
248 Testing and Validation Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
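Although the study's examples are in R, the testing strategy it describes — isolating the stochastic element, checking deterministic special cases exactly, validating the random part with large samples, and rejecting invalid inputs defensively — can be sketched in any language. The `infect` function below and its signature are our own illustration, not taken from the paper.

```python
import random

def infect(n_susceptible, p_infect, rng=random):
    """One stochastic transmission step: each susceptible individual is
    independently infected with probability p_infect. Returns the number
    of new infections. Illustrative sketch of a stochastic model component."""
    if not 0.0 <= p_infect <= 1.0:
        # Defensive programming: fail loudly on invalid input.
        raise ValueError("p_infect must lie in [0, 1]")
    return sum(rng.random() < p_infect for _ in range(n_susceptible))
```

Testing then splits by component type: the deterministic special cases (p = 0, p = 1) are checked exactly, while the stochastic case is checked statistically by averaging many replicates against the expected value.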
Procedia PDF Downloads 6
247 Lessons from Seven Years of Teaching Mindfulness to Children Living in a Context of Vulnerability
Authors: Annie Devault
Abstract:
Mindfulness-based interventions (MBIs) can be beneficial for the well-being of children. MBIs offered to children in contexts of vulnerability (poverty, neglect) report positive results in terms of emotion regulation and cognitive flexibility. Anxiety is a common issue for children living in a vulnerable context; it has a negative impact on children’s attention span, emotional regulation, and self-esteem. The 12-week MBI associated with this research has been offered over the last seven years to a total of 30 children (7 to 9 years old) suffering from anxiety and receiving services from a community center. The first objective is to describe in detail the content of the mindfulness-based intervention. The second is to document what helps and what hinders the practice of mindfulness for children living in a context of vulnerability. Special attention is given to the way the intervention is offered and the principles followed by the practitioners. Perceived effects of the intervention on the children were collected through an individual semi-structured interview with each child at the end of the program. Parents were also interviewed for their point of view on the effects of their children’s participation in the group. Anxiety was measured with the Beck youth inventory pre- and post-intervention and at a two-month follow-up. Qualitative analysis of the interviews with children showed that most of them mentioned that the program helped them become calmer, more confident, less scared, and more able to deal with difficult emotions. Almost all of them reported having used the material provided to them to practice at home, a result confirmed by the parents. They reported that their children had gained confidence and were better at verbalizing emotions. The children also grew calmer, even though all anxiety was not gone. They would have liked more material to practice at home. 
The quantitative instrument used to measure anxiety did not corroborate the qualitative interviews about anxiety. The discussion questions the use of this questionnaire for children who have significant cognitive limitations. It also underlines the importance of personalized contact with the children, along with other considerations, for enhancing the adherence of children and parents. The MBI seems to have benefited the children in different ways, which is corroborated by most parents. Since the sample was limited, we will need to continue documenting its effects with more children and parents. The major strength of this research is to have reported the subjective perspectives of children on their experience of mindfulness.
Keywords: anxiety, mindfulness, children, best practices
Procedia PDF Downloads 113
246 Automated, Short Cycle Production of Polymer Composite Applications with Special Regards to the Complexity and Recyclability of Composite Elements
Authors: Peter Pomlenyi, Orsolya Semperger, Gergely Hegedus
Abstract:
The purpose of the project is to develop a complex composite component with a visible class ‘A’ surface. It integrates several functions, including continuous fiber reinforcement, a foam core, injection-molded ribs, and metal inserts. We are therefore going to produce a recyclable structural composite part from thermoplastic polymer in serial production, with a short cycle time, for automotive applications. Our design of the process line is determined by the principles of Industry 4.0. Accordingly, our goal is to map in detail the properties of the final product, including the mechanical properties, in order to replace metal elements used in the automotive industry, with special regard to the effect of each manufacturing process step on the aforementioned properties. The project period is three years, from 1 December 2016 to 30 November 2019. There are four consortium members in the R&D project: evopro systems engineering Ltd., the Department of Polymer Engineering of the Budapest University of Technology and Economics, the Research Centre for Natural Sciences of the Hungarian Academy of Sciences, and eCon Engineering Ltd. One of the most important results is that we can obtain a short cycle time (down to 2-3 min) with the in-situ polymerization method, which is an innovation in the field of thermoplastic composite production. Owing to this method, our fully automated production line is able to manufacture complex thermoplastic composite parts and satisfies the short cycle time required by the automotive industry. In addition to the innovative technology, we are able to design and analyze complex composite parts with the finite element method and validate our results. 
We are continuously collecting all the information, knowledge, and experience needed to improve our technology and obtain even more accurate results with respect to the quality and complexity of the composite parts, the cycle time of the production, and the design and analysis methods for the composite parts.
Keywords: T-RTM technology, composite, automotive, class A surface
Procedia PDF Downloads 139
245 Simulation of Scaled Model of Tall Multistory Structure: Raft Foundation for Experimental and Numerical Dynamic Studies
Authors: Omar Qaftan
Abstract:
Earthquakes can cause tremendous loss of human life and can result in severe damage to several types of civil engineering structures, especially tall buildings. The response of a multistory structure subjected to earthquake loading is a complex task, and it needs to be studied by physical and numerical modelling. In many circumstances, scale models on a shaking table may be a more economical option than similar full-scale tests. A shaking table apparatus is a powerful tool that offers the possibility of understanding the actual behaviour of structural systems under earthquake loading. A set of scaling relations is required to predict the behaviour of the full-scale structure. Selecting the scale factors is the most important step in the simulation of the prototype in the scaled model. In this paper, the principles of the scale-modelling procedure are explained in detail, and the simulation of a scaled multi-storey concrete structure for dynamic studies is investigated. A procedure for a complete dynamic simulation analysis is investigated experimentally and numerically with a scale factor of 1/50. The frequency-domain response and lateral displacements for both the numerical and experimental scaled models are determined. The procedure accounts for the actual dynamic behaviour of both the full-size prototype structure and the scaled model, and it is adapted to determine the effects of the tall multi-storey structure on a raft foundation. Four generated accelerograms complying with EC8 were used as inputs for the time history motions. The output results of the experimental work, expressed in terms of displacements and accelerations, are compared with those obtained from a conventional fixed-base numerical model. Four time histories were applied in both the experimental and numerical models, and the experimental results show acceptable accuracy in comparison with the numerical model output. 
Therefore, this modelling methodology is valid and qualified for different shaking table experiments.
Keywords: structure, raft, soil, interaction
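The scaling relations mentioned above follow from dimensional analysis once a similitude law is chosen. The paper does not state which law was used, so as an illustration the sketch below assumes the common shaking-table case of equal accelerations in model and prototype (gravity cannot be scaled on an ordinary table), from which the time and frequency factors follow for a 1/50 length scale.

```python
import math

def similitude_factors(length_scale):
    """Model/prototype scale factors derived from a geometric length
    scale under the assumption of equal accelerations. Illustrative
    sketch; not the similitude set used by the authors."""
    lam = length_scale                      # model/prototype length ratio
    return {
        "length": lam,
        "acceleration": 1.0,                # imposed: same g in model and prototype
        "time": math.sqrt(lam),             # from a = L / T^2 with a-factor 1
        "velocity": math.sqrt(lam),         # v = L / T
        "frequency": 1.0 / math.sqrt(lam),  # f = 1 / T
        "displacement": lam,                # scales with length
    }
```

For a 1/50 model this gives a time factor of about 0.141 and a frequency factor of about 7.07, i.e. the scaled model must be shaken roughly seven times faster than the prototype motion.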
Procedia PDF Downloads 136
244 Design and Development of an Autonomous Beach Cleaning Vehicle
Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk
Abstract:
In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate, real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited to the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution. 
The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics
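The cross-entropy loss used to refine the classifier head of a detector like SSD MobileNet V2 can be written out explicitly for a single detection's class logits. In practice the training would use a deep-learning framework's built-in, numerically hardened implementation, so the pure-Python version below is only a from-scratch illustration of the quantity being minimized.

```python
import math

def softmax_cross_entropy(logits, true_class):
    """Cross-entropy (negative log-likelihood) of the true class under a
    softmax over raw class scores. Illustrative sketch of the loss the
    abstract names, not the framework code actually used for training."""
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]         # softmax probabilities
    return -math.log(probs[true_class])       # penalize low confidence in the truth
```

With uniform logits over three classes the loss is ln 3 ≈ 1.10; as the logit of the correct class grows, the loss falls toward zero, which is the gradient signal the Adam optimizer follows during training.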
Procedia PDF Downloads 27
243 Engineering Properties of Different Lithological Varieties of a Singapore Granite
Authors: Louis Ngai Yuen Wong, Varun Maruvanchery
Abstract:
The Bukit Timah Granite, which is a major rock formation in Singapore, encompasses different rock types such as granite, adamellite, and granodiorite with various hybrid rocks. The present study focuses on the Central Singapore Granite found in the Mandai area. Even within this small areal extent, lithological variations with respect to composition, texture, and grain size have been recognized in this igneous body. Over the years, research on the Bukit Timah Granite has focused on achieving a better understanding of its engineering properties in association with civil engineering projects. To the best of our knowledge, few studies have attempted to systematically investigate the influence of grain size, mineral composition, texture, etc. on the strength of Bukit Timah Granite rocks in a comprehensive manner. In typical local industry practice, the different lithological varieties are not differentiated but are all grouped under the Bukit Timah Granite during core logging and the subsequent determination of engineering properties. To address this major gap in local engineering geological practice, a preliminary study is conducted on the variations of uniaxial compressive strength (UCS) in seven distinctly different lithological varieties of the Bukit Timah Granite. Other physical properties, including Young’s modulus, P-wave velocity, and dry density determined from laboratory testing, are also discussed. The study is supplemented by a petrographic thin-section examination. In addition, the specimen failure mode is classified and further correlated with the lithological varieties by carefully observing the details of crack initiation, propagation, and coalescence in the specimens undergoing loading tests using a high-speed camera. 
The outcome of this research, which is the first of its type in Singapore, will have a direct implication on sampling and design practices in the field of civil engineering, particularly underground space development in Singapore.
Keywords: Bukit Timah Granite, lithological variety, thin section study, high speed video, failure mode
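The laboratory reduction behind a UCS value is straightforward: peak axial stress over the specimen cross-section, with a modulus taken from the stress-strain curve. The sketch below is a generic version of that reduction (here a secant modulus at 50% of peak stress), not the authors' processing code; the specimen dimensions and data in the usage note are illustrative.

```python
import math

def ucs_and_modulus(load_kn, displacement_mm, diameter_mm, length_mm):
    """Peak stress (UCS, MPa) and secant Young's modulus at 50% of peak
    (GPa) for a cylindrical specimen from load-displacement data.
    Generic illustrative reduction of a uniaxial compression test."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    stresses = [1000.0 * p / area_mm2 for p in load_kn]   # kN -> N over mm^2 = MPa
    strains = [d / length_mm for d in displacement_mm]    # axial strain, dimensionless
    ucs = max(stresses)
    half = ucs / 2.0
    # First data point at or above 50% of peak defines the secant modulus.
    i = next(k for k, s in enumerate(stresses) if s >= half)
    if strains[i] == 0.0:
        return ucs, float("nan")
    e_gpa = (stresses[i] / strains[i]) / 1000.0           # MPa -> GPa
    return ucs, e_gpa
```

For example, a 50 mm diameter, 100 mm long specimen failing at 200 kN with 0.2 mm axial shortening on a linear ramp gives a UCS of about 102 MPa and a secant modulus of about 51 GPa.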
Procedia PDF Downloads 322
242 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean
Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi
Abstract:
The history of tsunami-generating earthquakes along the Makran Subduction Zone provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran exhibits low seismicity. It is also one of the least studied areas in the northwest of the Indian Ocean with regard to tsunami studies. We present a review of studies dealing with the historical and ongoing palaeotsunami work, supported by the IGCP of UNESCO, in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which over seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, where a database of historical events exists. The historically well-documented event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event before 1945 off western Makran, but several potentially large tsunamigenic events occurred in the MSZ before 1945, in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view; at the same time, we would like to emphasize that the area needs more intensive research investigation. As mentioned above, a palaeotsunami (geological evidence) study is now being planned, and here we present its first-phase results. From a risk point of view, the study shows as a preliminary result that within 20 minutes the waves reach the Iranian, Pakistani, and Omani coastal zones as highly destructive tsunami waves capable of extensive inundation. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region. 
Although several papers have been published on modelling, seismology, and tsunami deposits in recent decades, the Makran remains a forgotten subduction zone, and more data, such as the main crustal structure, fault locations, and their related parameters, are required. Keywords: historical tsunami, Indian ocean, makran subduction zone, palaeotsunami
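The 20-minute arrival estimate above is consistent with the standard shallow-water approximation, in which long tsunami waves travel at c = sqrt(g·h). A minimal sketch, using an assumed representative depth for the Gulf of Oman (not a value taken from the study):

```python
import math

def shallow_water_speed(depth_m, g=9.81):
    """Long-wavelength tsunami speed c = sqrt(g * h) in m/s."""
    return math.sqrt(g * depth_m)

# Hypothetical representative basin depth, for illustration only.
depth_m = 3000.0
speed = shallow_water_speed(depth_m)     # roughly 170 m/s
reach_km = speed * 20 * 60 / 1000.0      # distance covered in 20 minutes
```

At a 3 km depth the wave covers on the order of 200 km in 20 minutes, which is why near-field coasts have so little warning time.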
Procedia PDF Downloads 130241 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various career guidance systems that work on the same logic, such as classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. 
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes into account the user's gaps in knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry. Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
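As a concrete illustration of the KNN classification approach named above, a k-nearest-neighbours career predictor can be sketched in a few lines; the skill features, scores, and career labels here are entirely hypothetical and stand in for real applicant data:

```python
import math
from collections import Counter

# Hypothetical training data: (technical, analytical, communication)
# skill scores on a 0-10 scale, each labelled with a career track.
train = [
    ((9, 7, 4), "software engineer"),
    ((8, 9, 5), "data scientist"),
    ((3, 4, 9), "hr specialist"),
    ((4, 5, 8), "marketing"),
    ((9, 8, 4), "software engineer"),
    ((7, 9, 6), "data scientist"),
]

def knn_predict(features, k=3):
    """Classify a candidate by majority vote among the k nearest neighbours."""
    dists = sorted((math.dist(features, x), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

prediction = knn_predict((8, 8, 5))
```

The same skill-vector framing carries over directly to the decision-tree and clustering methods the abstract lists.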
Procedia PDF Downloads 70239 Neural Networks Models for Measuring Hotel Users Satisfaction
Authors: Asma Ameur, Dhafer Malouche
Abstract:
Nowadays, user comments on the Internet have an important impact on hotel bookings. This confirms that the e-reputation issue can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels. For this reason, we have a unique opportunity in the opinion mining field to analyze the comments. In fact, this field provides the possibility of extracting information related to the polarity of user reviews. This sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier to better manage his marketing strategy. The score we then obtain is translated into the image of hotels to differentiate between them. Therefore, this research highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, the sentiment analysis can be carried out with several machine learning techniques. In fact, this study treats the extracted textual data using the artificial neural networks (ANNs) approach. In this context, we adopt the aforementioned technique to extract information from the comments available on the 'Trip Advisor' website. This paper details the description and the modeling of the ANNs approach for the scoring of online hotel reviews. In summary, the validation of this method provides a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users' reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis. The obtained results also show that the proposed approach supports dimensionality reduction for textual data clustering. Thus, this study provides researchers with a useful exploration of this technique. 
Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing the ANNs with other techniques. Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring
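The core idea of mapping a review to a polarity score can be sketched with a single sigmoid unit over bag-of-words features; the lexicon weights below are invented for illustration and do not come from the paper's trained network:

```python
import math

# Hypothetical polarity lexicon standing in for learned input weights.
weights = {"clean": 1.2, "friendly": 1.0, "great": 1.5,
           "dirty": -1.8, "rude": -1.5, "noisy": -0.9}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def review_score(text):
    """Map a review to a satisfaction score in (0, 1) via one sigmoid
    unit over bag-of-words features."""
    s = sum(weights.get(tok, 0.0) for tok in text.lower().split())
    return sigmoid(s)

positive = review_score("Great location and friendly staff")
negative = review_score("Dirty room and rude reception")
```

A trained ANN replaces the hand-set lexicon with weights fitted to labelled reviews, but the forward computation has this same shape.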
Procedia PDF Downloads 136238 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET
Authors: Tyler T. Procko, Steve Collins
Abstract:
New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access now comparatively involve unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core combined with a framework of modern SQL Server coding conventions have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: Simplicity, Speed and Security. Simplicity is engendered by cutting out the “middleman” steps, effectively making API data access a whitebox, whereas traditional methods are blackbox. Speed is improved because of the fewer translational steps taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. 
Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages. Keywords: API data access, database, JSON, .NET core, SQL server
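CODA itself is .NET-specific, but the "no mapping layer" idea it rests on can be illustrated in any language: serialize a result set straight to JSON from cursor metadata, with no ORM entity classes in between. A sketch using SQLite as a stand-in for SQL Server (table and data are hypothetical, and this is not CODA's actual implementation):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Ada",), ("Alan",)])

def fetch_json(conn, query, params=()):
    """Serialize a result set directly to JSON using cursor metadata,
    skipping any intermediate mapped objects."""
    cur = conn.execute(query, params)
    cols = [d[0] for d in cur.description]
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

payload = fetch_json(conn, "SELECT id, name FROM users ORDER BY id")
```

In the SQL Server case the engine can even emit the JSON itself (its `FOR JSON` clause), removing this last serialization step as well.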
Procedia PDF Downloads 66237 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects
Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid
Abstract:
The Fano resonance effect on plasmonic nanoparticle materials results in such materials possessing a number of unique optical properties, and potential applicability for sensing, nonlinear devices and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Incident light on subradiant modes will initiate excitation that results in superradiant modes, and these superradiant modes possess zero or finite dipole moments alongside comparably negligible coupling with light. This research work details the derivation of an electrodynamics coupling model for the interaction of dipolar transitions and radiation via plasmonic nanoclusters such as quadrimers, pentamers and heptamers. The directivity calculation is analyzed in order to quantify the redirection of emission. The geometry of a configured array of nanostructures strongly influenced the transmission and reflection properties, which subsequently resulted in the directivity of each antenna being related to the nanosphere size and gap distances between the nanospheres in each model's structure. A well-separated configuration of nanospheres resulted in the structure behaving similarly to monomers, with spectral peaks of a broad superradiant mode centered within the vicinity of the 560 nm wavelength. Reducing the distance between ring nanospheres in pentamers and heptamers to 20-60 nm caused the coupling factor and charge distributions to increase and invoke a subradiant mode centered within the vicinity of 690 nm. Increasing the outer ring's nanosphere distance from the central nanospheres caused the coupling factor to decrease, the coupling factor being inversely proportional to the cube of the distance between nanospheres. This phenomenon led to a dramatic decrease of the superradiant mode at a 200 nm distance between the central nanosphere and outer rings. 
Effects from the superradiant mode vanished beyond a 240 nm distance between central and outer ring nanospheres. Keywords: fano resonance, optical antenna, plasmonic, nano-clusters
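The inverse-cube dependence stated above can be checked with a one-line model; the amplitude is arbitrary, since only the scaling with separation matters:

```python
def coupling(d_nm, A=1.0):
    """Near-field dipole-dipole coupling factor, proportional to 1/d^3
    (A is an arbitrary amplitude for illustration)."""
    return A / d_nm ** 3

# Tripling the gap (60 nm -> 180 nm) weakens the coupling 27-fold,
# which is why the superradiant mode collapses so quickly with distance.
ratio = coupling(60.0) / coupling(180.0)
```

This steep falloff is consistent with the abstract's observation that coupling effects become negligible beyond roughly 200-240 nm.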
Procedia PDF Downloads 429236 Imposing Personal Liability on Shareholder's/Partner's in a Corporate Entity; Implementation of UK’s Personal Liability Institutions in Georgian Corporate Law: Content and Outcomes
Authors: Gvantsa Magradze
Abstract:
The paper examines the grounds for imposing personal liability on a shareholder/partner, mainly through a comparative analysis of Georgian and UK law. The general emphasis is on how the grounds for personal responsibility are applied in practice, presented through an analysis of court decisions. On this basis, the reader will be able to see the difference between the dogmatic and practical grounds for imposing personal liability. The first chapter presents general information about the issue and the notion of personal liability. The second chapter is devoted to explaining the concept of 'the head of the corporation', to make clear who the subject of responsibility is in the article and not to leave beyond attention individuals who do not hold the position of director but participate in governing activities and therefore owe fiduciary duties. After a short comparative analysis of personal responsibility, the reality of Georgian corporate law is discussed. Here, determining personal liability is a problematic issue, so a separate chapter is devoted to it, explaining the grounds for imposing personal liability in detail. The paper discusses the content and purpose of personal liability institutions under UK corporate law and the attempt to implement them, especially the 'alter ego' doctrine, in Georgian corporate law, along with the outcomes of that experiment. For research purposes, national case law on the imposition of personal liability is examined, as well as the UK's experience in that regard. The comparative analysis makes clear where the gaps in the Georgian statute are and how to fill them. The article's major finding, as stated, is that Georgian corporate law does not provide any legally consolidated grounds for imposing personal liability, which in fact leads to unfaithful, unlawful actions on partners'/shareholders' behalf. 
In order to make the business market fair, advancement of the national statute is inevitable, and for that, experience shared by developed countries is an irreplaceable gift. Overall, the article analyses how the discussed amendments might influence case law and, had such amendments been made years ago, how the judgments might have looked (before and after the amendments). Keywords: alter ego doctrine, case law, corporate law, good faith, personal liability
Procedia PDF Downloads 149235 An Investigation of the Operation and Performance of London Cycle Hire Scheme
Authors: Amer Ali, Jessica Cecchinelli, Antonis Charalambous
Abstract:
Cycling is one of the most environmentally friendly, economic and healthy modes of transport, but it needs more efficient cycle infrastructure and more effective safety measures. This paper presents an investigation into the performance and operation of the London Cycle Hire Scheme, which started to operate in July 2010 with 5,000 cycles and 315 docking stations and currently has more than 10,000 cycles and over 700 docking stations across London, available 24/7, 365 days a year. The study, which was conducted during the second half of 2014, consists of two parts: a longitudinal review of the hire scheme between its introduction in 2010 and November 2014, and a field survey in November 2014 in the form of face-to-face interviews with users of the cycle scheme to ascertain the existing limitations and difficulties experienced by those users and how the scheme could be improved in terms of capability and safety. The study also includes a correlation between the usage of the cycle scheme and the corresponding weather conditions. The main findings are that on average the number of hires had increased from just over two million in 2010 to just under ten million in 2014. The field survey showed that 80% of users are satisfied with the performance of the scheme, whilst 50% of users raised concerns about the safety of the available cycle routes and infrastructure. The study also revealed that a high percentage of cycle trips were relatively short (less than 30 minutes). Although weather conditions had some effect on cycling, the cost of using the cycle scheme and major events in London had more effect on the number of cycle hires. The key conclusions are that despite the safety concerns and the lack of infrastructure for continuous routes, there was an encouraging number of people who opted for cycling as a clean, affordable, and healthy mode of transport. 
There is a need to expand the scheme by providing more cycles and docking stations and to support that with more well-designed and maintained cycle routes. More details about the development of the London Cycle Hire Scheme during the last five years, its performance, and the key issues revealed by the surveyed users will be reported in the full version of the paper. Keywords: cycling mode of transport, london cycle hire scheme, safety, environmental and health benefits, user satisfaction
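The usage-versus-weather correlation mentioned above is typically quantified with a Pearson coefficient; a self-contained sketch with invented daily figures (not the study's data):

```python
import math

# Hypothetical daily figures: mean temperature (C) and hires (thousands).
temps = [4, 7, 11, 15, 18, 22, 25]
hires = [18, 20, 24, 27, 30, 33, 34]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(temps, hires)
```

A value of r near +1 would indicate hires rising with temperature; the study's finding that cost and major events mattered more suggests the real-world coefficient is weaker than this toy series.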
Procedia PDF Downloads 387234 Implementation of Quality Function Development to Incorporate Customer’s Value in the Conceptual Design Stage of a Construction Projects
Authors: Ayedh Alqahtani
Abstract:
Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value that they can give to their customer. These firms understand the value of their client in different ways. Value can be defined as the size of the building project in relation to the cost, or the design quality of the materials utilized in finish work, or any other features of building rooms such as the bathroom. Value can also be understood as something suitable for the money the client is investing in the new property. A quality tool is required to support companies in achieving a solution for the building project and in understanding and managing the customer's needs. The Quality Function Development (QFD) method is able to play this role, since the main difference between QFD and other conventional quality management tools is that QFD is a valuable and very flexible tool for design that takes into account the voice of the customer (VOC). Currently, organizations and agencies are seeking suitable models able to deal better with uncertainty and that are flexible and easy to use. The primary aim of this research project is to incorporate the customer's requirements into the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate the design requirements to meet the customer's needs. To develop QFD, this research focuses upon the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD and the subsequent analysis of the techniques used to measure them. First of all, this research will review the literature to determine the current practice of QFD in construction projects. Then, the researcher will review the literature to define the current customers of residential projects and gather information on customers' requirements for the design of residential buildings. 
After that, qualitative survey research will be conducted to rank customers' needs and obtain the views of stakeholder practitioners about how these needs can affect their satisfaction. Moreover, a qualitative focus group with the members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions. Keywords: quality function development, construction projects, Saudi Arabia, quality tools
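The weighting step QFD builds on can be sketched as a miniature house-of-quality calculation; all requirement names, weights, and relationship strengths below are hypothetical and only illustrate the mechanics:

```python
# Customer requirement weights (e.g. from a ranking survey, 1-5 scale).
customer_weights = {"spacious rooms": 5, "quality finishes": 4, "low cost": 3}

# Requirement-vs-technical-characteristic relationship matrix
# (9 = strong, 3 = moderate, 1 = weak, 0 = none).
relationships = {
    "floor area":     {"spacious rooms": 9, "quality finishes": 0, "low cost": 3},
    "material grade": {"spacious rooms": 0, "quality finishes": 9, "low cost": 3},
    "modular layout": {"spacious rooms": 3, "quality finishes": 1, "low cost": 9},
}

def technical_importance(weights, rel):
    """Score each technical characteristic as a weight-times-relationship sum,
    the standard house-of-quality prioritisation."""
    return {tech: sum(weights[req] * strength for req, strength in row.items())
            for tech, row in rel.items()}

scores = technical_importance(customer_weights, relationships)
top = max(scores, key=scores.get)
```

The design team then concentrates effort on the highest-scoring technical characteristics, which is the "degree of significance" the abstract refers to.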
Procedia PDF Downloads 124233 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response as an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser-scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser-scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solvent was coated onto a LightScribe DVD. The laser-scribing technique was applied to reduce GO layers to generate rGO. The micro-details of the morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a developed graphene electrode fabricated in a vacuum state using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. 
The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost production. Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 120232 Acquisition and Preservation of Traditional Medicinal Knowledge in Rural Areas of KwaZulu Natal, South Africa
Authors: N. Khanyile, P. Dlamini, M. Masenya
Abstract:
Background: Most of the population in Africa is still dependent on indigenous medicinal knowledge for treating and managing ailments. Indigenous traditional knowledge owners/practitioners are consulted by communities, but how they acquire their knowledge is not well understood. The question of how knowledge is acquired and preserved remains one of the biggest challenges in traditional healing and treatment with herbal medicines. It is regrettable that, despite the importance and recognition of indigenous medicinal knowledge globally, the details of its acquisition, storage, transmission, and preservation techniques are not known. Hence this study intends to unveil the processes of acquisition and transmission, and the preservation techniques, of indigenous medicinal knowledge by its owners. Objectives: This study aims to assess the process of acquisition and preservation of traditional medicinal knowledge by traditional medicinal knowledge owners/practitioners in uMhlathuze Municipality, in the province of KwaZulu-Natal, South Africa. The study was guided by four research objectives: to identify the types of traditional medicinal knowledge owners who possess this knowledge, to establish the approach used by indigenous medicinal knowledge owners/healers for acquiring medicinal knowledge, to identify the process of preservation of medicinal knowledge by indigenous medicinal knowledge owners/healers, and to determine the challenges encountered in transferring the knowledge. Method: The study adopted a qualitative research approach, and a snowball sampling technique was used to identify the study population. Data was collected through semi-structured interviews with indigenous medicinal knowledge owners. Results: The findings suggested that uMhlathuze Municipality has different types of indigenous medicinal knowledge owners who possess valuable knowledge. These are diviners (Izangoma), faith healers (Abathandazi), and herbalists (Izinyanga). 
The study demonstrated that indigenous medicinal knowledge is acquired in many different ways, including visions, dreams, and rigorous training. The study also revealed that the acquired knowledge is preserved or shared with specially chosen children and trainees. Conclusion: The study concluded that this knowledge is acquired through rigorous training, which requires the learner to be attentive and eager to learn. It was recommended that a study of this nature be conducted at a broader level to enable more informed conclusions and recommendations. Keywords: preserving, indigenous medicinal knowledge, indigenous knowledge, indigenous medicinal knowledge owners/practitioners, acquiring
Procedia PDF Downloads 87231 Latent Heat Storage Using Phase Change Materials
Authors: Debashree Ghosh, Preethi Sridhar, Shloka Atul Dhavle
Abstract:
The judicious and economic consumption of energy for sustainable growth and development is nowadays a matter of primary importance; phase change materials (PCMs) provide an ingenious option for storing energy in the form of latent heat. An energy storage mechanism incorporating a phase change material increases the efficiency of the process by minimizing the difference between supply and demand; PCM heat exchangers are used to store heat or non-conventional energy within the PCM as the heat of fusion. The experimental study evaluates the effect of thermo-physical properties, variation in inlet temperature, and flow rate on the charging period of a coiled heat exchanger. Secondly, a numerical study is performed on a PCM double-pipe heat exchanger packed with two different PCMs, namely RT50 and a fatty acid, in the annular region. In this work, the simulation of the charging of paraffin wax (RT50) using water as the high-temperature fluid (HTF) is performed. The commercial software Ansys Fluent 15 is used for the simulation, and hence the charging of the PCM is studied. In the enthalpy-porosity model, a single momentum equation describes the motion of both solid and liquid phases. The details of the progress of phase change with time are presented through contours of melt fraction and temperature. The velocity contour is shown to describe the motion of the liquid phase. The experimental study revealed that paraffin wax melts with almost the same temperature variation at the two intermediate positions. The fatty acid, on the other hand, melts faster owing to greater thermal conductivity and a lower melting temperature. It was also observed that an increase in flow rate leads to a reduction in the charging period. The numerical study also supports some of the observations found in the experimental study, like the significant dependence of the driving force on the process of melting. 
The numerical study also clarifies the melting pattern of the PCM, which cannot be observed in the experimental study. Keywords: latent heat storage, charging period, discharging period, coiled heat exchanger
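At its simplest, the charging period the abstract measures follows from a lumped latent-heat balance: constant heat input into PCM at its melting point, with the melt fraction growing as absorbed energy over latent heat. The property values below are rough paraffin-like assumptions, not data from the study:

```python
# Assumed, illustrative values (not measured): PCM mass, heat of fusion,
# and a constant heat input from the HTF.
mass_kg = 2.0
latent_heat_j_per_kg = 168_000.0
power_w = 150.0

def melt_fraction(t_s):
    """Fraction of PCM melted after t_s seconds of charging, capped at 1
    (sensible heating before/after the phase change is neglected)."""
    return min(1.0, power_w * t_s / (mass_kg * latent_heat_j_per_kg))

# Time to melt the full charge: m * L / Q.
charging_period_s = mass_kg * latent_heat_j_per_kg / power_w
```

The inverse dependence on `power_w` is the lumped-model counterpart of the experimental finding that a higher HTF flow rate (more heat delivered) shortens the charging period.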
Procedia PDF Downloads 116230 The Identification of Instructional Approach for Enhancing Competency of Autism, Attention Deficit Hyperactivity Disorder and Learning Disability Groups
Authors: P. Srisuruk, P. Narot
Abstract:
The purposes of this research were 1) to develop a curriculum and instructional approach that are suitable for children with autism, attention deficit hyperactivity disorder, and learning disability, and to arrange the instructional approach so that it can be integrated into an inclusive classroom, and 2) to increase the competency of the children in these groups. The research process was to a) study related documents, b) arrange workshops to clarify fundamental issues in developing the core curriculum among the researchers and experts in curriculum development, c) arrange workshops to develop the curriculum and submit it to the experts for criticism and editing, d) implement the instructional approach to examine its effectiveness, e) select the schools to participate in the project and arrange training programs for teachers in the selected schools, and f) implement the instructional approach in the selected schools in different regions. The research results were 1) a core curriculum to enhance the competency of children with autism, attention deficit hyperactivity disorder, and learning disability, to be used as a guideline for teachers and these groups of children in order to arrange classrooms where students with special needs study with normal students, and 2) teaching and learning methods arranged for students with autism, attention deficit hyperactivity disorder, and learning disability to study with normal students, which can be used as a framework for writing plans to help students with parallel problems by developing teaching materials as part of the instructional approach. However, the details of how to help the students with each skill or content area differ according to the demands of development as well as the problems of individual students or groups of students. Furthermore, it was found that most of the target teachers could implement the instructional approach based on the guideline model developed by the research team. 
Schools in different regions showed little difference in their implementation. The strength of the developed instructional model is that teachers can construct a parallel lesson plan, so teachers did not feel that they had to do extra work. It was also shown that students in regular classrooms enjoyed studying with the developed instructional model as well. Keywords: instructional approach, autism, attention deficit hyperactivity disorder, learning disability
Procedia PDF Downloads 332229 The Methods of Customer Satisfaction Measurement and Its Statistical Analysis towards Sales and Logistic Activities in Food Sector
Authors: Seher Arslankaya, Bahar Uludağ
Abstract:
Meeting the needs and demands of customers and pleasing the customers are important requirements for companies in food sectors where the growth of competition is significantly unpredictable. Customer satisfaction is also one of the key concepts which is mainly driven by wide range of customer preference and expectation upon products and services introduced and delivered to them. In order to meet the customer demands, the companies that engage in food sectors are expected to have a well-managed set of Total Quality Management (TQM), which sets out to improve quality of products and services; to reduce costs and to increase customer satisfaction by restructuring traditional management practices. It aims to increase customer satisfaction by meeting (their) customer expectations and requirements. The achievement would be determined with the help of customer satisfaction surveys, which is done to obtain immediate feedback and to provide quick responses. In addition, the surveys would also assist the making of strategic planning which helps to anticipate customer future needs and expectations. Meanwhile, periodic measurement of customer satisfaction would be a must because with the better understanding of customers perceptions from the surveys (done by questioners), the companies would have a clear idea to identify their own strengths and weaknesses that help the companies keep their loyal customers; to stand in comparison toward their competitors and map out their future progress and improvement. In this study, we propose a survey based on customer satisfaction measurement method and its statistical analysis for sales and logistic activities of food firms. Customer satisfaction would be discussed in details. Furthermore, after analysing the data derived from the questionnaire that applied to customers by using the SPSS software, various results obtained from the application would be presented. 
An ANOVA test is also applied to examine whether statistically significant differences exist between customer demographic groups and their perceptions. A further aim of this study is to identify requirements that help remove the factors that decrease customer satisfaction and to build customer loyalty in the food industry. For this purpose, customer complaints are collected. Comments and suggestions are then made on the basis of the survey results, which should be useful for strategic planning in the food industry. Keywords: customer satisfaction measurement and analysis, food industry, SPSS, TQM
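The ANOVA step described in the abstract above can be sketched in plain Python (the study itself runs the analysis in SPSS; the group names and satisfaction scores below are illustrative assumptions, not data from the study):

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    # Between-group sum of squares (each group's mean vs. the grand mean)
    ssb = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each score vs. its own group's mean)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical 1-5 satisfaction scores for three demographic groups
young = [4, 5, 4, 3, 4, 5, 4]
middle = [3, 3, 2, 4, 3, 2, 3]
senior = [4, 4, 5, 4, 3, 4, 5]
f_stat = one_way_anova_f(young, middle, senior)
```

A large F statistic (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that mean satisfaction differs meaningfully across the demographic groups.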
Procedia PDF Downloads 249228 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to analyze multiple quantitative and qualitative features simultaneously makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon. Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
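The "random weights" idea the abstract describes (a hidden layer whose weights stay random and fixed, with sigmoid activation, so that only the output layer is fitted) can be illustrated with a minimal Python sketch. The class name, layer size, and synthetic data below are assumptions for illustration; the authors' actual model is implemented in MATLAB:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RandomWeightNetwork:
    """Single-hidden-layer network with fixed random hidden weights and sigmoid
    activation; only the linear output layer is fitted, by least squares.
    A minimal sketch of the random-weights approach, not the authors' model."""

    def __init__(self, n_features, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_features, n_hidden))  # random, never trained
        self.b = rng.normal(size=n_hidden)
        self.beta = None  # output weights, fitted from data

    def _hidden(self, X):
        return sigmoid(X @ self.w + self.b)

    def fit(self, X, y):
        # Least-squares fit of the output layer on the random hidden features
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic stand-in for mixed radon predictors (in practice, qualitative
# features would first be one-hot encoded into numeric columns).
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2
model = RandomWeightNetwork(n_features=3).fit(X, y)
```

Because the hidden weights are never trained, fitting reduces to a single linear least-squares solve, which is what makes this family of models fast to train on mixed-feature datasets.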
Procedia PDF Downloads 96227 Characterization of Transmembrane Proteins with Five Alpha-Helical Regions
Authors: Misty Attwood, Helgi Schioth
Abstract:
Transmembrane proteins are important components in many essential cell processes such as signal transduction, cell-cell signalling, transport of solutes, structural adhesion activities, and protein trafficking. Due to their involvement in these diverse critical activities, transmembrane proteins are implicated in different disease pathways and are therefore the focus of intense interest regarding their functional activities, their pathogenesis in disease, and their potential as pharmaceutical targets. Further, as protein structure and function are correlated, investigating a group of proteins with the same tertiary structure, i.e., the same number of transmembrane regions, may offer insight into their functional roles and potential as therapeutic targets. In this in silico bioinformatics analysis, we identify and comprehensively characterize the previously unstudied group of proteins with five transmembrane-spanning regions (5TM). We classify nearly 60 5TM proteins, of which 31 are members of ten families containing two or more members, all predicted to contain the 5TM architecture. Furthermore, nine singlet proteins that contain the 5TM architecture without paralogues detected in humans were also identified, indicating the evolution of single unique proteins with the 5TM structure. Interestingly, more than half of these proteins function in localization activities through movement or tethering of cell components, and more than one-third are involved in transport activities, particularly in the mitochondria. Surprisingly, no receptor activity was identified within this group, in sharp contrast to other TM families.
Three major 5TM families were identified: the Tweety family, which comprises pore-forming subunits of the swelling-dependent volume-regulated anion channel in astrocytes; the sideroflexin family, which acts as mitochondrial amino acid transporters; and the Yip1 domain family, engaged in vesicle budding and intra-Golgi transport. About 30% of the proteins have enhanced expression in the brain, liver, or testis. Importantly, 60% of these proteins are identified as cancer prognostic markers, associated with clinical outcomes of various tumour types, indicating that further investigation into the function and expression of these proteins is warranted. This study provides the first comprehensive analysis of proteins with 5TM regions and details their unique characteristics and potential applications in pharmaceutical development. Keywords: 5TM, cancer prognostic marker, drug targets, transmembrane protein
Procedia PDF Downloads 109226 Presence and Absence: The Use of Photographs in Paris, Texas
Authors: Yi-Ting Wang, Wen-Shu Lai
Abstract:
The subject of this paper is the photography in the 1983 film Paris, Texas, directed by Wim Wenders. Wenders is well known as a film director as well as a photographer, and photography appears as an element in many of his films. Some of these photographs serve as details within the films, while others play important roles relevant to the story. This paper considers photographs in film as a specific type of text, one that is the product of both still photography and the film itself. In Paris, Texas, three sets of important photographs appear whose symbolic meanings are as dialectical as their text types. The relationship between these photos and the storyline is both dependent and isolated. The film's images fly by and flow into one another, while the photos within the film serve a unique narrative function: by stopping the continuous flow of images, they provide the viewer a space for imagination and contemplation. They are more than artistic forms; they also contain multiple meanings. The photographs in Paris, Texas play the role of both presence and absence according to their shifting meanings. There are references to their presence: photographs exist between film time and narrative time, so in terms of the interaction between the characters, photographs are a common symbol of the beginning and end of the characters' journeys. In terms of the audience, the film's photographs form a link in the viewing frame structure, through which the director's creative motivation can be explored. Photographs also point to the absence of certain objects: the scenes in the photos represent an imaginary map of emotion. The town of Paris, Texas is thereby isolated from the physical presence of the photograph, and is far more abstract than the reality in the film.
This paper embraces the ambiguous nature of photography and demonstrates its presence and absence in film with regard to the meaning of text. It is worth noting, however, that the interpretation of the film's photographs is more provisional than that of any other type of photographic text: the characteristics of the text cause the interpretation to change along with variations in the interpretive process, making its meaning a dynamic one. The presence or absence of the photographs in the context of Paris, Texas also demonstrates the presence and absence of the creator, of time, of the truth, and of the imagination. The film becomes more complete through the revelation of the photographs, while the intertextual connection between the two forms simultaneously opens multiple possibilities for interpreting the photographs in the film. Keywords: film, Paris, Texas, photography, Wim Wenders
Procedia PDF Downloads 318