Search results for: Multi-objective sequencing problem
299 Primary Level Teachers’ Response to Gender Representation in Textbook Contents
Authors: Pragya Paneru
Abstract:
This paper explores the views of 10 primary-level teachers on gender representation in primary-level textbooks. Data were collected from teachers who taught in private schools in the Kailali and Kathmandu districts. The research uses a semi-structured interview method to obtain information regarding teachers' attitudes toward gender representation in textbook contents. The interview data were analysed using critical qualitative analysis. The findings revealed that most of the teachers were unaware of gender issues and regarded them as insignificant to discuss in primary-level classes. Most of them responded to the questions personally and claimed that there were no gender issues in their classrooms. Some of the teachers connected gender issues with contexts other than textbook representation, such as discrimination in the distribution of salary between male and female teachers, school practices of awarding girls rather than boys as the most disciplined students, following a girls-first rule in assembly marching, encouraging only girls in stage shows, and involving students in gender-specific activities such as decoration work for girls and physical tasks for boys. The interviews also revealed teachers' covert gendered attitudes in their remarks. Nevertheless, most of the teachers accepted that gender-biased contents have an impact on learners and that this problem can be addressed with more gender-centred research in the education field, discussions, and training to increase awareness of gender issues. Agreeing with the teachers' suggestion, this paper recommends proper training and awareness regarding how to confront gender issues in textbooks.
Keywords: Content analysis, gender equality, school education, critical awareness.
298 The Study of the Correlation of Proactive Coping and Retirement Planning: An Example of Senior Civil Servants in Taiwan
Authors: Ya-Hui Lee, Chien-Hung Hsieh, Ching-Yi Lu
Abstract:
Demographic aging is a major problem facing Taiwanese society, and adaptation to retirement life is the most pressing related issue. In recent years, studies have suggested that successful aging and retirement planning require a view toward the future. In Taiwan, civil servants receive better pensions and retirement benefits than workers in other industries, so their retirement preparation is considerably more substantial than that of other senior groups in Taiwan. The purpose of this study is to understand the correlation between proactive coping and retirement planning among senior civil servants in Taiwan. Data were collected through questionnaire surveys, with 342 valid questionnaires returned. The results of this study are: 1. The background variables of the participants, including age, perceived economic status, and retirement status, are all significantly related to their proactive coping and retirement planning. 2. Regarding age, participants aged 55 and above show better proactive coping and retirement planning than those aged 45 and below. 3. Regarding perceived economic status, participants who rate their economic status as "very good" show better proactive coping ability and retirement readiness than those who rate it "bad" or "very bad". 4. Retirees show better proactive coping and retirement planning than those who are still working. 5. Monthly income is significant for retirement planning only: participants' retirement planning is better the higher their income, and is better for those with incomes of €1453~€1937 than for those with incomes below €968. 6. There are positive correlations between proactive coping and retirement planning. 7. Proactive coping can predict retirement planning. The results of this study will be provided to the Taiwan government as a reference for retirement planning education policies.
Keywords: Civil servants, proactive coping, retirement planning.
297 Application Reliability Method for Concrete Dams
Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar
Abstract:
Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering, in our case level 2 methods based on the study of the limit state. The probability of failure is thus estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was also used, which performs a full analysis of the problem and involves integrating the probability density function of the random variables, extended to the safety domain, using the Monte Carlo simulation method. Taking into account the change in stresses under the normal, exceptional and extreme load combinations acting on the dam, the calculations provided acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the failure probability, especially when uplift increases under a hypothetical failure of the drainage system.
Keywords: Dam, failure, limit-state, Monte Carlo simulation, reliability, probability, simulation, sliding, Taylor.
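As an illustration of the level 3 approach mentioned above, the following minimal sketch estimates a failure probability by Monte Carlo sampling of a simple limit state g = R - S; the normal distributions and their parameters are placeholders for illustration only, not the dam model or load combinations used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=0)
n = 1_000_000  # number of Monte Carlo trials

# Hypothetical limit state g = R - S: failure occurs when g < 0.
# The distributions below are illustrative, not the dam data from the paper.
R = rng.normal(5.0, 0.8, n)  # resistance (e.g., shear capacity)
S = rng.normal(3.0, 0.9, n)  # load effect (e.g., driving force)

pf = np.mean(R - S < 0.0)    # estimated probability of failure
beta = -norm.ppf(pf)         # corresponding reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```

The same sampling loop can be repeated for each load combination; FORM/SORM replace the sampling step with an analytical approximation around the design point.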
296 Convection through Light Weight Timber Constructions with Mineral Wool
Authors: J. Schmidt, O. Kornadt
Abstract:
The major part of a lightweight timber construction consists of insulation. Mineral wool is the most commonly used insulation due to its cost efficiency and easy handling. The fiber orientation and porosity of this insulation material enable air flow-through; its air flow resistance is low. If leakage occurs in the insulated bay section, the convective flow may cause energy losses and infiltration of the exterior wall with moisture and particles. In particular, the infiltrated moisture may lead to thermal bridges and to the growth of health-endangering mould and mildew. In order to prevent this problem, different numerical calculation models have been developed, but all models developed so far leave room for improvement. Implementing the flow-through properties of mineral wool insulation may help to improve the existing models. Assuming that the real pressure difference between the interior and exterior surface is larger than the pressure difference prescribed in the standard test procedure for mineral wool, ISO 9053 / EN 29053, measurements were performed using the measurement setup for research on convective moisture transfer (MSRCMT). These measurements show that structural inhomogeneities of mineral wool affect the permeability only at higher pressure differences, as applied in the MSRCMT. Additional microscopic investigations show that the location of a leak within the construction has a crucial influence on the air flow-through and the infiltration rate. The results clearly indicate that the empirical values for the acoustic resistance of mineral wool should not be used for the calculation of convective transfer mechanisms.
Keywords: Convection, convective transfer, infiltration, mineral wool, permeability, resistance, leakage.
295 The Use of Knowledge Management Systems and ICT Service Desk Management to Minimize the Digital Divide Experienced in the Museum Sector
Authors: Ruel A. Welch
Abstract:
Since the introduction of ServiceNow, the UK Science Museum Group's (SMG) ICT service desk portal, there has not been an analysis of the tools available to SMG staff for just-in-time knowledge acquisition (knowledge management systems) and for reporting ICT incidents, with a focus on an aspect of professional identity, namely gender. It is therefore important for SMG to investigate the apparent disparities so that solutions can be derived to minimize this digital divide, if one exists. This study is conducted in the milieu of the UK museums, galleries, arts, academic, charitable, and cultural heritage sectors. It is acknowledged at SMG that there are challenges in keeping up with an ever-changing digital landscape. This entails the rapid upskilling of staff and the development of an infrastructure that supports just-in-time technological knowledge acquisition and the reporting of technology-related issues. The problem was addressed by analysing ServiceNow ICT incident reports and knowledge-article reports over a six-month period from February to July. The study found a statistically significant relationship between gender and reporting an ICT incident, and a significant relationship between gender and the priority level of ICT incidents. Interestingly, there is no statistically significant relationship between gender and reading knowledge articles, nor between gender and reporting an ICT incident related to the knowledge article read by staff. The knowledge acquired from this study is useful to service desk management practice, as it will help to inform the creation of future knowledge articles and ICT incident reporting processes.
Keywords: digital divide, ICT service desk practice, knowledge management systems, workplace learning
294 PM10 Chemical Characteristics in a Background Site at the Universidad Libre Bogotá
Authors: Laura X. Martinez, Andrés F. Rodríguez, Ruth A. Catacoli
Abstract:
A persistent feature of air pollution in the area is that PM10 concentrations maintain a fairly constant trend, except in some places where they frequently surpass the permissible limits established by Colombian legislation. The community that surrounds the Universidad Libre Bogotá includes a considerable number of students and workers, all of whom are possibly being exposed to PM10 for long periods of time while on campus. Thus, the chemical characterization of the PM10 found in the ambient air at the Universidad Libre Bogotá was identified as a research problem. A Hi-Vol sampler and EPA Test Method 5 were used to determine whether the air quality is adequate for the human respiratory system, with quartz fiber filters used during sampling. Samples were taken three days a week during a dry period throughout November and December 2015. The gravimetric analysis method was used to determine PM10 concentrations, and the chemical characterization includes non-conventional carcinogenic pollutants. Atomic absorption spectrophotometry (AAS) was used for the determination of metals, and VOCs were analysed using the Fourier transform infrared spectroscopy (FTIR) method. PM10 concentrations ranging from 13 µg/m³ to 66 µg/m³ were obtained, below the national limit values. This evidence indicates that PM10 concentrations over a 24-hour exposure period are lower than the values established by Colombian law, Resolution 610 of 2010; however, when compared with the limits set by the World Health Organization (WHO), these concentrations could possibly exceed permissible levels.
Keywords: Air quality, atomic absorption spectrophotometry, Fourier transform infrared spectroscopy, particulate matter.
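The gravimetric method mentioned above reduces to a mass-over-volume calculation. A worked example with illustrative filter masses and sampled air volume (not the study's actual data) shows how a value inside the reported 13–66 µg/m³ range arises:

\[
C_{\mathrm{PM_{10}}} = \frac{m_{\mathrm{final}} - m_{\mathrm{initial}}}{V_{\mathrm{air}}}
= \frac{(0.9152 - 0.9120)\ \mathrm{g}}{96\ \mathrm{m^3}}
\approx 33\ \mu\mathrm{g/m^3}
\]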
293 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration
Authors: H. B. Kekre, Sudeep D. Thepade
Abstract:
The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible options for obtaining image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; thus they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to happen. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce some missing regions in the vista. To correct this error, that is, to fill the missing region, we have applied an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method of missing view regeneration generates the missing view of the vista using the information present in the vista itself.
Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.
292 The Problems of Legal Regulation of Intellectual Property Rights in Innovation Activities in Russia (Institutional Approach)
Authors: Zhanna Mingaleva, Irina Mirskikh
Abstract:
Part IV of the Civil Code of the Russian Federation, dedicated to the legal regulation of intellectual property rights, came into force in 2008. It is the first attempt at codification in the intellectual property sphere in Russia, which is why many new norms appeared. The main problem of the Russian Civil Code (Part IV) is that many rules (norms of law) contradict the norms of international intellectual property law (i.e. the protection of inventions, creations, ideas, know-how, trade secrets, and innovations). Intellectual property rights protect innovations and creations and reward innovative and creative activity. Intellectual property rights are international in character, and in that respect they fit rather well with the economic reality of the global economy. Inventors often prefer not to take out a patent for their inventions because it is a very difficult procedure that takes a lot of time and is very expensive; that is why they try to protect their inventions as ideas, know-how, or confidential information. An idea is the main element of any object of intellectual property (creation, invention, innovation, know-how, etc.), yet ideas are not protected by the Civil Code of the Russian Federation. The aim of the paper is to reveal the main problems of the legal regulation of intellectual property in Russia and to suggest possible solutions. The authors have examined these essential issues through different activities: through a panel survey and questionnaires distributed among participants in intellectual activities, the main problems of implementing innovations and protecting ideas and know-how were identified. The implementation of the research results will help to solve economic and legal problems of innovations, the transfer of innovations, and intellectual property.
Keywords: Innovation activities, intellectual property rights, know-how, patents, indicators of innovation activities
291 The Effect of Response Feedback on Performance of Active Controlled Nonlinear Frames
Authors: M. Mohebbi, K. Shakeri
Abstract:
The effect of different combinations of response feedback on the performance of active control systems for nonlinear frames has been studied in this paper. To this end, different feedback combinations including displacement, velocity, acceleration and full response feedback have been utilized in controlling the response of an eight-story bilinear hysteretic frame, which has been subjected to a white noise excitation and controlled by eight actuators that could fully control the frame. For active control of the nonlinear frame, the Newmark nonlinear instantaneous optimal control algorithm has been used, in which a diagonal matrix has been selected for the weighting matrices in the performance index. For the optimal design of the active control system, with the objective of reducing the maximum drift to below the yielding level, a Distributed Genetic Algorithm (DGA) has been used to determine the proper set of weighting matrices. The criterion used to assess the effect of each combination of response feedback has been the minimum control force required to reduce the maximum drift to below the yielding drift. The results of the numerical simulations show that the performance of the active control system depends on the type of response feedback: velocity feedback is more effective in designing the optimal control system than displacement and acceleration feedback, and using full response feedback in the controller design leads to the minimum control force among all combinations. The distributed genetic algorithm also shows acceptable convergence speed in solving the optimization problem of designing active control systems.
Keywords: Active control, Distributed genetic algorithms, Response feedback, Weighting matrices.
290 Quantification of Biomethane Potential from Anaerobic Digestion of Food Waste at Vaal University of Technology
Authors: Kgomotso Matobole, Pascal Mwenge, Tumisang Seodigeng
Abstract:
Global urbanisation and worldwide economic growth have caused a high rate of food waste generation, resulting in environmental pollution. Food waste disposed of in landfills decomposes to produce methane (CH4), a greenhouse gas, and inadequate waste management practices allow food waste to pollute the environment. Thus, effective management and treatment of the organic fraction of municipal solid waste (OFMSW) are attracting widespread attention in many countries. This problem can be minimised by employing the anaerobic digestion process, since food waste is rich in organic matter and highly biodegradable, resulting in energy generation and waste volume reduction. The current study investigated the biomethane potential (BMP) of the Vaal University of Technology canteen food waste using anaerobic digestion. Tests were performed on canteen food waste, as a substrate, with total solids (TS) of 22%, volatile solids (VS) of 21% and a moisture content of 78%. The tests were performed in batch reactors, at a mesophilic temperature of 37 °C, with two different types of inoculum: primary and digested sludge. The resulting CH4 yields for food waste with digested sludge and with primary sludge were equal, at 357 Nml/g VS, indicating that food waste from this canteen is rich in organic matter and highly biodegradable; hence it can be used as a substrate for the anaerobic digestion process. The food waste with digested sludge and with primary sludge both fitted the first-order kinetic model, with k for primary sludge inoculated food waste being 0.278 day⁻¹ with an R² of 0.98, whereas k for digested sludge inoculated food waste was 0.034 day⁻¹, with an R² of 0.847.
Keywords: Anaerobic digestion, biogas, biomethane potential, food waste.
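For reference, the first-order kinetic model referred to above is commonly written as a cumulative methane yield curve; this is the standard form used for BMP data, not a formula quoted from the paper. Using the reported ultimate yield of 357 Nml/g VS and k = 0.278 day⁻¹ for the primary-sludge inoculum, the predicted yield after 10 days would be:

\[
B(t) = B_{0}\left(1 - e^{-kt}\right), \qquad
B(10) = 357\left(1 - e^{-0.278 \times 10}\right) \approx 357 \times 0.938 \approx 335\ \mathrm{Nml/g\ VS}.
\]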
289 Influence of Yeast Strains on Microbiological Stability of Wheat Bread
Authors: E. Soboleva, E. Sergachyova, S. G. Davydenko, T. V. Meledina
Abstract:
The problem of food preservation is extremely important for mankind. Viscous spoilage ("illness", or rope) of bread results from the development of Bacillus spp. bacteria. Highly temperature-resistant spores of this microorganism withstand temperatures up to 120°C, survive baking and remain in the bread, potentially causing spoilage of the final product. Scientists are interested in further characterization of bread-spoiling Bacillus spp. species. Our aim was to find out whether yeast Saccharomyces cerevisiae strains that are able to produce a natural antimicrobial killer factor can prevent bread illness. By a diffusion method, we showed yeast antagonistic activity against spore-forming bacteria. The experimental technological parameters were the same as for bakers' yeast production on the industrial scale. A risograph test during dough fermentation demonstrated gas production. The major finding of the study was a clear indication of antagonistic activity of the killer yeast strain against the bacteria causing rope in bread. After demonstrating the antagonistic effect of S. cerevisiae on bacteria using a solid nutrient medium, we tested baked bread under provocative conditions. We also measured the formation of carbon dioxide in the dough, the dough-making duration and the quality of the final products when using different strains of S. cerevisiae. It was determined that the use of the yeast S. cerevisiae RCAM 01730 killer strain inhibits the appearance of rope in bread. Thus, the natural antimicrobial killer toxin produced by some S. cerevisiae strains protects bread against rope.
Keywords: Bakers' yeasts, rope in bread, Saccharomyces cerevisiae.
288 Service Flow in Multilayer Networks: A Method for Evaluating the Layout of Urban Medical Resources
Authors: Guanglin Song
Abstract:
Situated within the context of China's tiered medical treatment system, this study analyzes the spatial causes of urban healthcare access difficulties from the perspective of the configuration of healthcare facilities. A social network analysis approach is employed to construct a healthcare demand and supply flow network between major residential clusters and the various tiers of hospitals in the city. The findings reveal that: 1) There is an overall maldistribution and over-concentration of healthcare resources in the study area, characterized by structural imbalance. 2) The low rate of primary care utilization in the study area is a key factor contributing to congestion at higher-tier hospitals, as excessive reliance on these institutions by neighboring communities exacerbates the problem. 3) Gradual optimization of the healthcare facility layout in the study area, encompassing the holistic, local, and individual institutional levels, can enhance systemic efficiency and resource balance. This research proposes a method for evaluating urban healthcare resource distribution structures based on service flows within hierarchical networks. It offers spatially targeted optimization suggestions for promoting the implementation of the tiered healthcare system and alleviating challenges related to accessibility and congestion in seeking medical care. In addition, the study provides new ideas for researchers and healthcare managers in countries and cities around the world that face similar challenges.
Keywords: Flow of public services, healthcare facilities, spatial planning, urban networks.
287 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution can be found, although optimization algorithms such as the genetic algorithm and ant colony optimization exist. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually employed in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling and proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
286 Entropy Generation and Heat Transfer of Cu–Water Nanofluid Mixed Convection in a Cavity
Authors: Mliki Bouchmel, Belgacem Nabil, Abbassi Mohamed Ammar, Geudri Kamel, Omri Ahmed
Abstract:
In this work, mixed convection and entropy generation of a Cu–water nanofluid in a lid-driven square cavity have been investigated numerically using the lattice Boltzmann method. The horizontal walls of the cavity are adiabatic, and the vertical walls have constant but different temperatures. The top wall is considered to be moving from left to right at a constant speed, U0. The effects of different parameters such as the nanoparticle volume concentration (0–0.05), Rayleigh number (10⁴–10⁶) and Reynolds number (1, 10 and 100) on the entropy generation, flow and temperature fields are studied. The results show that the addition of nanoparticles to the base fluid affects the entropy generation, flow pattern and thermal behavior, especially at higher Rayleigh and low Reynolds numbers. For the pure fluid as well as the nanofluid, increasing the Reynolds number increases the average Nusselt number and the total entropy generation linearly. The maximum entropy generation occurs in the nanofluid at low Rayleigh number and high Reynolds number; the minimum entropy generation occurs in the pure fluid at low Rayleigh and Reynolds numbers. Also, at higher Reynolds number, the enhancement of heat transfer by the Cu nanoparticles is reduced because the effect of the lid-driven flow becomes dominant. The present results are validated by favorable comparisons with previously published results. The results of the problem are presented in graphical and tabular forms and discussed.
Keywords: Entropy generation, mixed convection, nanofluid, lattice Boltzmann method.
285 Evaluation of the Beach Erosion Process in Varadero, Matanzas, Cuba: Effects of Different Hurricane Trajectories
Authors: Ana Gabriela Diaz, Luis Fermín Córdova, Jr., Roberto Lamazares
Abstract:
The island of Cuba, the largest of the Greater Antilles, is located in the tropical North Atlantic and is affected every year by numerous weather events, which have caused severe damage to its coastal areas. Like many other coastlines around the world, the beautiful beaches of the Hicacos Peninsula also suffer from erosion, which leads to a structural regression of the coastline. If measures are not taken, the hotels will be exposed to the advance of the sea, and this will be a serious problem for the economy. With the aim of studying the intensity of this type of activity, specialists from the coastal and marine engineering group of the CIH, within the framework of the research conducted under the MEGACOSTAS 2 project, simulate extreme events and assess their impact on coastal areas, mainly regarding the definition of flood volumes and morphodynamic changes in sandy beaches. The main objective of this work is the evaluation of the erosion process at Varadero beach, on the Hicacos Peninsula (a coastal sector with an important impact on the country's economy), for different hurricane trajectories. The mathematical model XBeach, which was integrated into the coastal engineering system introduced by the MEGACOSTAS 2 project, was applied to determine the most critical areas and profiles for the hurricane trajectories under study. The results have shown that the central area is the most dynamic area in the simulations of the three hurricane trajectories under study, showing high erosion volumes and the greatest average coastline regression, from 15 to 22 m.
Keywords: Beach, erosion, mathematical model, coastal areas.
284 Study of the Thermal Performance of Bio-Sourced Materials Used as Thermal Insulation in Buildings under Humid Tropical Climate
Authors: Guarry Montrose, Ted Soubdhan
Abstract:
In the fight against climate change, the energy-consuming building sector must also be taken into account. In this context, the thermal insulation of buildings using bio-based materials is an interesting solution, and the thermal performance of some materials of this type has therefore been studied. The advantages of these natural materials of plant origin are multiple: they are biodegradable, of low economic cost, renewable and readily available. The use of bio-based materials is becoming widespread in the building sector in order to replace conventional insulation materials with natural ones. Vegetable fibers are particularly important because they have good thermal behaviour and good insulating properties. The aim of using bio-sourced materials is in line with the logic of energy control and environmental protection: the approach is to make the inhabitants of the houses comfortable and to reduce their energy consumption (energy efficiency). In this research, we present the results of studies carried out on the thermal conductivity of banana leaves, latan leaves, vetiver fibers, palm kernel fibers, sargassum, coconut leaves, sawdust and bulk sugarcane leaves. The study of thermal conductivity was carried out in two ways: on the one hand using the flash method, and on the other hand using a so-called hot box experiment. We discuss and highlight the influence of a number of factors, such as moisture and air pockets present in the samples, on the thermophysical properties of these materials, in particular the thermal conductivity. Finally, the result of a thermal performance test of banana leaves on a roof in Haiti is also presented in this work.
Keywords: Buildings, insulating properties, natural materials of plant origin, thermal performance.
283 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization
Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang
Abstract:
Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided, and when they are significant, the performance of the real-time optimization is impaired and the expected energy saving is reduced. In this paper, model mismatches in the real-time optimization of a chiller plant are considered. In such optimization, a simplified semi-physical or grey-box chiller model is always used, which has to be identified from available operation data. To overcome the mismatches associated with the chiller model, a hybrid Genetic Algorithm (HGA) method is used for online real-time training of the chiller model. The HGA combines the Genetic Algorithm (GA) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional trial-and-error process of GAs. The identification of the model parameters is formulated as an optimization problem whose objective function is the least-square error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.
Keywords: Energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning.
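A minimal sketch of the parameter-identification idea described above: a mutation-only evolutionary (GA-like) global search followed by a derivative-free local refinement, minimizing the least-square error between a model and measured outputs. The two-parameter chiller model, the synthetic data and the search settings are all assumptions for illustration, not the paper's HGA or grey-box model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical grey-box chiller model: predicted power as a function of the
# part-load ratio Q and two parameters (a, b). Illustrative form only.
def model(params, Q):
    a, b = params
    return a * Q + b * Q**2

def sse(params, Q, P_meas):
    return np.sum((model(params, Q) - P_meas) ** 2)  # least-square error

rng = np.random.default_rng(1)
Q = np.linspace(0.2, 1.0, 20)                                   # synthetic operating points
P_meas = model([0.55, 0.30], Q) + rng.normal(0, 0.01, Q.size)   # synthetic measurements

# --- Global stage: a very small mutation-only evolutionary search ---
pop = rng.uniform(0.0, 1.0, size=(40, 2))        # initial population in [0, 1]^2
for _ in range(50):
    fitness = np.array([sse(p, Q, P_meas) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]      # keep the 20 best individuals
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.02, (40, 2))
    children[:5] = parents[:5]                   # elitism: carry over the best 5
    pop = children
best = pop[np.argmin([sse(p, Q, P_meas) for p in pop])]

# --- Local stage: derivative-free refinement around the global result ---
res = minimize(sse, best, args=(Q, P_meas), method="Nelder-Mead")
print("identified parameters:", res.x)
```

The two-stage structure mirrors the idea of combining a global search with a faster local method, so the local stage starts from a point already close to the global optimum.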
282 Values as a Predictor of Cyber-bullying Among Secondary School Students
Authors: Bülent Dilmaç, Didem Aydoğan
Abstract:
The use of new technologies such as the internet (e-mail, chat rooms) and cell phones has increased steeply in recent years, and the use of technological tools and equipment is widespread, especially among children and young people. Although many teachers and administrators now recognize the problem of school bullying, few are aware that students are being harassed through electronic communication. Referred to as electronic bullying, cyber bullying, or online social cruelty, this phenomenon includes bullying through email, instant messaging, in a chat room, on a website, or through digital messages or images sent to a cell phone. Cyber bullying is defined as causing deliberate/intentional harm to others using the internet or other digital technologies. This study has a quantitative research design and uses a relational survey as its method. The participants consisted of 300 secondary school students in the city of Konya, Turkey. 195 (64.8%) participants were female and 105 (35.2%) were male; 39 (13%) students were in grade 1, 187 (62.1%) in grade 2 and 74 (24.6%) in grade 3. The "Cyber Bullying Question List" developed by Arıcak (2009) was given to the students. Following questions about demographics, a functional definition of cyber bullying was provided. In order to specify the students' human values, the "Human Values Scale (HVS)" developed by Dilmaç (2007) for secondary school students was administered; the scale consists of 42 items in six dimensions. Data analysis was conducted by the primary investigator of the study using SPSS 14.00 statistical analysis software. Descriptive statistics were calculated for the analysis of students' cyber bullying behaviour, and simple regression analysis was conducted in order to test whether each value in the scale could explain cyber bullying behaviour.
Keywords: Cyber bullying, values, secondary school students.
281 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it is found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor a standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
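A minimal sketch of the dictionary-indexed string-matching idea described above. The three vocabulary terms and their concept identifiers are illustrative placeholders, not Q-Map's actual knowledge sources or API.

```python
import re

# Hypothetical curated vocabulary (Q-Map indexes sources such as UMLS;
# the entries below are illustrative only).
VOCAB = {
    "myocardial infarction": "C0027051",
    "hypertension": "C0020538",
    "type 2 diabetes mellitus": "C0011860",
}

def extract_concepts(note: str):
    """Return (surface form, concept id, character span) for every vocabulary hit."""
    hits = []
    text = note.lower()
    for term, cui in VOCAB.items():
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text):
            hits.append((term, cui, m.span()))
    return hits

note = "Pt with hypertension and prior myocardial infarction, on metformin."
print(extract_concepts(note))
```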
280 Numerical Study on the Flow around a Steadily Rotating Spring: Understanding the Propulsion of a Bacterial Flagellum
Authors: Won Yeol Choi, Sangmo Kang
Abstract:
The propulsion of a bacterial flagellum in a viscous fluid has attracted much interest in the field of biological hydrodynamics, but is not yet fully understood and thus remains a challenging problem. In this study, therefore, we have numerically investigated the flow around a steadily rotating micro-sized spring to further understand bacterial flagellum propulsion. Note that a bacterium gains thrust (propulsive force) to move forward by rotating the flagellum, which is connected to the body through a bio motor. For the investigation, we convert the spring model from the micro scale to the macro scale using a similitude law (scale law) and perform simulations on the converted macro-scale model using a commercial software package, CFX v13 (ANSYS). To scrutinize the propulsion characteristics of the flagellum through the simulations, we perform parameter studies by changing flow parameters such as the pitch, helical radius and rotational speed of the spring and the Reynolds number (or fluid viscosity), which are expected to affect the thrust force experienced by the rotating spring. The results show that the propulsion characteristics depend strongly on the parameters mentioned above. The forward thrust is observed to increase linearly with either the rotational speed or the fluid viscosity. In addition, the thrust is directly proportional to the square of the helical radius, while with increasing pitch the thrust first increases and then decreases after reaching a peak value. Finally, we also present visualizations of the corresponding flow and pressure fields to support these observations.
Keywords: Fluid viscosity, hydrodynamics, similitude, propulsive force.
279 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds
Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid
Abstract:
A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements on the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate and well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
Keywords: Dam-break flows, deformable beds, finite element method, finite volume method, linear elasticity, shallow water equations.
278 Enhance Indoor Environment in Buildings and Its Effect on Improving Occupant's Health
Authors: Imad M. Assali
Abstract:
Recently, the world's main problems have been global warming and climate change, which affect both outdoor and indoor environments, especially air quality (AQ), as a result of the vast migration of people from rural to urban areas. Cities have therefore become more crowded and denser through irregular population increase, and increasing urbanization has caused many environmental problems, such as rising land prices, changes in lifestyle, and new buildings that are not adapted to the climate and produce uncomfortable and unhealthy indoor conditions. Interior environments are the places that create the most intimate relationship with the user; consequently, the indoor environment quality (IEQ) of buildings has become uncomfortable and unhealthy for occupants. The symptoms commonly associated with a poor indoor environment include itchiness, headache, fatigue, and respiratory complaints such as cough and congestion; these symptoms tend to improve over time or even disappear when people are away from the building. Therefore, designing a healthy indoor environment that fulfils human needs is the main concern for architects and interior designers. This research explores how occupant expectations and environmental attitudes may influence occupant health and satisfaction within the context of the indoor environment. In doing so, it reviews and contributes to the methods and tools used to evaluate only the indoor environment quality (IEQ) components of building performance. Its main aim is to review the literature on indoor human comfort, followed by a review of previously published papers related to human comfort. Finally, this paper provides possible approaches at the design level for healthy buildings.
Keywords: Sustainable building, indoor environment quality (IEQ), occupant's health, active system, sick building syndrome (SBS).
277 Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory
Authors: Gesine Hellwig
Abstract:
Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two distinct theoretical concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized in accordance with the theory of linear time-invariant (LTI) systems by using a convolution approach. Based on mathematical considerations, it can be shown that, also in the case of the open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function (the impulse response function) summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and thus results known from IDT remain valid for the compartment approach. Given the large number of applications of compartmental analysis, similar solutions of the so-called forward problem, even for a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (though not from the mathematical point of view) there seems to be a gap between the two approaches, which the author would like to bridge by an exemplary analysis of the well-known model.
Keywords: Functional imaging, Tracer kinetic modeling, LTI system, Indicator dilution theory / convolution approach, Two-Compartment model.
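In the generic notation commonly used in tracer kinetic modeling (the symbols below are not necessarily those of the paper), the statement that the tissue concentration-time course is a convolution of the arterial input function with a system (impulse response) function reads:

\[
C_{\mathrm{tissue}}(t) = (c_a \ast R)(t) = \int_{0}^{t} c_a(\tau)\, R(t-\tau)\, \mathrm{d}\tau ,
\]

where \(c_a\) denotes the arterial input function and \(R\) the impulse response function summarizing the microcirculatory transport.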
276 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm to handle missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.
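A minimal sketch of the kind of experiment described above: XGBoost handles missing feature values natively, so a four-class classifier can be cross-validated on a matrix containing NaNs. The synthetic data, feature count and hyperparameters are assumptions for illustration; they are not the ADNI features or the settings used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for a multimodal feature matrix: 4 classes
# (CN, EMCI, LMCI, AD) and ~28% of entries missing, as in the study.
rng = np.random.default_rng(42)
X = rng.normal(size=(1631, 20))
y = rng.integers(0, 4, size=1631)
mask = rng.random(X.shape) < 0.28
X[mask] = np.nan                       # XGBoost treats NaN as "missing" natively

clf = XGBClassifier(
    objective="multi:softprob",        # multiclass with probability output
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="mlogloss",
)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")  # 10-fold CV
print(f"mean accuracy: {scores.mean():.3f}")
```

With real labels, per-class precision, recall and F-score can be obtained from sklearn's classification_report instead of a single accuracy figure.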
275 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the University of California, Irvine (UCI) repository, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131), for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
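A minimal sketch of the fixed-length look-back windowing described above, turning a multivariate series into supervised (X, y) pairs; the synthetic data and the choice to predict the first feature are assumptions for illustration.

```python
import numpy as np

def make_windows(series: np.ndarray, look_back: int, horizon: int):
    """Turn a (T, n_features) multivariate series into supervised pairs:
    X[i] = the `look_back` past steps, y[i] = the value `horizon` steps ahead."""
    X, y = [], []
    for t in range(look_back, len(series) - horizon + 1):
        X.append(series[t - look_back:t])
        y.append(series[t + horizon - 1, 0])   # predict the first feature
    return np.stack(X), np.array(y)

# Example: 24 hourly steps (one day) used to predict 1 hour ahead,
# mirroring the best-performing setting reported above.
hourly = np.random.default_rng(0).normal(size=(1000, 5))  # synthetic stand-in
X, y = make_windows(hourly, look_back=24, horizon=1)
print(X.shape, y.shape)   # (976, 24, 5) (976,)
```

The resulting (samples, look_back, features) tensor is the input shape expected by RNN/LSTM/GRU and Transformer encoders alike, so the same windows can be fed to each model in the comparison.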
274 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor
Authors: Barenten Suciu
Abstract:
In this work, the effects of friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact and, on one hand, is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation, but on the other hand, is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, moving along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model is derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on this model, the traction force and the traction torque acting on the double-cone are identified. It was proved that the rolling traction force is always smaller than the sliding friction force, i.e., the double-cone rolls without slipping. The results obtained in this work can be used to achieve the proper design of such gravitational motors.
Keywords: Truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor.
273 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps
Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li
Abstract:
The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly applied to DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conduct an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and find that 26.0% of the apps suffer from sensitive information leakage. Furthermore, DLtrace outperforms FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.
272 Study of Pipes Scaling of Purified Wastewater Intended for the Irrigation of Agadir Golf Grass
Authors: A. Driouiche, S. Mohareb, A. Hadfi
Abstract:
In Morocco's Agadir region, the reuse of treated wastewater for the irrigation of green spaces has faced the problem of scaling in the pipes carrying these waters. This research paper studies the scaling phenomenon caused by the treated wastewater from the Mzar sewage treatment plant, which is used to irrigate the golf turf of the Ocean Golf Resort. Ocean Golf, located about 10 km from the center of the city of Agadir, is one of the most important recreation centers in Morocco; the course is a Belt Collins design with 27 holes and is quite open, with deep, challenging bunkers. The formation of solid deposits in the irrigation systems has reduced their lifetime and, consequently, caused head losses and a loss of performance; the sprinklers used in golf turf irrigation become clogged within the first weeks of operation. To study this phenomenon, the wastewater used for the irrigation of the golf turf was sampled and analyzed at various points, and samples of the scale formed in the water circuits were characterized. This characterization of the scale was performed by X-ray fluorescence spectrometry, X-ray diffraction (XRD), thermogravimetric analysis (TGA), differential thermal analysis (DTA), and scanning electron microscopy (SEM). The results of the physicochemical analysis of the waters show that they are rich in bicarbonates (653 mg/L), chloride (478 mg/L), nitrate (412 mg/L), sodium (425 mg/L) and calcium (199 mg/L), and their pH is slightly alkaline. The analysis of the scale reveals that it is rich in calcium and phosphorus; it is formed of calcium carbonate (CaCO₃), silica (SiO₂), calcium silicate (Ca₂SiO₄), hydroxylapatite (Ca₁₀P₆O₂₆), calcium carbonate phosphate (Ca₁₀(PO₄)₆CO₃) and calcium magnesium silicate (Ca₅MgSi₃O₁₂).
Keywords: Agadir, irrigation, scaling water, wastewater.
271 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid
Authors: P. G. Siddheshwar, T. N. Sakshath
Abstract:
In this paper, we make linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (called Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for the investigation. The Khanafer-Vafai-Lightstone single-phase model is used for studying instabilities in nanoliquids, and the various thermophysical properties of the nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number using an analytical method with trigonometric eigenfunctions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid: the onset of convection is advanced by the addition of nanoparticles, so an increase in volume fraction leads to an earlier onset and thereby an increase in heat transport. The amplitudes of the convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state assuming small-scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed by rotation, which leads to a decrease in heat transport; hence, rotation has a stabilizing effect on the system. This is because part of the energy of the system is used to create the velocity component V. We also observe that the amount of heat transport is less in the case of rigid-rigid isothermal boundaries compared to free-free isothermal boundaries.
Keywords: Nanoliquid, rigid-rigid, rotation, single-phase.
270 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet and in applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image, and the performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chrominance components are downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling only a single chrominance component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled than when the chrominance red is downsampled, whereas the peak signal-to-noise ratio is higher when the chrominance red is downsampled than when the chrominance blue is downsampled. In particular, we use the hats.jpg image as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference for both methods.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio, Compression Ratio.
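A minimal sketch of the comparison described above: downsample only one chroma plane (Cb or Cr) after YCbCr conversion and measure the PSNR of the reconstruction. The 2x subsampling factor, the bilinear interpolation and the file path are assumptions for illustration; this is not the authors' full JPEG pipeline (no DCT or quantization step).

```python
import numpy as np
from PIL import Image

def psnr(orig: np.ndarray, rec: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((orig.astype(float) - rec.astype(float)) ** 2))
    return 20 * np.log10(255.0 / rmse)

def downsample_one_chroma(img: Image.Image, channel: str) -> Image.Image:
    """Subsample a single chroma plane ('Cb' or 'Cr') by 2 in each direction,
    then restore it by bilinear upsampling; the other planes are untouched."""
    y, cb, cr = img.convert("YCbCr").split()
    target = {"Cb": cb, "Cr": cr}[channel]
    small = target.resize((target.width // 2, target.height // 2), Image.BILINEAR)
    restored = small.resize(target.size, Image.BILINEAR)
    planes = (y, restored, cr) if channel == "Cb" else (y, cb, restored)
    return Image.merge("YCbCr", planes).convert("RGB")

# Illustrative usage on the test image mentioned above.
img = Image.open("hats.jpg").convert("RGB")
for ch in ("Cb", "Cr"):
    rec = downsample_one_chroma(img, ch)
    print(ch, "PSNR =", round(psnr(np.asarray(img), np.asarray(rec)), 2), "dB")
```

Comparing the two printed PSNR values reproduces, in simplified form, the Cb-versus-Cr trade-off discussed in the abstract.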