Search results for: GeoGebra Software
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4819

679 Mechanical Properties of Enset Fibers Obtained from Different Breeds of Enset Plant

Authors: Diriba T. Balcha, Boris Kulig, Oliver Hensel, Eyassu Woldesenbet

Abstract:

Enset fiber is an agricultural waste product available in surplus amounts in Ethiopia. However, the hypothesized variation in the properties of this fiber due to the diversity of its source breeds, the position of the fiber within the plant stem, and the duration of chemical treatment has not been verified, which makes its application in composite products problematic. Currently, limited data are available on the functional properties of the fiber. Thus, this study attempts to narrow the knowledge gap by characterizing it. The experimental design was prepared using Design-Expert software, and tensile tests were conducted on Enset fiber from 10 breeds: Dego, Dirbo, Gishera, Itine, Siskela, Neciho, Yesherkinke, Tuzuma, Ankogena, and Kucharkia. The effects of 5% NaOH surface-treatment duration and of fiber location along and across the plant pseudostem were also investigated. The test results show that the variation in rupture stress among the fibers from the 10 Enset breeds is not significant. However, the variation in strain is significant, with fiber from the Dego breed showing the highest strain before failure. Surface-treated fibers showed improved rupture strength and elastic modulus for each 24 hours of treatment duration. The results also showed that chemical treatment can deteriorate the load-bearing capacity of the fiber: raw fiber has a higher load-bearing capacity than treated fiber. In addition, both rupture stress and strain increase along the top-to-bottom gradient of the stem, whereas there is no significant variation across the stem. The variation in elastic modulus both along and across the stem was insignificant. The rupture stress, elastic modulus, and strain of Enset fiber are 360.11 ± 181.86 MPa, 12.80 ± 6.85 GPa, and 0.04 ± 0.02 mm/mm, respectively.
These results show that Enset fiber is comparable to other natural fibers such as abaca, banana, and sisal and can be used as an alternative natural fiber in composite applications. Moreover, the insignificant variation in properties among breeds and across the stem means that all breeds and all leaf sheaths of the Enset plant are suitable for fiber extraction. Short natural fibers are preferable to long ones to reduce the significant variation in properties along the stem (fiber direction). In conclusion, the use of Enset fiber in composite product design and development is mechanically feasible.
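The two headline quantities reported above follow from the basic tensile-test arithmetic. The sketch below shows that arithmetic only; the load, cross-section, and strain values are hypothetical illustrations, not the paper's measurements.

```python
# Basic tensile-test arithmetic behind the reported fiber properties.
# The load and cross-section below are hypothetical illustration values.

def rupture_stress_mpa(force_n: float, area_mm2: float) -> float:
    """Rupture stress (MPa) = failure load (N) / cross-sectional area (mm^2)."""
    return force_n / area_mm2

def elastic_modulus_gpa(stress_mpa: float, strain: float) -> float:
    """Elastic modulus (GPa) from the linear region: E = sigma / epsilon."""
    return (stress_mpa / strain) / 1000.0  # MPa -> GPa

stress = rupture_stress_mpa(force_n=7.2, area_mm2=0.02)        # 360.0 MPa
modulus = elastic_modulus_gpa(stress_mpa=stress, strain=0.04)  # 9.0 GPa
```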

Keywords: Agricultural waste, Chemical treatment, Fiber characteristics, Natural fiber

Procedia PDF Downloads 238
678 The Structure of Financial Regulation: The Regulators' Perspective

Authors: Mohamed Aljarallah, Mohamed Nurullah, George Saridakis

Abstract:

This paper investigates how structural changes in financial regulatory bodies affect financial supervision, and how regulators can design such a structure taking into account the central bank, conduct-of-business regulation, and prudential regulation; it also considers the structure of international regulatory bodies and the barriers they face. Five questions are addressed: Should conduct-of-business and prudential regulation be separated? Should financial supervision and financial stability be separated? Should financial supervision sit under the central bank? To what extent should politicians intervene in changing the regulatory and supervisory structure? And what should the regulatory and supervisory structure be when financial conglomerates exist? A semi-structured interview design is applied. The sample comprises financial regulators and supervisors from developed and emerging countries, all at a senior level in their organisations and drawn from different authorities around the world. For instance, participants come from the Bank for International Settlements, the European Central Bank, the Hong Kong Monetary Authority, and other institutions. Such variety serves the aims and objectives of the research and covers the research questions. The analysis process starts with transcription of the interviews, followed by coding in NVivo software and thematic analysis to generate the main themes. The major findings are as follows. First, organisational structure changes quite frequently when mandates are not clear. Second, measuring structural change is difficult, which makes the whole process unclear.
Third, effective coordination and communication are what regulators look for when they change the structure, and that requires openness, trust, and incentives. In addition, issues that arise during a crisis tend to be a reason why structures change, and market developments sometimes cause changes in the regulatory structure. Some structural changes occur simply because of international trends, fashion, or other countries' experiences. Furthermore, when top management changes, the structure tends to change; structures also change with political change, or because politicians wish to be seen to be doing something. Finally, fear of being blamed can be a driver of structural change. In conclusion, this research draws on insights from senior regulators and supervisors from fifty different countries to build a clear understanding, through a qualitative approach (semi-structured interviews), of why regulatory structures keep changing over time.

Keywords: financial regulation bodies, financial regulatory structure, global financial regulation, financial crisis

Procedia PDF Downloads 145
677 Experimental Analysis of the Influence of Water Mass Flow Rate on the Performance of a CO2 Direct-Expansion Solar Assisted Heat Pump

Authors: Sabrina N. Rabelo, Tiago de F. Paulino, Willian M. Duarte, Samer Sawalha, Luiz Machado

Abstract:

Energy use is one of the main indicators of the economic and social development of a country, reflected directly in the quality of life of the population. The expansion of energy use, together with the depletion of fossil resources and the poor efficiency of energy systems, has led many countries in recent years to invest in renewable energy sources. In this context, the solar-assisted heat pump has become very important in the energy industry, since it can transfer heat from the sun to water or another absorbing medium. The direct-expansion solar-assisted heat pump (DX-SAHP) water heater operates by receiving solar energy incident on a solar collector, which serves as the evaporator in a refrigeration cycle, while the energy rejected by the condenser is used for water heating. In this paper, a DX-SAHP using carbon dioxide (R744) as the refrigerant was assembled, and the influence of variation in the water mass flow rate on the system was analyzed. Parameters such as high-side pressure, water outlet temperature, gas cooler outlet temperature, evaporator temperature, and the coefficient of performance were studied. The main components used to assemble the heat pump were a reciprocating compressor, a gas cooler (a countercurrent concentric-tube heat exchanger), a needle valve, and an evaporator consisting of a copper bare flat-plate solar collector designed to capture direct and diffuse radiation. Routines were developed in LabVIEW to collect data and in MATLAB, using CoolProp, to calculate the thermodynamic properties. The measured coefficient of performance ranged from 3.2 to 5.34. It was noticed that at higher water mass flow rates the water outlet temperature decreased and, consequently, the coefficient of performance of the system increased, since the heat transfer in the gas cooler is higher. In addition, the high-side pressure of the system and the CO2 gas cooler outlet temperature decreased.
The heat pump using carbon dioxide as refrigerant, especially when operating with solar radiation, proved to be an efficient renewable system for heating residential water compared to electrical heaters, reaching temperatures between 40 °C and 80 °C.
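The coefficient of performance discussed above follows from an energy balance on the gas cooler and the compressor. A minimal sketch of that balance, with an illustrative flow rate, water temperatures, and compressor power (assumed values, not the paper's measurements):

```python
# Hedged sketch of the heating COP calculation for a DX-SAHP water heater.
# All numeric inputs are illustrative assumptions.
CP_WATER = 4.186  # kJ/(kg*K), specific heat of liquid water

def heating_capacity_kw(m_dot_kg_s: float, t_in_c: float, t_out_c: float) -> float:
    """Heat delivered to the water in the gas cooler: Q = m_dot * cp * dT."""
    return m_dot_kg_s * CP_WATER * (t_out_c - t_in_c)

def cop(q_heating_kw: float, w_compressor_kw: float) -> float:
    """Coefficient of performance = useful heat / compressor work."""
    return q_heating_kw / w_compressor_kw

q = heating_capacity_kw(m_dot_kg_s=0.02, t_in_c=20.0, t_out_c=60.0)  # ~3.35 kW
c = cop(q, w_compressor_kw=1.0)
```

With these assumed inputs, the resulting COP lands inside the 3.2-5.34 range reported in the abstract, which is why a higher flow rate (larger Q at a lower outlet temperature) pushes the COP up.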

Keywords: water mass flow rate, R-744, heat pump, solar evaporator, water heater

Procedia PDF Downloads 176
676 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) is found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, little research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. Here we consider Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic range has been chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, which is also used to generate the mesh and impose the boundary conditions. In all cases, the domain size is 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and grid points were selected after a thorough grid-independence study at n = 1.0. Fine, uniform grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder; away from the cylinder, the grid is stretched in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip: du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation, with a no-slip wall (u = v = 0) on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and solved with Ansys Fluent 14.5. The SIMPLE algorithm, the default pressure-velocity coupling scheme in Fluent's finite-volume solver, is selected for this purpose.
The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that the drag coefficient increases continuously as n decreases. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
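The governing dimensionless quantities can be sketched directly: the Reynolds number Re = U∞a/ν as defined above, plus the Ostwald-de Waele power-law apparent viscosity that makes the fluid shear-thinning for n < 1. All numeric inputs below are illustrative assumptions, not the simulation's settings.

```python
# Reynolds number and power-law apparent viscosity for a pseudoplastic fluid.
# Numeric inputs are illustrative assumptions.

def reynolds(u_inf: float, a: float, nu: float) -> float:
    """Re = U_inf * a / nu, with a the side of the square cylinder."""
    return u_inf * a / nu

def apparent_viscosity(k: float, shear_rate: float, n: float) -> float:
    """Ostwald-de Waele model: mu_app = K * gamma_dot**(n - 1).
    For n < 1 (pseudoplastic), viscosity falls as the shear rate rises."""
    return k * shear_rate ** (n - 1.0)

re = reynolds(u_inf=1.0, a=0.01, nu=1e-4)                  # 100.0
mu_low = apparent_viscosity(k=1.0, shear_rate=1.0, n=0.8)
mu_high = apparent_viscosity(k=1.0, shear_rate=10.0, n=0.8)  # lower: shear-thinning
```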

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 207
675 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack after roasting in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value, and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50, and 60 min) were used for roasting groundnut seeds in an oven. The roasted seeds were then cooled and milled to flour; milled sun-dried raw groundnuts served as the reference. Proximate analysis (moisture, energy, and crude fat) was performed using standard methods, antioxidant content was determined using HPLC, and mineral (cobalt, chromium, silicon, and iron) contents were determined by digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid followed by atomic absorption spectrometry. All results were subjected to ANOVA using SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%), and crude fat (5%–64%) of the seeds, whereas antioxidant content significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted seeds were significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%), and silicon (58%–77%). The iron content, however, was not significantly (p ≤ 0.05) affected.
Generally, the tested runner groundnut seeds had a higher food value in the raw state than in the roasted state, except for antioxidant content. Moisture is a critical factor affecting the shelf life, texture, and flavor of the final product; loss of moisture prolongs shelf life, which contributes to the stability of the roasted peanuts. The increased antioxidant content of roasted groundnuts is also desirable, as antioxidants are among the health-promoting compounds of the seed. In conclusion, the overall reduction in the proximate and mineral contents of runner groundnut seeds due to roasting is sufficient to suggest that roasting time influences the food value and shelf life of the final product.
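The percentage reductions quoted above are all computed relative to the raw (sun-dried) reference. A one-line sketch of that arithmetic, using hypothetical moisture values rather than the study's data:

```python
# Percent change relative to the raw (sun-dried) reference.
# The moisture figures are hypothetical, for illustration only.

def percent_reduction(raw: float, roasted: float) -> float:
    """Positive result = reduction relative to the raw reference, in %."""
    return (raw - roasted) / raw * 100.0

# e.g. moisture content falling from 6.0% to 1.0% after roasting:
drop = percent_reduction(6.0, 1.0)  # ~83.3% reduction
```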

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 290
674 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amounts of data available these days exceed the capacity of a single machine, so to meet the demands of this ever-growing data there is a need for a multi-machine Apriori algorithm. For such distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, Spark and Flink have attracted much attention because of their built-in support for distributed computation. We earlier proposed a Reduced-Apriori algorithm on the Spark platform, which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it implements, tests, and benchmarks Apriori, Reduced-Apriori, and our new algorithm ReducedAll-Apriori (RA-Apriori) on Apache Flink and compares them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of an earlier iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
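For orientation, the frequent-itemset step that RA-Apriori distributes can be sketched as a minimal single-machine Apriori. This is an illustrative pure-Python version of the classic algorithm, not the authors' Flink or Spark implementation:

```python
# Minimal single-machine Apriori: count candidates, keep those meeting
# min_support, join frequent k-itemsets into (k+1)-candidates, repeat.
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support_count} for all itemsets with count >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    candidates = [frozenset([i]) for i in sorted(items)]  # L1 candidates
    frequent, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        keys = list(level)
        candidates = list({a | b for a, b in combinations(keys, 2)
                           if len(a | b) == k + 1})
        k += 1
    return frequent

txns = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}]
result = apriori(txns, min_support=2)  # {a}, {b}, {c}, {a,b}, {a,c}, {b,c}
```

The distributed variants parallelise exactly the counting step inside the loop; Flink's pipelining lets the (k+1)-th counting pass begin before the k-th has fully finished.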

Keywords: apriori, apache flink, Mapreduce, spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 298
673 Fostering Ties and Trusts through Social Interaction within Community Gardening

Authors: Shahida Mohd Sharif, Norsidah Ujang

Abstract:

Recent research has shown that many of the urban population in Kuala Lumpur, especially in the lower-income group, suffer from socio-psychological problems. They are reported to experience anxiety, depression, and stress, made worse by the recent COVID-19 pandemic. Much of the population was forced to observe the Movement Control Order (MCO), part of the pandemic mitigation measures, pushing them to live in isolation as the new normal. The study identifies the need for a better approach to help these people, especially the lower-income population, cope with their socio-psychological conditions. In Kuala Lumpur, as part of the Local Agenda 21 programme, the Kuala Lumpur City Hall has introduced the Green Initiative: Urban Farming, one of whose approaches is the community garden. The local authority promotes this engagement as capable of improving the social environment of the participants, and research has demonstrated that social interaction within community gardens can help members improve their socio-psychological conditions. The study therefore explores, from a social attachment perspective, the experience of residents of low-cost flats participating in the community gardening initiative. Semi-structured interviews will be used to collect the participants' experiences of community gardening and of how the social interaction among members forms and develops ties and trust. For context, the low-cost flats are part of the government social housing programme (Program Perumahan Rakyat dan Perumahan Awam), while the community gardening initiative (Projek Kebun Kejiranan Bandar LA21 KL) is a local authority initiative to address the participants' social, environmental, and economic issues. The study will conduct thematic analysis on the collected data, using ATLAS.ti software for data organization and management.
The findings could help other researchers and stakeholders understand the experience of social interaction within community gardens and its relation to ties and trust. They could also shed light on how participants can improve their social environment, and the report could provide the local authority with evidence-based documentation.

Keywords: community gardening participation, lower-income population, social attachment, social interaction

Procedia PDF Downloads 138
672 Guidelines for the Management Process Development of Research Journals in Order to Develop Suan Sunandha Rajabhat University to International Standards

Authors: Araya Yordchim, Rosjana Chandhasa, Suwaree Yordchim

Abstract:

This research studies guidelines for developing the management process of research journals in order to bring Suan Sunandha Rajabhat University up to international standards. It investigates elements ranging from article format, the evaluation form for research article quality, and the process of creating a scholarly journal, to the satisfaction level of those with the knowledge and competency to conduct research, the problems that arise, and their solutions. Drawing on a sample of 40 persons with the knowledge and competency to conduct research and create scholarly journal articles at an international level, the data were collected using questionnaires. Data were analyzed with computer software using frequency, percentage, mean, standard deviation, and multiple regression analysis. The majority of participants were civil servants with a doctoral degree, followed by civil servants with a master's degree. The suitability of the article format was rated at a good level, as was the evaluation form for research article quality. Based on participants' viewpoints, the process of creating scholarly journals was at a good level, while the satisfaction of those with the knowledge and competency to conduct research was at a satisfactory level. The main problem encountered was difficulty in accessing the website; the proposed solution is to develop a website with user-friendly accessibility, including setting up a Google Scholar profile so that citation counts and article usage can be tracked in real time. Research article format influenced the satisfaction level of those with the knowledge and competency to conduct research, with statistical significance at the 0.01 level.
The research article quality assessment form (preface section, research article writing section, preparation of research article manuscripts section, and the original article evaluation form for the author) affected the satisfaction of those with the knowledge and competency to conduct research, with statistical significance at the 0.01 level. The process of establishing journals had an impact on their satisfaction, with statistical significance at the 0.05 level.

Keywords: guidelines, development of management, research journals, international standards

Procedia PDF Downloads 124
671 Change Detection of Water Bodies in Dhaka City: An Analysis Using Geographic Information System and Remote Sensing

Authors: M. Humayun Kabir, Mahamuda Afroze, K. Maudood Elahi

Abstract:

Since the late 1900s, unplanned and rapid urbanization has drastically altered the land, reduced water bodies, and decreased vegetation cover in Dhaka, the capital city of Bangladesh. Capitalist modes of urbanization result in the encroachment of surface water bodies in this city. The main goal of this study is change detection of the water bodies of Dhaka city, analyzing their spatial distribution and calculating their rate of change. The effort aims to inform public policy for environmental-justice initiatives around protecting water bodies and ensuring the proper functioning of the urban ecosystem. The study accomplishes this goal by compiling satellite imagery in GIS software to understand the changes in the water bodies of Dhaka city. The work focuses on the late 20th to early 21st century, analyzing the city before and after major infrastructural changes that occurred in an unplanned manner. The land use of the study area was classified into four categories, and the areas of the different land uses were calculated using MS Excel and SPSS. The results reveal that urbanization expanded from the central to the northern part of the city, with major encroachment occurring in the western and eastern parts. It was also found that in 1988 the total area of water bodies was 8935.38 hectares; it gradually decreased, reaching 6065.73, 4853.32, and 2077.56 hectares in 1998, 2008, and 2017, respectively. Rapid population growth, unplanned urbanization, and industrialization have generated pressure to change the land use pattern of Dhaka city, and these expansion processes are engulfing wetlands, water bodies, and vegetation cover without consideration of environmental impact. In order to regain the wetlands and surface water bodies, the concerned authorities must implement the relevant laws as a legal instrument and take action against violators.
This research synthesizes time-series data to provide a complete picture of the status of water bodies in Dhaka city, which may help in making plans and policies for water body conservation.
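The reported areas translate directly into loss rates. A short sketch of that arithmetic, using the hectare figures from the abstract (the helper function and variable names are ours):

```python
# Rate-of-change arithmetic on the water-body areas reported in the abstract.
areas_ha = {1988: 8935.38, 1998: 6065.73, 2008: 4853.32, 2017: 2077.56}

def annual_change_rate(a0: float, a1: float, years: int) -> float:
    """Average change in hectares per year between two observations."""
    return (a1 - a0) / years

# Total loss over the whole study period, as a share of the 1988 area:
total_loss_pct = (areas_ha[1988] - areas_ha[2017]) / areas_ha[1988] * 100  # ~76.7%

# Average rate over 1988-2017 (negative = shrinking):
rate = annual_change_rate(areas_ha[1988], areas_ha[2017], 2017 - 1988)
```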

Keywords: ecosystem, GIS, industrialization, land use, remote sensing, urbanization

Procedia PDF Downloads 154
670 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data are critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often impractical at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition at a given height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared with the corresponding surveyed coordinates. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics for each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro. With calibration for the same UAS, accuracy improved markedly, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results were an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated with the Phantom 4 RTK at 40 m height.
The accuracy of elevation determination decreased as flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results show that calibration with ground control panels improves the accuracy of elevation determination, especially for a UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data at 40 m when a substantial number of surveyed ground control panels are used, while the Phantom 4 RTK provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.
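The two evaluation metrics used above can be sketched in a few lines. The surveyed/derived elevation pairs below are hypothetical, standing in for the panel coordinates and model elevations described in the abstract:

```python
# RMSE and R^2 between surveyed panel elevations and UAS-derived elevations.
# The sample values are hypothetical illustration pairs.
import math

def rmse(observed, predicted):
    """Root mean squared error over paired observations."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

surveyed = [100.00, 100.52, 101.10, 99.80]  # RTK-surveyed panel elevations (m)
derived  = [100.03, 100.50, 101.08, 99.85]  # elevations from the UAS model (m)
```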

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 165
669 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English–English comparison that provides the Old English segment with inflectional-form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available and the average amount of corpus annotation is low. Against this background, the presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of the lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of the paper deals with lemmatisation. It presents the lemmatiser Norna, implemented in FileMaker, which is based on a concordance and an index to the Dictionary of Old English Corpus, comprising around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions stress the limits of automatising dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
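The search over stems and inflectional endings described above can be illustrated with a toy affix-stripping lemmatiser. The lexicon entries and candidate endings below are invented for illustration; they are not Norna's actual data or logic:

```python
# Toy dictionary-based lemmatisation by ending-stripping, in the spirit of
# Norna's search over stems and inflectional endings. Lexicon and endings
# are invented illustration values.
LEXICON = {"cyning": "cyning", "stan": "stan"}  # stem -> lemma
ENDINGS = ["as", "es", "e", "a", ""]            # candidate inflectional endings

def lemmatise(form: str):
    """Strip each known ending in turn; return the lemma if the stem is listed."""
    for ending in ENDINGS:
        if ending:
            if not form.endswith(ending):
                continue
            stem = form[: -len(ending)]
        else:
            stem = form  # uninflected form
        if stem in LEXICON:
            return LEXICON[stem]
    return None  # unresolved: left for manual lemmatisation

lemma = lemmatise("cyningas")  # "cyningas" -> stem "cyning" -> lemma "cyning"
```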

Keywords: corpus linguistics, historical linguistics, old English, parallel corpus

Procedia PDF Downloads 213
668 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian

Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu

Abstract:

Recording history is a tradition of ancient China. The record of a dynasty makes a dynastic history; the record of a locality makes a chorography; and the record of a clan makes a genealogy. The three together depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees and pedigrees of early and medieval times. After the Song Dynasty, civil society gradually emerged and the Emperor allowed people of the same clan to live together and hold ancestor-worship activities, so the compilation of genealogies became popular. Since then genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in clan identification and in maintaining local social order. Chinese genealogies are rich in documentary material. Take the Genealogy of the Wu Clan in Gaoqian as an example: Gaoqian is a small village in Xianju County, Zhejiang Province, and its genealogy comprises a whole set of materials, from the foreword to family trees, family rules, family rituals, family graces and glories, odes to ancestors' portraits, the manual for the ancestral temple, documents about great men of the clan, works written by its learned men, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is the carrier of the entire culture of a clan. However, owing to their scarcity and the difficulty of reading them, genealogies seldom come to the attention of ordinary people. This paper, focusing on the case of the Genealogy of the Wu Clan in the Village of Gaoqian, intends to reproduce the genealogy digitally by means of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village.
On this basis, the paper explores general methods for transferring physical genealogies to digital ones and ways of visualizing the clan culture embedded in them, combining digital technologies such as family-tree software, multimedia narratives, animation design, GIS applications, and e-book creators.

Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS

Procedia PDF Downloads 222
667 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana

Authors: Jemima C. A. Sumboh

Abstract:

Given the urgency and consensus for countries to move towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households' living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the measured level of OOPHP. Using different survey instruments can therefore produce inaccurate, inconsistent, and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study explores the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption According to Purpose (COICOP) OOPHP questions in household surveys), and Version III (existing questions used to measure OOPHP in health surveys integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) was used). Versions I, II, and III contained 11, 44, and 56 health items, respectively; the recall period was held constant across versions. The sample sizes for Versions I, II, and III were 930, 1032, and 1068 households, respectively.
Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using Stata 15 and the ADePT software for each version. It is expected that findings from this study will contribute valuably to the repository of knowledge on standardizing survey instruments to obtain valid and consistent estimates of financial risk protection.
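The catastrophic side of the financial risk protection measurement described above can be sketched as follows; the 10% threshold and the data layout are illustrative assumptions, not the study's exact specification.

```python
def catastrophic_incidence(households, threshold=0.10):
    """Share of households whose out-of-pocket health payments (OOPHP)
    exceed a given share of total household expenditure.

    households: list of (oophp, total_expenditure) pairs.
    threshold: catastrophic threshold (0.10 = 10%), illustrative only.
    """
    flagged = sum(
        1 for oophp, total in households
        if total > 0 and oophp / total > threshold
    )
    return flagged / len(households)

# Illustrative data: two of four households spend more than 10% of
# their total budget on health out of pocket.
sample = [(15, 100), (5, 100), (30, 100), (0, 100)]
print(catastrophic_incidence(sample))  # -> 0.5
```

Running the same computation on each survey version's data would expose how item disaggregation shifts the estimated incidence.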

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage

Procedia PDF Downloads 138
666 Teamwork of Teachers in Kindergarten and School Heads Implementing Focused Leadership

Authors: Vilma Zydziunaite, Simona Kersiene

Abstract:

The concept of focused leadership means that the leader gathers the entire community in various ways to communicate and cooperate with each other, to share knowledge and responsibility, to get involved in problem-solving, to create a safe and trusting environment, and to satisfy the needs and interests of each community member. The study's aim is to analyze the teamwork of teachers working in kindergartens and schools and of their CEOs implementing focused leadership. A mixed research design was used. The quantitative research used the teamwork test "Team-Puls" (2003); data were processed with the IBM SPSS version 29.0 software package. Semi-structured interviews were used for qualitative data collection, and qualitative content analysis was applied for data analysis. The results of the quantitative research show no statistically significant difference between the evaluation averages of kindergarten and school teachers. Likewise, the effectiveness and evaluation of teacher teamwork in educational institutions depend on different characteristics and processes, such as the number of participating teachers, the involvement of the institution's administration, or the stages of team formation. In the qualitative research, the components of the focused leadership categories applied by the kindergarten and school CEOs emerged. The categories reflect the components of shared leadership. The study distinguishes the sharing of responsibilities and cooperation among teachers and the sharing of knowledge among them. This shows that these actions take place between teachers when they participate in the processes voluntarily, according to their wishes or for certain reasons. Distributed leadership components occur when leadership responsibility is extended beyond the school CEO. The components of servant leadership are expressed when the CEO achieves organizational goals in the service of others.
Servant leadership means helping and striving for others and creating a safe environment. The level of the educational institution does not affect teachers' evaluations of working in a team. The CEO's role is to give teachers freedom, divide responsibilities, create cooperation between teachers, and ensure teachers' interests, needs, emotional well-being, and professional development.

Keywords: teamwork, school, teacher, school CEO, school environment, mixed research, Team-Puls test, semi-structured interview, questioning survey, qualitative content analysis, focused leadership, teacher leadership

Procedia PDF Downloads 62
665 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)

Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami

Abstract:

Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of progress in adhesive dentistry, composite restorations still fail due to secondary caries, which occur because of environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of restorations is highly required. The aim of this study was to evaluate gap formation at the composite/cavity wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared at three locations (mid-coronal, cervical, and root) of bovine incisor teeth in two groups (SE and PA). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, Group PA was first pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sections were obtained from each cavity using OCT at a 1310 nm wavelength, at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was then calculated using image analysis software, and the difference between the group means was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: Pretreatment with phosphoric acid etching (Group PA) led to significantly larger gaps in the mid-coronal and cervical cavities compared to the SE group, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in the root cavities were significantly larger than those in the mid-coronal and cervical cavities within the same group.
This study investigated the effect of phosphoric acid on gap progression in composite restorations. In conclusion, phosphoric acid etching did not reduce gap formation in any region of the tooth. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially with phosphoric acid pre-etching.
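The per-section gap measurement and the between-group comparison can be sketched roughly as below; the function names and the pure-Python Welch statistic are illustrative stand-ins for the image analysis software and the t-test actually used in the study.

```python
import math

def gap_percentages(gap_lengths_mm, interface_mm):
    """Gap length as a percentage of the cavity interface, one value
    per OCT cross-section (illustrative computation)."""
    return [100.0 * g / interface_mm for g in gap_lengths_mm]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (here, the SE
    and PA groups), without assuming equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Illustrative gap lengths (mm) along a 10 mm interface.
se_group = gap_percentages([1.0, 2.0, 3.0], 10.0)   # -> [10.0, 20.0, 30.0]
pa_group = gap_percentages([4.0, 5.0, 6.0], 10.0)
```

A negative t here would indicate larger mean gap percentages in the PA group, consistent with the reported result for mid-coronal and cervical cavities.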

Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives

Procedia PDF Downloads 221
664 The Effectiveness of Extracorporeal Shockwave Therapy on Pain and Motor Function in Subjects with Knee Osteoarthritis: A Systematic Review and Meta-Analysis of Randomized Clinical Trials

Authors: Vu Hoang Thu Huong

Abstract:

Background and Purpose: Although the effects of Extracorporeal Shockwave Therapy (ESWT) on pain in participants with knee osteoarthritis (KOA) have been investigated, its effects on physical performance remain unclear. This study aims to explore the effects of ESWT on pain relief and physical performance in KOA. Methods: Randomized controlled studies investigating the effects of ESWT on KOA were systematically searched, using inclusion and exclusion criteria, through seven electronic databases (including PubMed) between 1990 and December 2022. To summarize the data, the visual analog scale (VAS) or pain scores were used to measure pain intensity. Range of knee motion or physical activity scores, including the Lequesne index (LI), the Knee Injury and Osteoarthritis Outcome Score (KOOS), and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), were used to measure physical performance. The first evaluation after the treatment period was defined as the post-treatment (immediate) effect, and the last evaluation was defined as the follow-up (end) effect in our study. Data analysis was performed using the RevMan 5.4.1 software. The significance level was set at p<0.05. Results: Eight studies (499 participants) reporting ESWT effects on mild-to-moderate (Kellgren–Lawrence Grades I to III) KOA qualified for meta-analysis. Compared with the sham or placebo group, the ESWT group had a significant decrease in VAS rest score (mean difference [95% confidence interval]: 0.90 [0.12 to 1.67]) and WOMAC pain score (2.49 [1.22 to 3.76]), and a significant improvement in physical performance, with decreases in the WOMAC activity score (8.18 [3.97 to 12.39]), LI (3.47 [1.68 to 5.26]), and KOOS (5.87 [1.73 to 10.00]), in the post-treatment period.
In the follow-up period, there were also significant decreases in the WOMAC pain score (2.83 [2.12 to 3.53]) and in the WOMAC activity (9.47 [7.65 to 11.28]) and LI (4.12 [2.34 to 5.89]) scores. Besides, compared with other treatment groups, ESWT also showed improvements in pain and physical performance, but these were not significant. Conclusions: ESWT is an effective and valuable method for pain relief and for improving physical activity in participants with mild-to-moderate KOA. Clinical Relevance: ESWT relieves pain and improves physical performance in participants with KOA.
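The pooled mean differences reported above can be sketched with a fixed-effect inverse-variance model; RevMan's actual computation, and the choice between fixed- and random-effects models in the review, may differ, and the data below are illustrative.

```python
import math

def pooled_mean_difference(studies):
    """Fixed-effect inverse-variance pooling of mean differences.

    studies: list of (mean_difference, standard_error) per study.
    Returns the pooled mean difference and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Illustrative (not the review's) per-study VAS mean differences.
md, ci = pooled_mean_difference([(0.8, 0.3), (1.1, 0.4), (0.7, 0.5)])
```

Studies with smaller standard errors receive larger weights, so a precise trial dominates the pooled estimate.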

Keywords: knee osteoarthritis, extracorporeal shockwave therapy, pain relief, physical performance, shockwave

Procedia PDF Downloads 88
663 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling

Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal

Abstract:

From a structural engineering point of view, monitoring structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, advanced computing tools have enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information is stored and exchanged via model files based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for the semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to the monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model expressed in the Unified Modeling Language (UML). Subsequently, the semantic model is mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed. The TLCD is used to reduce structural responses in the context of dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, acceleration responses are automatically recorded by the sensor nodes, which are equipped with accelerometers, and analyzed using embedded computing.
As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
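A minimal sketch of how entities of the semantic model might look before being mapped into the IFC schema is shown below; the class, attribute, and property-set names are hypothetical illustrations, not part of the study's UML model or of the IFC standard.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    """Wireless sensor node of a prototype SHM system (illustrative
    names; the study's actual UML entities may differ)."""
    node_id: str
    sensor_type: str   # e.g. "accelerometer"
    location: str      # e.g. "story 4, slab center"
    samples: list = field(default_factory=list)

def to_ifc_property_set(node: SensorNode) -> dict:
    """Map a sensor node to a dictionary shaped like an IFC property
    set (the pset name is hypothetical, not a standard IFC pset)."""
    return {
        "PsetName": "Pset_ShmSensorNode",
        "Properties": {
            "NodeId": node.node_id,
            "SensorType": node.sensor_type,
            "Location": node.location,
        },
    }

node = SensorNode("N1", "accelerometer", "story 4, slab center")
print(to_ifc_property_set(node)["Properties"]["NodeId"])  # -> N1
```

The same pattern (typed semantic entity, then a structured mapping) is what an IFC schema extension formalizes.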

Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing

Procedia PDF Downloads 153
662 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa, with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentrations, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were validated in MATLAB and COMSOL. The model conserves water volume; the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level.
The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
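The water balance and the linear outflow control law noted above can be sketched as below; the variable names and the explicit Euler step are illustrative, not COMSOL's discretization.

```python
def step_water_balance(volume, rain, evap, inflow, outflow, dt):
    """One explicit Euler step of the lake water balance (sketch):
    dV/dt = rain - evaporation + river inflow - outflow."""
    return volume + dt * (rain - evap + inflow - outflow)

def outflow_control(mean_level, ref_level, q_ref, gain):
    """Hypothetical linear control law: the single outflow responds
    only to the deviation of mean water level from a reference."""
    return q_ref + gain * (mean_level - ref_level)

# If rain + inflow exactly balances evaporation + outflow, the lake
# volume is conserved over the step.
v_next = step_water_balance(100.0, rain=2.0, evap=1.0,
                            inflow=3.0, outflow=4.0, dt=1.0)
print(v_next)  # -> 100.0
```

Because rain and evaporation dominate the balance, small changes in rainfall feed straight through to the controlled water level.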

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
661 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

Fire incidents have steadily increased over recent years, according to the National Emergency Management Agency of South Korea. Even though most fire incidents with property damage occur in buildings, rehabilitation has often not been properly done with consideration of structural safety. Therefore, this study aims at evaluating the rehabilitation effects on fire-damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two different cover thicknesses were used: 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling the removed part with polymeric mortar. Both fire-damaged and rehabilitated beams were tested with a four-point loading system to observe the structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire-damaged beams. In addition to the fire-damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8 to 10% higher than those of the non-rehabilitated beams and even 1 to 6% higher than those of the non-fire-damaged beams.
The stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire-damaged beams. In addition, the structural behaviors predicted by the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. In this study, both the experimental and analytical results demonstrated a good rehabilitation effect on fire-damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to accurately predict the structural behaviors of rehabilitated and fire-damaged concrete beams without time- and cost-consuming experimental processes.

Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam

Procedia PDF Downloads 508
660 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study demonstrating the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts differing in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows. (1) The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results; style is present in both planes. (2) Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction. This 'grammar' would be a style marker. (3) Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? (4) The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, each one's option has discriminatory power. (5) Size is another issue for various attribution methods.
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
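The Jaccard coefficient used to compare the tag sets can be sketched as follows; the tag names are illustrative, not the study's actual semiotic categories.

```python
def jaccard(tags_a, tags_b):
    """Jaccard coefficient: size of the intersection over size of the
    union of two tag sets (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(tags_a), set(tags_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two texts sharing two of four distinct (hypothetical) tags.
print(jaccard({"thematic", "figurative", "actorial"},
              {"thematic", "figurative", "temporal"}))  # -> 0.5
```

Under the study's hypothesis, pairs of texts by the same author should score systematically higher than cross-author pairs.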

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 243
659 Development of Coir Reinforced Composite for Automotive Parts Application

Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth

Abstract:

The demand for lightweight and fuel-efficient automobiles has led to the use of fiber-reinforced polymer composites in place of traditional metal parts. Coir, a natural fiber, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices, due to the fiber's moisture content and method of preparation, has been a challenge; to address it, the extracted coir was chemically treated with NaOH. To develop a side-view mirror encasement by investigating the mechanical effects of fiber percentage composition, fiber length, and epoxy percentage composition in a coir fiber reinforced composite, polyester was adopted as the resin for the mold and epoxy as the resin for the product, with coir serving as the filler material. Specimens with varied compositions of fiber loading (15%, 30%, and 45%), fiber length (10, 15, 20, 30, and 45 mm), and epoxy resin weight (55%, 70%, and 85%) were fabricated using the hand lay-up technique, and the specimens were later subjected to mechanical tests (tensile, flexural, and impact). The results of the mechanical tests showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and a coir length of 45 mm, which was used for the development of a vehicle's side-view mirror encasement. The optimal values of the response parameters are a tensile strength of 49.333 MPa, a flexural strength of 57.118 MPa, an impact strength of 34.787 kJ/m², a Young's modulus of 4.788 GPa, a load of 4.534 kN, and 20.483 mm of elongation. The models developed using the Design-Expert software revealed that the input factors can achieve the response parameters with 94% desirability.
The study showed that coir is quite durable as a filler material in an epoxy composite for automobile applications and that fiber loading and length have a significant effect on the mechanical behavior of coir fiber-reinforced epoxy composites. Coir's low density, considerable tensile strength, and biodegradability contribute to its eco-friendliness and its potential for reducing the environmental hazards of synthetic automotive components.
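As a first-order sanity check on the effect of fiber loading, the Voigt rule of mixtures gives an upper-bound estimate of composite modulus; the moduli below are illustrative placeholders, and the study itself fitted Design-Expert regression models rather than this textbook relation.

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Upper-bound (Voigt) estimate of composite tensile modulus from
    fiber and matrix moduli (same units) and fiber volume fraction."""
    return e_fiber * v_fiber + e_matrix * (1.0 - v_fiber)

# Illustrative moduli (GPa): a stiff fiber in a softer epoxy matrix
# at the study's optimal 45% fiber fraction.
print(rule_of_mixtures(20.0, 4.0, 0.45))  # upper bound, in GPa
```

The estimate rises linearly with fiber fraction, which is consistent with fiber loading being a significant factor in the measured stiffness.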

Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test

Procedia PDF Downloads 64
658 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)

Authors: Carolina Silva Ansélmo

Abstract:

Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient, and adequately planned, considering its potential users. An airport with good ground access conditions can provide users with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study focuses on airport access in a comparative scenario among bus, private vehicle, subway, taxi, and urban mobility transport applications to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share of each mode. The applied method was the Adaptive Choice-Based Conjoint Analysis (ACBC) technique using the Sawtooth Software. Conjoint analysis, rooted in Utility Theory, is a survey technique that quantifies the customer's perceived utility when choosing among alternatives. Assessing user preferences provides insights into their priorities regarding product or service attributes. An additional advantage of conjoint analysis is that it requires a smaller sample size than other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay, and market dynamics, aiding strategic decision-making on customer experience, pricing, and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The case study questionnaire obtained 213 valid responses considering the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport.
As a result, price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The largest market share in the choice set belongs to urban mobility transport applications, followed by buses, private vehicles, taxis, and subways.
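Willingness to Pay can be derived from estimated part-worth utilities roughly as sketched below; the utility values and the linear price scaling are illustrative assumptions, not Sawtooth Software's estimation procedure.

```python
def willingness_to_pay(u_target, u_baseline, u_price_per_unit):
    """WTP for moving from a baseline attribute level to a target
    level: the utility gain divided by the marginal (dis)utility of
    one currency unit of price (illustrative part-worth scaling)."""
    return (u_target - u_baseline) / abs(u_price_per_unit)

# Hypothetical part-worths: upgrading comfort gains 1.5 utils, and
# each currency unit of price costs 0.05 utils.
print(willingness_to_pay(2.0, 0.5, -0.05))  # currency units
```

A comparison of WTP values across attributes shows which improvements (e.g., shorter access time vs. more comfort) users would actually pay for.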

Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay

Procedia PDF Downloads 79
657 Modeling Floodplain Vegetation Response to Groundwater Variability Using ArcSWAT Hydrological Model, Moderate Resolution Imaging Spectroradiometer - Normalised Difference Vegetation Index Data, and Machine Learning

Authors: Newton Muhury, Armando A. Apan, Tek Maraseni

Abstract:

This study modelled the relationships between vegetation response and the water available below the soil surface using the Normalised Difference Vegetation Index (NDVI) generated from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and soil water content (SWC) data. The Soil & Water Assessment Tool (SWAT) interface known as ArcSWAT was used in ArcGIS for the groundwater analysis. The SWAT model was calibrated and validated in the SWAT-CUP software using 10 years (2001-2010) of monthly streamflow data. The average Nash-Sutcliffe Efficiency during calibration and validation was 0.54 and 0.51, respectively, indicating good model performance. Twenty years (2001-2020) of monthly MODIS NDVI data for three vegetation types (forest, shrub, and grass) and soil water content for 43 sub-basins were analysed using the WEKA machine learning tool with two supervised machine learning algorithms, support vector machine (SVM) and random forest (RF). The modelling results show that vegetation response and soil water content vary between the dry and wet seasons. For example, the model produced high positive correlations (r=0.76, 0.73, and 0.81) between the measured and predicted NDVI values of all vegetation in the study area against groundwater flow (GW), soil water content (SWC), and the combination of these two variables, respectively, during the dry season. However, these relationships were reduced by 36.8% (r=0.48) and 13.6% (r=0.63) against GW and SWC, respectively, in the wet season. On the other hand, the model predicted a moderate positive relationship (r=0.63) between the shrub vegetation type and soil water content during the dry season, which was reduced by 31.7% (r=0.43) during the wet season. The models also predicted that vegetation in the upper part of a sub-basin is highly responsive to GW and SWC (r=0.78 and 0.70) during the dry season.
The results of this study indicate that the study region is suitable for seasonal crop production in the dry season. Moreover, the results predicted that the growth of vegetation in the upper locations is highly dependent on groundwater flow in both the dry and wet seasons, and any instability or long-term drought can negatively affect these floodplain vegetation communities. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
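The seasonal r values reported above are correlations between measured and predicted NDVI; a minimal Pearson computation is sketched below with illustrative data (the study itself obtained predictions from WEKA's SVM and RF models).

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    numeric sequences (e.g. measured vs. predicted NDVI)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related series give r = 1.0.
print(pearson_r([0.2, 0.4, 0.6], [0.25, 0.45, 0.65]))  # -> 1.0
```

Comparing r computed on dry-season and wet-season subsets reproduces the kind of seasonal contrast the study reports.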

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 119
656 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises being present in future driving, with its immersive technology let to show directions and maps to identify important places indicating with graphic elements when the car driver requires the information. On the other side, driving is considered a multitasking activity and, for some people, a complex activity where different situations commonly occur that require the immediate attention of the car driver to make decisions that contribute to avoid accidents; therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating the performance in driving vehicles with the influence of augmented reality devices to detect the level of attention in drivers, since it is important to know the effect that it produces. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG Myoware are joined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving in which the level of traffic can be controlled, as well as the number of pedestrians that exist within the simulation obtaining a driver interaction in real mode and through a MSP430 microcontroller achieves the acquisition of data for storage. The sensors bring a continuous analog signal in time that needs signal conditioning, at this point, a signal amplifier is incorporated due to the acquired signals having a sensitive range of 1.25 mm/mV, also filtering that consists in eliminating the frequency bands of the signal in order to be interpretative and without noise to convert it from an analog signal into a digital signal to analyze the physiological signals of the drivers, these values are stored in a database. 
Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbor) classification methods and decision trees (both supervised learning methods) that enable the study of the data for the identification of patterns and allow the different effects of augmented reality on drivers to be determined by classification. The expected results of this project are a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
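The k-nearest-neighbor classification step described above can be sketched as follows; this is a minimal pure-Python illustration with hypothetical feature values (mean amplitude, dominant frequency) and attention labels, not the authors' implementation:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features extracted from EEG/ECG/EMG windows,
# labelled by the driver's attention level.
train = [(0.2, 8.0), (0.3, 9.0), (0.9, 4.0), (1.0, 3.5)]
labels = ["attentive", "attentive", "distracted", "distracted"]

print(knn_predict(train, labels, (0.25, 8.5)))  # falls in the "attentive" cluster
```

A real pipeline would replace the invented tuples with feature vectors computed from the conditioned sensor signals stored in the database.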

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 142
655 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve has been computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements have been adopted and different direction angles of lateral forces have been studied. Thanks to these results, a linear elastic Finite Element Model has been defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses have been performed on the base-isolated structures by applying seven bidirectional seismic events. 
The spectrum-compatibility of bidirectional earthquakes has been studied by considering different combinations of the single components and by adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and good agreement with the assumed design values.
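One common rule for combining the two horizontal-component spectra mentioned above is the square root of the sum of squares (SRSS) of the ordinates at each period; a minimal sketch, with illustrative spectral values that are not taken from the study:

```python
import math

def srss_spectrum(sa_x, sa_y):
    """Combine two horizontal-component response spectra, ordinate by
    ordinate, via the square root of the sum of squares (SRSS)."""
    return [math.hypot(ax, ay) for ax, ay in zip(sa_x, sa_y)]

# Illustrative spectral accelerations (g) at a few shared periods.
sa_x = [0.30, 0.50, 0.40]
sa_y = [0.40, 0.50, 0.30]
print(srss_spectrum(sa_x, sa_y))
```

Other combinations (e.g. the geometric mean of the components) follow the same ordinate-by-ordinate pattern; which rule applies depends on the code being used.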

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 292
654 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. 
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) we provide a novel method to compute the performance measures regarding unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas, so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
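The performance measures underlying such an evaluation are typically precision, recall and F1 over the extracted records; a minimal sketch with invented records (not the authors' evaluation sets):

```python
def extraction_scores(extracted, gold):
    """Precision, recall and F1 for a set of extracted records against
    a gold-standard annotation set (exact-match comparison)."""
    tp = len(extracted & gold)  # true positives: records in both sets
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical (slot, value) records from one web document.
gold = {("title", "Web mining"), ("author", "P. Jimenez"), ("year", "2016")}
extracted = {("title", "Web mining"), ("author", "P. Jimenez"), ("year", "2061")}
print(extraction_scores(extracted, gold))  # (0.666..., 0.666..., 0.666...)
```

For unsupervised proposals, the challenge the paper addresses is computing these same measures without a user manually matching extractions to annotations.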

Keywords: web information extractors, information extraction evaluation method, Google scholar, web

Procedia PDF Downloads 248
653 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization issues of water resources are complicated due to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- or money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements. A dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The system of Dez dam water resources was simulated applying this basic information in order to determine the capability of its reservoir to meet the objectives of the performed plan. As a metaheuristic method, a genetic algorithm was applied in order to provide utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the aforementioned model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves and the decrease in decision-making variables in the system was determined through system simulation and by comparing the results with optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; therefore, a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model in order to reduce the number of variables. 
Water reservoir programming studies have been performed based on basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages.
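The genetic-algorithm search for a release rule curve can be illustrated with a toy single-year reservoir model; the inflows, demands, capacity and GA operators below are hypothetical simplifications, not the Dez dam data or the authors' MATLAB model:

```python
import random

random.seed(42)

# Toy monthly inflows and demands (volume units) and reservoir capacity.
INFLOW = [8, 9, 12, 15, 14, 10, 6, 4, 3, 4, 5, 7]
DEMAND = [6, 6, 7, 8, 9, 10, 10, 9, 7, 6, 6, 6]
CAPACITY = 30.0

def shortage(rule):
    """Simulate one year: 'rule' gives 12 target monthly releases; the
    fitness is the total unmet demand (to be minimised)."""
    storage, deficit = CAPACITY / 2, 0.0
    for inflow, demand, target in zip(INFLOW, DEMAND, rule):
        release = min(target, storage + inflow)      # cannot release more than available
        storage = min(storage + inflow - release, CAPACITY)  # spill above capacity
        deficit += max(demand - release, 0.0)
    return deficit

def evolve(pop_size=40, generations=200, mut=0.3):
    pop = [[random.uniform(0, 12) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=shortage)
        parents = pop[: pop_size // 2]               # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 12)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                # gaussian mutation of one gene
                i = random.randrange(12)
                child[i] = max(0.0, child[i] + random.gauss(0, 1))
            children.append(child)
        pop = parents + children
    return min(pop, key=shortage)

best = evolve()
print(round(shortage(best), 2))
```

The evolved 12-value rule curve plays the role of the monthly operating rule; the real study instead evaluates fitness over a 30-year monthly simulation of the Dez reservoir.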

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 267
652 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near net shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or unintended changes of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness, as well as analysing the accuracy of available software to predict how 3D woven preforms behave under compression. To achieve this, 3D preforms are modelled and compression simulated in Wisetex with varying architectures of binder style, pick density, thickness and tow size. These architectures have then been woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared alongside the Wisetex models to determine the accuracy of the prediction and identify architecture parameters that can affect the preform compressibility and stability. 
Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation but the highest overall stability under compression, while layer-to-layer binders showed a reduction in fibre crimp of the binder. In general, simulations showed reasonable agreement with experimental results; however, deviation is evident due to assumptions present within the modelled results.
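The link between compressed thickness and fibre volume fraction mentioned above follows the standard areal-weight relation Vf = areal weight / (fibre density × thickness); a small sketch with illustrative values that are not measurements from the study:

```python
def fibre_volume_fraction(areal_weight, density, thickness):
    """Vf from preform areal weight (g/m^2), fibre density (g/cm^3)
    and moulded thickness (mm): the thickness of solid fibre divided
    by the cavity thickness."""
    solid_thickness = areal_weight / (density * 1000.0)  # mm of solid fibre
    return solid_thickness / thickness

# Illustrative E-glass 3D preform: 3000 g/m^2, density 2.55 g/cm^3,
# compressed to progressively thinner mould cavities.
for t in (3.0, 2.5, 2.0):
    print(t, round(fibre_volume_fraction(3000, 2.55, t), 3))
```

The sketch shows why compression during RTM directly sets Vf: halving the remaining free thickness proportionally raises the fibre volume fraction, at the cost of the architecture distortion discussed above.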

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 136
651 An Assessment of the Role of Actors in the Medical Waste Management Policy-Making Process of Bangladesh

Authors: Md Monirul Islam, Shahaduz Zaman, Mosarraf H. Sarker

Abstract:

Context: Medical waste management (MWM) is a critical sector in Bangladesh due to its impact on human health and the environment. There is a need to assess the current policies and identify the role of policy actors in the policy formulation and implementation process. Research Aim: The study aimed to evaluate the role of policy actors in the medical waste management policy-making process in Bangladesh, identify policy gaps, and provide actionable recommendations for improvement. Methodology: The study adopted a qualitative research method and conducted key informant interviews. The data collected were analyzed using the thematic coding approach through Atlas.ti software. Findings: The study found that policies are formulated at higher administrative levels and implemented in a top-down approach. Higher-level institutions predominantly contribute to policy development, while lower-level institutions focus on implementation. However, due to negligence, ignorance, and lack of coordination, medical waste management receives insufficient attention from the actors. The study recommends immediate strategies, a comprehensive action plan, regular policy updates, and inter-ministerial meetings to enhance medical waste management practices and interventions. Theoretical Importance: The research contributes to evaluating the role of policy actors in medical waste management policymaking and implementation in Bangladesh. It identifies policy gaps and provides actionable recommendations for improvement. Data Collection: The study used key informant interviews as the data collection method. Thirty-six participants were interviewed, including influential policymakers and representatives of various administrative spheres. Analysis Procedures: The data collected were analyzed using the inductive thematic analysis approach. Question Addressed: The study aimed to assess the role of policy actors in medical waste management policymaking and implementation in Bangladesh. 
Conclusion: In conclusion, the study provides insights into the current medical waste management policy in Bangladesh, the role of policy actors in policy formulation and implementation, and the need for improved strategies and policy updates. The findings of this study can guide future policy-making efforts to enhance medical waste management practices and interventions in Bangladesh.

Keywords: key informant, medical waste management, policy maker, qualitative study

Procedia PDF Downloads 81
650 The Acquisition of /r/ By Setswana-Learning Children

Authors: Keneilwe Matlhaku

Abstract:

Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulations in the acquisition of rhotics. This article provides a detailed analysis of the early development of the rhotic phoneme, an apical trill /r/, by monolingual Setswana (Tswana S30) children of ages ranging between 1 and 4 years. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution with dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females), whose speech was recorded in their homes over a period of 4 months with no or only minimal disruption of their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children’s data. Phon also enabled the generation of the children’s phonological inventories for comparison with adult target IPA forms. We explain the children’s patterns through current models of phonological emergence (MacWhinney 2015), as well as McAllister Byun, Inkelas & Rose (2016) and Rose et al. (2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We highlight how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in the production of these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges faced by the child toward learning these auditory-articulatory mappings. 
We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, in which we show that the variability and volatility in the substitution patterns of /r/ are also brought about by the properties of the children’s ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana.
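A tally of /r/ substitution patterns of the kind reported above can be computed from aligned target/produced transcriptions; a minimal sketch with invented segment sequences, not the Phon corpus data:

```python
from collections import Counter

def r_substitutions(pairs):
    """Tally what a child produced wherever the adult target has /r/,
    given aligned (target, produced) segment sequences."""
    tally = Counter()
    for target, produced in pairs:
        for t, p in zip(target, produced):
            if t == "r":
                tally[p] += 1
    return tally

# Hypothetical aligned transcriptions (lists of segments).
data = [
    (["r", "a"], ["l", "a"]),                       # lateralization
    (["p", "u", "r", "a"], ["p", "u", "j", "a"]),   # gliding
    (["r", "o"], ["d", "o"]),                       # coronal stopping
    (["r", "a"], ["l", "a"]),
]
print(r_substitutions(data))  # e.g. Counter({'l': 2, 'j': 1, 'd': 1})
```

In practice, Phon performs the segment-level alignment between target and actual IPA forms; this sketch only illustrates the counting step over an already-aligned corpus.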

Keywords: rhotic, apical trill, Phon, phonological emergence, auditory-articulatory mapping

Procedia PDF Downloads 40