Search results for: collaborative education and design research
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10504


394 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method for predicting the seismic demand of structures. On the other hand, its main deficiency is the computational time required to obtain a result. When it is applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of the main structure. The seismic demand of a sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have already been calculated. Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with those of exact time history dynamic analysis. The efficiency of the proposed method is demonstrated for three types of earthquakes (classified by the time of peak ground acceleration).
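
The SRSS rule named above combines the peak response of each mode into a single demand estimate. As a hedged illustration only (not the authors' BMD implementation), a minimal Python sketch with hypothetical peak modal displacements:

```python
import numpy as np

# Hypothetical peak modal displacements (one value per mode) for a single
# storey of a shear building; these numbers are illustrative only.
peak_modal_disp = np.array([0.042, 0.015, 0.006])  # metres

# SRSS combination: peak response ~ square root of the sum of squared modal
# peaks (appropriate when the modal frequencies are well separated).
srss = np.sqrt(np.sum(peak_modal_disp ** 2))

# Absolute-sum combination gives a conservative upper bound for comparison.
abs_sum = np.sum(np.abs(peak_modal_disp))

print(f"SRSS estimate:      {srss:.4f} m")
print(f"Absolute-sum bound: {abs_sum:.4f} m")
```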

Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.

393 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis

Authors: Akinola Ikudayisi, Josiah Adeyemo

Abstract:

The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, several variables must be considered when estimating and modeling ETₒ. This study therefore presents a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ was the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performance, two principal components accounting for 82.7% of the variance were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables such as rainfall and relative humidity are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modeling at VIS, South Africa.
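
As an illustrative sketch of the PCA step described above (synthetic stand-in data and assumed column names, not the SAWS/ARC records):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the monthly inputs; column names are assumptions.
rng = np.random.default_rng(0)
n = 240  # 20 years of monthly records
df = pd.DataFrame({
    "t_min": rng.normal(12, 4, n),
    "t_max": rng.normal(27, 5, n),
    "rainfall": rng.gamma(2, 20, n),
    "rel_humidity": rng.normal(55, 10, n),
    "wind_speed": rng.normal(2.5, 0.8, n),
})

# Standardise so each variable contributes on a comparable scale, then
# retain two components, as in the study (~82.7% of the variance there).
X = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
# Loadings show which variables dominate each retained component.
print(pd.DataFrame(pca.components_.T, index=df.columns, columns=["PC1", "PC2"]))
```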

Keywords: Irrigation, principal component analysis, reference evapotranspiration, Vaalharts.

392 Effect of Mixing Process on Polypropylene Modified Bituminous Concrete Mix Properties

Authors: Noor Zainab Habib, Ibrahim Kamaruddin, Madzalan Napiah, Isa Mohd Tan

Abstract:

This paper presents research conducted to investigate the effect of the mixing process on polypropylene (PP) modified bitumen mixed with well graded aggregate to form a modified bituminous concrete mix. Two modes of mixing, namely dry and wet, with different concentrations of polypropylene were used with 80/100 pen bitumen to evaluate the bituminous concrete mix properties. Three polymer percentages, varying from 1% to 3% by weight of bitumen, were used in this study. Three mixes, namely the control mix, the wet mix and the dry mix, were prepared. The optimum binder content was calculated considering Marshall stability, flow, air voids and Marshall quotient at bitumen contents varying from 4% to 6.5% for the control, dry and wet mixes. The engineering properties obtained at the calculated optimum bitumen content revealed that the wet mixing process is advantageous in comparison to dry mixing, as it increases the stiffness of the mixture with increasing polymer content in the bitumen. The stiffness value for the wet mix increases with polymer content, which is beneficial in terms of rutting. The 1% PP dry mix also shows enhanced stiffness, with the air void content limited to 4%. The flow behaviour of the dry mix does not show any major difference with increasing polymer content, revealing that the polymer acts only as an aggregate without affecting the viscosity of the binder in the mix. When interacting with the 80/100 pen base bitumen, polypropylene enhances its performance characteristics, which is brought about by the altered rheological properties of the modified bitumen. The decrease in flow with increasing binder content reflects the increase in binder viscosity, which induces plastic flow in the mix. The workability index indicates that wet mixes were easier to compact to the desired void ratio than dry mix samples.

Keywords: Marshall Flow, Marshall Stability, Polymer modified bitumen, Polypropylene, Stiffness.

391 Environmental and Technical Modeling of Industrial Solid Waste Management Using Analytical Network Process: A Case Study of Gilan, Iran

Authors: D. Nouri, M.R. Sabour, M. Ghanbarzadeh Lak

Abstract:

Proper management of residues originating from industrial activities is considered one of the serious challenges faced by industrial societies due to their potential hazards to the environment. Common disposal methods for industrial solid wastes (ISWs) encompass various combinations of individual management options, i.e. recycling, incineration, composting, and sanitary landfilling. The procedure used to evaluate and nominate the best practical method should be based on environmental, technical, economical, and social assessments. In this paper an environmental-technical assessment model is developed using the analytical network process (ANP) to facilitate decision making for the ISWs generated in Gilan province, Iran. Using the results of surveys performed on industrial units located in Gilan, the various groups of solid wastes in the research area were characterized, and four different ISW management scenarios were studied. The evaluation was conducted using the above-mentioned model in the Super Decisions software (version 2.0.8) environment. The results indicate that the best ISW management scenario for Gilan province consists of recycling the metal industry residues, composting the putrescible portion of the ISWs, combusting paper, wood, fabric and polymeric wastes with energy recovery in the incineration plant, and finally landfilling the rest of the waste stream together with the rejects from the recycling and composting plants and the ash from the incineration unit.
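
A core building block of ANP (shared with AHP) is deriving priority weights from a pairwise comparison matrix via its principal eigenvector. The sketch below is a generic illustration with an assumed comparison matrix for four disposal options, not the Super Decisions model used in the study:

```python
import numpy as np

# Assumed pairwise comparison matrix for four disposal options
# (recycling, composting, incineration, landfilling); values are illustrative.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Priority weights = normalised principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency check (Saaty's random index RI = 0.90 for a 4x4 matrix).
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```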

Keywords: Analytical Network Process, Disposal Scenario, Gilan Province, Industrial Waste.

390 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines or working space are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Since it must be large enough to contain the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all areas required by the product. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products which complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for possible adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.
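
To make the idea of time-dynamic, competing area requirements concrete, here is a minimal sketch (an illustration, not the authors' optimization model) that checks whether two assembly objects with time-varying footprints collide at a candidate placement:

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    x: float       # fixed job-site position of the lower-left corner
    y: float
    widths: list   # required width per assembly period
    depths: list   # required depth per assembly period

def overlaps(a: Footprint, b: Footprint) -> bool:
    """True if the axis-aligned footprints of a and b intersect in any period."""
    for t in range(min(len(a.widths), len(b.widths))):
        ax2, ay2 = a.x + a.widths[t], a.y + a.depths[t]
        bx2, by2 = b.x + b.widths[t], b.y + b.depths[t]
        separated = ax2 <= b.x or bx2 <= a.x or ay2 <= b.y or by2 <= a.y
        if not separated:
            return True
    return False

# Illustrative data: product A grows over three periods, product B shrinks.
a = Footprint(0.0, 0.0, widths=[4, 6, 8], depths=[3, 4, 5])
b = Footprint(7.0, 0.0, widths=[5, 3, 2], depths=[5, 3, 2])
# The footprints collide only in the last period, which is exactly the kind of
# period-wise constraint the optimization model has to enforce.
print("conflict:", overlaps(a, b))
```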

Keywords: Dynamic area requirements, facility layout problem, optimization model, product assembly.

389 Genome-Wide Analysis of BES1/BZR1 Gene Family in Five Plant Species

Authors: Jafar Ahmadi, Zhohreh Asiaban, Sedigheh Fabriki Ourang

Abstract:

Brassinosteroids (BRs) regulate cell elongation, vascular differentiation, senescence, and stress responses. BRs signal through the BES1/BZR1 family of transcription factors, which regulate hundreds of target genes involved in this pathway. In this research, a comprehensive genome-wide analysis of the BES1/BZR1 gene family was carried out in Arabidopsis thaliana, Cucumis sativus, Vitis vinifera, Glycine max and Brachypodium distachyon. Sequence characteristics, dot plots and hydropathy plots were analyzed for the protein and genomic sequences of the five plant species. The maximum amino acid length belonged to protein sequence Brdic3g with 374 aa and the minimum to protein sequence Gm7g with 163 aa. The maximum instability index, 79.99, belonged to protein sequence AT1G19350, and the minimum, 33.22, to protein sequence Gm5g. The aliphatic index of these protein sequences ranged from 47.82 to 78.79 in Arabidopsis thaliana, 49.91 to 57.50 in Vitis vinifera, 55.09 to 82.43 in Glycine max, 54.09 to 54.28 in Brachypodium distachyon, and 55.36 to 56.83 in Cucumis sativus. Overall, the data obtained from our investigation contribute to a better understanding of the complexity of the BES1/BZR1 gene family and provide a first step towards directing future experimental designs for systematic analysis of the functions of the BES1/BZR1 gene family.
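
The aliphatic index values reported above can be reproduced from a protein sequence with Ikai's formula, AI = X(Ala) + 2.9 X(Val) + 3.9 (X(Ile) + X(Leu)), where X is the mole percentage of each residue. A minimal sketch with a made-up fragment, not one of the BES1/BZR1 proteins analysed:

```python
def aliphatic_index(seq: str) -> float:
    """Aliphatic index after Ikai (1980):
    AI = X_Ala + 2.9*X_Val + 3.9*(X_Ile + X_Leu),
    where X is the mole percentage of each residue in the sequence."""
    seq = seq.upper()
    n = len(seq)
    mole_pct = lambda aa: 100.0 * seq.count(aa) / n
    return mole_pct("A") + 2.9 * mole_pct("V") + 3.9 * (mole_pct("I") + mole_pct("L"))

# Hypothetical fragment used only to show the calculation.
fragment = "MKALVVILAAGLSLVAEETKQLIVRA"
print(round(aliphatic_index(fragment), 2))
```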

Keywords: BES1/BZR1, Brassinosteroids, Phylogenetic analysis, Transcription factor.

388 Isolation and Screening of Laccase Producing Basidiomycetes via Submerged Fermentations

Authors: Mun Yee Chan, Sin Ming Goh, Lisa Gaik Ai Ong

Abstract:

Approximately 10,000 different types of dyes and pigments are used in various industrial applications yearly, including the textile and printing industries. However, these dyes are difficult to degrade naturally once they enter the aquatic system, and their high persistence in the natural environment poses a potential health hazard to all forms of life. Hence, there is a need for an alternative dye removal strategy based on bioremediation. In this study, fungal laccase is investigated via commercial dye agar plates and submerged fermentation to explore its application in textile dye wastewater treatment. Two locally isolated basidiomycetes were screened for laccase activity using media supplemented with commercial dyes such as 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), guaiacol and Remazol Brilliant Blue R (RBBR). Isolates TBB3 (1.70±0.06) and EL2 (1.78±0.08) gave the highest results on the ABTS plates, with a greenish halo appearing around the isolates. Submerged fermentation performed on isolate TBB3 gave a productivity of 3.9067 U/ml/day, whereas the laccase activity of isolate EL2 was much lower (0.2097 U/ml/day). As isolate TBB3 showed higher laccase production, it was subjected to molecular characterization by DNA isolation, PCR amplification and sequencing of the ITS region of nuclear ribosomal DNA. After comparison with other sequences in the National Center for Biotechnology Information (NCBI) database, isolate TBB3 is probably of the species Trametes hirsuta. Further work can be performed on this isolate by scaling up laccase production in order to meet the higher enzyme titers required for the bioremediation of textile dyes.

Keywords: Bioremediation, dyes, fermentation, laccase.

387 A Program Based on Artistic and Musical Activities to Acquire Educational Concepts for Children with Learning Difficulties

Authors: Ahmed Amin Mousa, Huda Mazeed, Eman Saad

Abstract:

The study aims to identify the effectiveness of an artistic formation program using some types of pastes to reduce hyperactivity in kindergarten children with learning difficulties. The research sample included 120 children aged 5 to 6 years from five special needs schools (learning disability section) in Cairo Governorate. The study used a quasi-experimental method with a one-group pre- and post-application design to validate both the hypothesis and the effectiveness of the program. The variables of the study were specified as follows: the artistic formation program using Paper Mache as the independent variable, and its effect on the skills of kindergarten children with learning disabilities as the dependent variable. The researchers applied an artistic formation program consisting of artistic and musical skills for kindergarten children with learning disabilities. The tools of the study, designed by the researchers, included an observation card used to record the paper pulp molding skills of kindergarten children with learning difficulties while practicing the artistic formation activity, as well as a program utilizing artistic and musical activities for kindergarten children with learning disabilities to acquire educational concepts. The program comprised 20 lessons of fine art activities and 20 lessons of musical activities, with the musical lesson and the art lesson given together in one session so as to instill educational concepts in the kindergarten child.

Keywords: Musical activities, developing skills, early childhood, educational concepts, learning difficulties.

386 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

The task of object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces large errors or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows the distances between objects to be estimated from received signal levels, and a distance graph to be constructed in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information to narrow the node location search. However, despite an abundance of well-known algorithms for solving the localization problem and significant research efforts, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the graph of distances onto digital road map data. In effect, the problem is reduced to embedding the distance graph into the graph representing the geolocation data of the area. This makes it possible to localize objects, in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines and geometry functions.
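
To illustrate the kind of spatial query the sample implementation relies on (a sketch using the shapely library with made-up coordinates, not the authors' database schema), snapping an estimated node position to the nearest road segment:

```python
from shapely.geometry import Point, LineString

# Hypothetical road segments from a digital map (planar coordinates).
roads = [
    LineString([(0, 0), (100, 0)]),
    LineString([(50, -40), (50, 60)]),
]

# Node position estimated from the embedded distance graph (illustrative value).
estimate = Point(42.0, 7.5)

# Nearest-road query: the same operation a spatial database performs with an
# R-tree index and distance / closest-point geometry functions.
nearest = min(roads, key=lambda road: road.distance(estimate))
snapped = nearest.interpolate(nearest.project(estimate))

print("distance to nearest road:", round(nearest.distance(estimate), 2))
print("snapped position:", (round(snapped.x, 2), round(snapped.y, 2)))
```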

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.

385 Metallurgy of Friction Welding of Porous Stainless Steel-Solid Iron Billets

Authors: S. D. El Wakil

Abstract:

The research work reported here was aimed at investigating the feasibility of joining high-porosity stainless steel discs and wrought iron bars by friction welding. The sound friction-welded joints were then subjected to a metallurgical investigation and an analysis of failure resulting from tensile loading. Discs of 50 mm diameter and 10 mm thickness were produced by loose sintering of stainless steel powder at a temperature of 1350 °C in an argon atmosphere for one hour. Minor machining was then carried out to control the dimensions of the discs, and the density of each disc could then be determined. The level of porosity was calculated and was found to be about 40% in all of the discs. Solid wrought iron bars were also machined to facilitate tensile testing of the joints produced by friction welding. Using our previously gained experience, the porous stainless steel discs and the wrought iron bars were successfully friction welded. SEM was employed to examine the fracture surface after a tensile test of the joint in order to determine the type of failure. It revealed that the failure did not occur in the joint, but rather in the porous metal in the area adjacent to the joint. The load carrying capacity was therefore determined by the strength of the porous metal and not by that of the welded joint. Macroscopic and microscopic metallographic examinations were also performed and showed that the welded joint involved a dense heat-affected zone where the porous metal underwent densification at elevated temperature, explaining and supporting the findings of the SEM study.
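
The roughly 40% porosity quoted above follows directly from the measured and theoretical densities; a one-line check with illustrative numbers (7.9 g/cm³ is assumed here as a typical full density for austenitic stainless steel):

```python
# Porosity = 1 - (measured density / full theoretical density).
full_density = 7.9        # g/cm^3, assumed typical value for austenitic stainless steel
measured_density = 4.74   # g/cm^3, illustrative value from disc mass / volume

porosity = 1.0 - measured_density / full_density
print(f"porosity = {porosity:.1%}")   # ~40%
```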

Keywords: Fracture of friction-welded joints, metallurgy of friction welding, solid-porous structures, strength of joint.

384 Information Overload, Information Literacy and Use of Technology by Students

Authors: Elena Krelja Kurelović, Jasminka Tomljanović, Vlatka Davidović

Abstract:

The development of web technologies and mobile devices makes creating, accessing, using and sharing information, and communicating with each other, simpler every day. However, as the amount of information constantly increases, it is becoming harder to organize and find quality information effectively, despite the availability of web search engines, filtering and indexing tools. Although digital technologies have an overall positive impact on students’ lives, frequent use of these technologies and of digital media enriched with dynamic hypertext and hypermedia content, as well as multitasking and distractions caused by notifications, calls or messages, can decrease the attention span and make thinking, memorizing and learning more difficult, which can lead to stress and mental exhaustion. This is referred to as “information overload”, “information glut” or “information anxiety”. The objective of this study is to determine whether students show signs of information overload and to identify possible predictors. Research was conducted using a questionnaire developed for the purpose of this study. The results show that students frequently use technology (computers, gadgets and digital media) while showing a moderate level of information literacy, and that they have sometimes experienced symptoms of information overload. According to the statistical analysis, higher frequency of technology use and a lower level of information literacy are correlated with greater information overload. The multiple regression analysis confirmed that the combination of these two independent variables has statistically significant predictive capacity for information overload. Therefore, information science teachers should pay attention to improving students’ information literacy and educate them about the risks of excessive technology use.
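
A hedged sketch of the kind of multiple regression reported above, using synthetic scores and assumed variable names rather than the questionnaire data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the questionnaire scores (one row per student).
rng = np.random.default_rng(1)
n = 200
tech_use = rng.normal(3.5, 0.8, n)    # frequency of technology use (1-5 scale)
info_lit = rng.normal(3.0, 0.7, n)    # information literacy level (1-5 scale)
# Overload rises with technology use and falls with information literacy.
overload = 2.0 + 0.6 * tech_use - 0.5 * info_lit + rng.normal(0, 0.5, n)

X = sm.add_constant(pd.DataFrame({"technology_use": tech_use,
                                  "information_literacy": info_lit}))
model = sm.OLS(overload, X).fit()
print(model.params)   # expect a positive and a negative coefficient, respectively
```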

Keywords: Information overload, technology use, digital media, information literacy, students.

383 Factors Related to Working Behavior

Authors: Charawee Butbumrung

Abstract:

This paper aimed to study the factors that relate to the working behavior of employees at Pakkred Municipality, Nonthaburi Province. A questionnaire was utilized as the tool for collecting information. Descriptive statistics included frequency, percentage, mean and standard deviation; the independent-sample t-test, analysis of variance and Pearson correlation were also used. The findings revealed that the majority of the respondents were female, between 25 and 35 years old, married, and held a Bachelor degree. The average monthly salary of respondents was between 8,001 and 12,000 Baht, and they had about 4-7 years of working experience. Regarding the overall working motivation factors, the findings showed that interrelationship, respect, and acceptance were ranked as highly important, whereas motivation, remuneration and welfare, career growth, and working conditions were ranked as moderately important. Overall working behavior was also ranked as high. The hypothesis testing revealed that employees of different genders differed in working behavior and in working as a team, significant at the 0.05 level. Moreover, there were differences among employees with different monthly salaries in working behavior, problem solving and decision making, all significant at the 0.05 level. Employees with different years of working experience were found to differ in working behavior, both individually and as a team, at the 0.01 and 0.05 significance levels. Testing the relationship between motivation and overall working behavior revealed that interrelationship, respect and acceptance from others, career growth, and working conditions related to working behavior at a moderate level, while motivation in performing duties and remuneration and welfare related to working behavior in the same direction at a low level, with statistical significance at the 0.01 level.

Keywords: Employees of Pakkred Municipality, Factors, Nonthaburi Province, Working Behavior.

382 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. This paper examines whether earnings management is reduced after IFRS adoption through the use of “loss avoidance thresholds”, a method that has been verified in earlier studies. This paper concentrates on two European countries: one that represents the continental code law tradition with weak protection of investors (France) and one that represents the Anglo-American common law tradition, which typically implies a strong enforcement system (the United Kingdom).

The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000-2013. The results differ between the two jurisdictions. This study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefit France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of companies’ financial statements across borders. Overall, the results suggest that IFRS moderately contribute to the accounting quality of reported financial statements and bring benefits for stakeholders, though the role played by other economic factors cannot be discounted.
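
The loss-avoidance test counts firms reporting small profits versus small losses around zero scaled earnings; disproportionate mass just above zero suggests earnings management. A minimal sketch on synthetic data (not the study's sample):

```python
import numpy as np

# Synthetic stand-in for net income scaled by lagged total assets (firm-years).
rng = np.random.default_rng(2)
scaled = rng.normal(0.03, 0.08, 6822)
# Mimic loss avoidance: move half of the small losses just above zero.
small_loss = (scaled < 0) & (scaled > -0.01)
flip = small_loss & (rng.random(scaled.size) < 0.5)
scaled[flip] = np.abs(scaled[flip])

width = 0.01   # interval width around the zero-earnings threshold
small_profits = int(((scaled >= 0) & (scaled < width)).sum())
small_losses = int(((scaled < 0) & (scaled >= -width)).sum())

# A ratio well above 1 indicates bunching just above the zero threshold.
print(small_profits, small_losses, round(small_profits / max(small_losses, 1), 2))
```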

Keywords: Accounting Standards, Earnings Management, International Financial Reporting Standards, Loss Avoidance, Reporting Quality.

381 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR is a standard production rule augmented with generality and specificity information, of the form: If <condition> then <decision>, with associated Generality <general information> and Specificity <specific information> clauses. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the subsumption matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
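
Rule quantification here relies on Dempster-Shafer theory, whose core operation is Dempster's rule of combination: the combined mass of a hypothesis is the normalised sum of products of masses whose intersections yield that hypothesis. A minimal sketch over a small frame of discernment with illustrative masses, not the paper's data:

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Illustrative evidence about whether a discovered rule is 'valid' or 'invalid'.
V, I = frozenset({"valid"}), frozenset({"invalid"})
theta = V | I                               # the whole frame of discernment
m1 = {V: 0.6, theta: 0.4}
m2 = {V: 0.5, I: 0.2, theta: 0.3}
print(combine(m1, m2))
```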

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

380 Identification of Spam Keywords Using Hierarchical Category in C2C E-commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) E-commerce has been growing at very high speed in recent years. Since identical or nearly identical products compete with one another through keyword search in C2C E-commerce, some sellers describe their products with spam keywords that are popular but are not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research on solving it. As a solution, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is commonly observed in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is its specification. Since higher layers of the hierarchical category represent more general kinds of products, the reliability degree is determined differently according to the layer. Hence, reliability degrees from different layers of a hierarchical category become keyword features, and they are used together with features derived only from specifications to classify the keywords. A Support Vector Machine is adopted as the base classifier using these features, since it is powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C E-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C E-commerce.
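
A hedged sketch of the classification step (a scikit-learn SVM over a feature matrix that combines a specification-based feature with per-layer reliability degrees; the feature values are made up and the feature construction is simplified):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Each row: features of one (keyword, product) pair, e.g. a specification match
# score plus reliability degrees from several category layers (values made up).
X = np.array([
    [0.90, 0.85, 0.70, 0.60],
    [0.10, 0.05, 0.20, 0.30],
    [0.80, 0.75, 0.65, 0.55],
    [0.15, 0.10, 0.25, 0.20],
    [0.85, 0.80, 0.60, 0.50],
    [0.05, 0.15, 0.10, 0.25],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 0 = legitimate keyword, 1 = spam keyword

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```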

Keywords: Spam Keyword, E-commerce, keyword features, spam filtering.

379 Participation in Co-Curricular Activities of Undergraduate Nursing Students Attending the Leadership Promoting Program Based on Self-Directed Learning Approach

Authors: Porntipa Taksin, Jutamas Wongchan, Amornrat Karamee

Abstract:

From the researchers’ experience in student affairs during 2011-2013, we found that few undergraduate nursing students became student association members participating in co-curricular activities, and that they had limited self-directed learning and leadership skills. We therefore developed a Leadership Promoting Program based on the self-directed learning concept. The program included six activities: Breaking the Ice, Decoding Time, Creative SMO, Know Me-Understand You, Positive Thinking, and Creative Dialogue, and participation was examined in four aspects: decision-making, implementation, benefits, and evaluation. A one-group, pretest-posttest quasi-experimental design was used to examine the effects of the program on participation in co-curricular activities. Thirty-five students participated in the program; all were members of the board of the undergraduate nursing student association of Boromarajonani College of Nursing, Chonburi. All subjects completed a questionnaire about participation in the activities at the beginning and at the end of the program. Data were analyzed using descriptive statistics and the dependent t-test. The results showed that the mean posttest scores of all four aspects were significantly higher than the pretest scores (t=3.30, p<.01). Three aspects had high mean scores: benefits (mean = 3.24, S.D. = 0.83), decision-making (mean = 3.21, S.D. = 0.59), and implementation (mean = 3.06, S.D. = 0.52), whereas the score on evaluation fell in the moderate range (mean = 2.68, S.D. = 1.13). Therefore, the Leadership Promoting Program based on the self-directed learning approach could be a method to improve students’ participation in co-curricular activities and leadership.

Keywords: Participation in co-curricular activities, undergraduate nursing students, leadership promoting program, self-directed learning.

378 Corporate Social Responsibility Reporting, State Ownership, and Corporate Performance in China: Proof from Longitudinal Data of Publicly Traded Enterprises from 2006 to 2020

Authors: Wanda Luen-Wun Siu, Xiaowen Zhang

Abstract:

This paper offers primary systematic evidence on how Corporate Social Responsibility (CSR) reporting relates to enterprise earnings in listed firms in China, given that most existing evidence focuses on cross-sectional data or data covering a short span of time. Using full economic and business panel data on China’s publicly listed enterprises from 2006 to 2020 in the China Stock Market & Accounting Research database, we found initial evidence of significant direct relations between CSR reporting and corporate performance in both state-owned and privately-owned firms over this period, supporting stakeholder theory. The results also revealed that state-owned enterprises performed as well as private enterprises in the current period, but private enterprises performed better than state-owned enterprises in subsequent years. Moreover, the release of social responsibility reports had a more significant impact on the financial performance of state-owned and private enterprises in the current period than in subsequent periods. Specifically, CSR release was not significantly associated with the financial performance of state-owned enterprises at lags of one, two, and three periods, but it did have an impact at these lags among private enterprises. These findings suggest that CSR reporting helps improve the corporate financial performance of state-owned and private enterprises in the current period, but that this effect is more significant among private enterprises in the lagged periods.

Keywords: China’s Listed Firm, CSR reporting, financial performance, panel analysis.

377 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large amounts of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government’s open data, are voluminous, complete and quickly updated. Traditionally, a living area has been delimited by location, population, area and subjective perception. However, these factors cannot appropriately reflect people’s movement paths in daily life. In this study, the concept of “Living Area” is replaced by “Influence Range” to capture the dynamics and variation with the time and purpose of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan City and the purpose of trips, and to discuss how living areas are currently delimited. It creates a dialogue between the concepts of “Central Place Theory” and “Living Area”, presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
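
A minimal sketch of the trip-aggregation step described above, using pandas on synthetic records; the column and gantry names are assumptions, not the actual ETC schema:

```python
import pandas as pd

# Synthetic stand-in for ETC trip records (column names are assumptions).
trips = pd.DataFrame({
    "origin_gantry":      ["Tainan", "Rende", "Kaohsiung", "Tainan", "Chiayi", "Xinshi"],
    "destination_gantry": ["Chiayi", "Kaohsiung", "Tainan", "Kaohsiung", "Tainan", "Chiayi"],
    "entry_time": pd.to_datetime(["2016-03-01 07:10", "2016-03-01 08:05",
                                  "2016-03-01 08:40", "2016-03-01 17:30",
                                  "2016-03-01 18:05", "2016-03-01 18:20"]),
})

# Keep trips that start or end at gantries serving Tainan (hypothetical list).
tainan_gantries = {"Tainan", "Rende", "Xinshi"}
mask = (trips["origin_gantry"].isin(tainan_gantries)
        | trips["destination_gantry"].isin(tainan_gantries))
tainan = trips[mask].copy()

# Trip counts per origin-destination pair and hour of day: the raw material
# for mapping the influence range of Tainan City in GIS.
tainan["hour"] = tainan["entry_time"].dt.hour
od_counts = (tainan.groupby(["origin_gantry", "destination_gantry", "hour"])
                   .size().reset_index(name="trips"))
print(od_counts)
```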

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.

376 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information can quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract complete linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for an RL model, which learns the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and safety of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
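
To make the blockchain component concrete, here is a toy hash-chained ledger of verification records built with the Python standard library; it is an illustration of immutability only, not the proposed framework or its token mechanism:

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Append-only block: any later edit to a record breaks the hash chain."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"genesis": True}, prev_hash="0" * 64)]

# A verification record produced after NLP/RL screening (illustrative fields).
verdict = {"article_id": "a17", "label": "misleading", "confidence": 0.91}
chain.append(make_block(verdict, prev_hash=chain[-1]["hash"]))

def chain_is_valid(chain) -> bool:
    # Recompute each hash and check every link back to the previous block.
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

print("chain valid:", chain_is_valid(chain))
```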

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

375 The Formation of Mutual Understanding in Conversation: An Embodied Approach

Authors: Haruo Okabayashi

Abstract:

Mutual understanding in conversation is very important for human relations. This study investigates the mental function underlying the formation of mutual understanding between two people in conversation using an embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported “I did not understand my partner” show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner’s rhythm, and their cross correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both LLEs tend to resonate, too. In human history, human beings, as weak mammals, may have needed to live with others in order to survive; that is, they have developed resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle that the person has formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to resonate their biological signal waves with others’. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organization features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. This research finds that the formation of mutual understanding is expressed in the rhythm of a biological signal showing a nonlinear relationship.
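
A minimal sketch of the cross-correlation measure mentioned above, applied to two synthetic pulse-like signals sampled at 200 Hz (illustrative signals, not the recorded plethysmograms):

```python
import numpy as np

fs = 200                                   # sampling rate (Hz), as in the study
t = np.arange(0, 60, 1 / fs)               # one minute of signal

# Synthetic pulse waves for two speakers; partner B lags A slightly.
a = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 1.2 * (t - 0.15)) + 0.1 * np.random.randn(t.size)

def norm_xcorr(x, y, max_lag):
    """Normalised cross-correlation at lag 0 and at the best lag within +/- max_lag samples."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = range(-max_lag, max_lag + 1)
    vals = [np.mean(x[max(0, -l):x.size - max(0, l)] * y[max(0, l):y.size - max(0, -l)])
            for l in lags]
    best = int(np.argmax(vals))
    return vals[max_lag], list(lags)[best], vals[best]

r0, best_lag, r_best = norm_xcorr(a, b, max_lag=fs)
print(f"r at lag 0: {r0:.2f}, best lag: {best_lag / fs:.2f} s, r at best lag: {r_best:.2f}")
```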

Keywords: Embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon.

374 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (best separating algorithm) and are classified into either artifact components or neural components. A neural network is used for the classification of the obtained independent components. It requires input features that accurately represent the true character of the input signals, so that the network can classify the signals based on the key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from the EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network (FNN) classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
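
A hedged sketch of the feature-extraction and classification stages: AR coefficients via statsmodels and scikit-learn's MLPClassifier standing in for the feed-forward network; the JADE separation itself is assumed to have been done already, and synthetic components are used instead:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def ar_features(x, order=6):
    """AR model coefficients of one component (the constant term is dropped)."""
    return AutoReg(x, lags=order).fit().params[1:]

def synth(is_artifact):
    """Synthetic stand-in for a separated component: coloured noise, with
    blink-like deflections added when it represents an ocular artifact."""
    n = 1000
    base = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")
    if is_artifact:
        base[200:260] += 3.0
        base[600:660] += 3.0
    return base

X = np.array([ar_features(synth(i % 2 == 1)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])     # 0 = neural, 1 = ocular artifact

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```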

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

373 A Technical Perspective on Roadway Safety in Eastern Province: Data Evaluation and Spatial Analysis

Authors: Muhammad Farhan, Sayed Faruque, Amr Mohammed, Sami Osman, Omar Al-Jabari, Abdul Almojil

Abstract:

Saudi Arabia has in recent years seen a drastic increase in traffic-related crashes. With a population of over 29 million, Saudi Arabia is considered a fast-growing, emerging economy. The rapid population increase and economic growth have resulted in rapid expansion of the transportation infrastructure, which has led to an increase in road crashes. The Saudi Ministry of Interior reported more than 7,000 people killed and 68,000 injured in 2011, ranking Saudi Arabia among the worst worldwide in traffic safety. The traffic safety issues in the country also cause distress to road users and an economic loss exceeding 3.7 billion Euros annually. Keeping this in view, researchers in Saudi Arabia are investigating ways to improve traffic safety conditions in the country. This paper presents a multilevel approach to collecting the traffic safety related data required for traffic safety studies in the region. Two highway corridors connecting the cities of Dammam and Khobar, the 39-kilometre King Fahd Highway and the 42-kilometre Gulf Cooperation Council Highway, were selected as the study area. The traffic data collected included traffic counts, crash data, travel time data, and speed data. The collected data were analysed using a geographic information system to evaluate correlations. Further research is needed to investigate the effectiveness of traffic safety related data when collected in a concerted effort.

Keywords: Crash Data, Data Collection, Traffic Safety.

372 A Vehicle Monitoring System Based on the LoRa Technique

Authors: Chao-Linag Hsieh, Zheng-Wei Ye, Chen-Kang Huang, Yeun-Chung Lee, Chih-Hong Sun, Tzai-Hung Wen, Jehn-Yih Juang, Joe-Air Jiang

Abstract:

Air pollution and climate warming are becoming more and more intense in many areas, especially urban areas. Environmental parameters are critical information for air pollution and weather monitoring. Thus, it is necessary to develop a suitable air pollution and weather monitoring system for urban areas. In this study, a vehicle monitoring system (VMS) based on the IoT technique is developed. Cars are selected as the research tool because they can reach a greater number of streets to collect data. The VMS can monitor different environmental parameters, including ambient temperature and humidity, and air quality parameters, including PM2.5, NO2, CO, and O3. The VMS can also provide other information, including GPS position and vibration information collected while driving on the street. Different sensor modules are used to measure the parameters, and the measured data are collected and transmitted to a cloud server through the LoRa protocol. A user interface shows the sensing data stored on the cloud server. To examine the performance of the system, a researcher drove a 1998 Nissan X-Trail to the area close to the Da’an District office in Taipei to collect monitoring data. The collected data are instantly shown on the user interface, which provides four kinds of information: GPS positions, weather parameters, vehicle information, and air quality information. With the VMS, users can obtain information regarding air quality and weather conditions when they drive their car in an urban area. Also, government agencies can make decisions on traffic planning based on the information provided by the proposed VMS.
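
As a hedged illustration of the sensing-node side (not the actual firmware), the sketch below packs one measurement record into a compact binary payload of the kind typically sent over a LoRa link, using only the Python standard library:

```python
import struct
import time

def encode_record(lat, lon, temp_c, rh_pct, pm25, no2_ppb, co_ppm, o3_ppb):
    """Pack one measurement into a 24-byte little-endian payload:
    uint32 unix time, float lat, float lon, int16 temp*100, then uint16 fields
    for RH*100, PM2.5*10, NO2 (ppb), CO*100 (ppm), and O3 (ppb)."""
    return struct.pack(
        "<IffhHHHHH",
        int(time.time()), float(lat), float(lon),
        int(round(temp_c * 100)), int(round(rh_pct * 100)),
        int(round(pm25 * 10)), int(round(no2_ppb)),
        int(round(co_ppm * 100)), int(round(o3_ppb)),
    )

def decode_record(payload: bytes) -> dict:
    ts, lat, lon, t, rh, pm, no2, co, o3 = struct.unpack("<IffhHHHHH", payload)
    return {"time": ts, "lat": lat, "lon": lon, "temp_c": t / 100,
            "rh_pct": rh / 100, "pm25": pm / 10, "no2_ppb": no2,
            "co_ppm": co / 100, "o3_ppb": o3}

payload = encode_record(25.026, 121.543, 28.4, 71.2, 18.3, 21, 0.42, 33)
print(len(payload), "bytes ->", decode_record(payload))
```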

Keywords: Vehicle, monitoring system, LoRa, smart city.

371 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system based on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. In order to overcome these difficulties, this study aims to evaluate the performance of solar radiation estimation from alternative databases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The results of the analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well in relation to the observed data, with a coefficient of determination around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source in locations without stations or without long series of solar radiation, which is important for the evaluation of solar energy potential.
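
A minimal sketch of the comparison metric, the coefficient of determination between observed and CFSR-derived radiation, on short illustrative series rather than the study's data:

```python
import numpy as np

# Illustrative daily global radiation series (MJ m^-2 day^-1).
observed = np.array([18.2, 21.5, 24.9, 16.3, 12.8, 20.1, 23.4])
reanalysis = np.array([17.5, 22.0, 24.1, 17.0, 13.9, 19.2, 22.8])   # CFSR-style estimate

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
ss_res = np.sum((observed - reanalysis) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```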

Keywords: Climate, reanalysis, renewable energy, solar radiation.

370 Decision-Making Strategies on Smart Dairy Farms: A Review

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh

Abstract:

Farm management and operations will change drastically due to access to real-time data, real-time forecasting and the tracking of physical items, in combination with Internet of Things (IoT) developments that further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve the whole-farm decision-making process does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyze on-farm and off-farm data in real time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected to produce useful output and a better understanding of the whole farming operation and its environmental impact. Evolutionary Computing (EC) can be very effective in finding the optimal combination of sets of objects and, ultimately, in strategy determination. The system of the future should be able to manage the dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming as well as improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide an insight into the state-of-the-art of big data applications and EC in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.
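
As a hedged illustration of how evolutionary computing can select a combination of management actions (a toy genetic algorithm over a made-up score function, not a validated farm model):

```python
import random
random.seed(1)

ACTIONS = ["adjust_ration", "extend_grazing", "dry_off_early",
           "heat_detection_ai", "selective_dry_cow_therapy", "reschedule_milking"]
# Made-up marginal profit and welfare scores per action (illustrative only).
PROFIT  = [1.8, 0.9, 0.4, 2.1, 0.7, 0.5]
WELFARE = [0.5, 1.2, 0.8, 0.6, 1.1, 0.9]
BUDGET = 3            # at most three actions can be adopted at once

def fitness(bits):
    if sum(bits) > BUDGET:
        return -1.0                                  # infeasible combination
    return sum(b * (p + 0.5 * w) for b, p, w in zip(bits, PROFIT, WELFARE))

def evolve(pop_size=30, generations=40, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(ACTIONS))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [name for name, bit in zip(ACTIONS, best) if bit], fitness(best)

print(evolve())
```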

Keywords: Big data, evolutionary computing, cloud, precision technologies.

369 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research on information hiding techniques. Traditionally, encryption is used to ensure communication security; however, important information is no longer protected once it is decoded. Steganography is the art and science of communicating in a way that hides the existence of the communication: important information is first hidden in host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features, combining cryptography based on a finite state sequential machine with an image-based steganography technique, is proposed for communicating information more securely between two locations. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) into different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and is transmitted to the receiver. At the receiving end, the reverse processes are run to recover the original secret message.
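
PMM maps message bits to pixels through a dedicated mapping rule; as a hedged stand-in, the sketch below uses simple least-significant-bit embedding on a numpy array just to show where encrypted message bits end up in a cover segment (it is not the authors' PMM or the normalized-cut segmentation):

```python
import numpy as np

def embed_bits(cover: np.ndarray, bits: str) -> np.ndarray:
    """Write message bits into the least significant bit of successive pixels."""
    flat = cover.flatten().copy()
    if len(bits) > flat.size:
        raise ValueError("cover segment too small for the message")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)   # clear the LSB, then set it to the bit
    return flat.reshape(cover.shape)

def extract_bits(stego: np.ndarray, n: int) -> str:
    return "".join(str(p & 1) for p in stego.flatten()[:n])

# Illustrative 4x4 grayscale cover segment and an already-encrypted bit string.
cover = np.array([[120, 121, 119, 118],
                  [130, 131, 129, 128],
                  [140, 141, 139, 138],
                  [150, 151, 149, 148]], dtype=np.uint8)
secret_bits = "10110010"

stego = embed_bits(cover, secret_bits)
assert extract_bits(stego, len(secret_bits)) == secret_bits
print(np.abs(stego.astype(int) - cover.astype(int)).max())   # per-pixel change is at most 1
```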

Keywords: Cover image, finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), stego image, NCUT.

368 Biodiversity and Climate Change: Consequences for Norway Spruce Mountain Forests in Slovakia

Authors: Jozef Mindas, Jaroslav Skvarenina, Jana Skvareninova

Abstract:

Studies of the effects of climate change on Norway spruce (Picea abies) forests have mainly focused on the diversity of tree species as a result of the ability of species to tolerate temperature and moisture changes, as well as on some effects of changes in the disturbance regime. The tree species diversity changes in spruce forests due to climate change have been analyzed via a gap model. A forest gap model is a dynamic model for calculating the basic characteristics of individual forest trees. Input ecological data for the model calculations have been taken from permanent research plots located in primeval forests in mountainous regions of Slovakia. Regional climate change scenarios for the territory of Slovakia have been used, with values taken from the CGCM3.1 global model and the KNMI and MPI regional models. Model results for the climate change scenarios suggest a shift of the upper forest limit into the region of the present subalpine zone. Norway spruce representation will decrease in favor of beech and valuable broadleaved species (Acer sp., Sorbus sp., Fraxinus sp.). The most significant tree species diversity changes have been identified for the upper tree line and the current belt of dwarf pine (Pinus mugo) occurrence. The results are also discussed in relation to the most important disturbances (wind storms, snow and ice storms) and to phenological changes whose consequences are little known. Special attention is given to changes in biomass production in relation to the diversity of carbon storage in different carbon pools.

Keywords: Biodiversity, climate change, Norway spruce forests, gap model.

367 Social Security Reform and Management: The Case of Three Member Territories of the Organisation of Eastern Caribbean States

Authors: Cleopatra Gittens

Abstract:

It has been recognized that some social security and national insurance systems in the Eastern Caribbean are experiencing ageing populations and economic and other crises that will leave them unable to pay pension benefits within fifteen to twenty years. This has implications for the fiscal and economic positions of the countries themselves; hence, the organizations need to address the issue urgently. The study adds to the body of knowledge on social security systems and social security reforms in Small Island Developing States (SIDS). It also makes recommendations for the types of reforms that social security systems in other SIDS can implement given their special circumstances. Secondary research is used to gather financial and other related information on three social security schemes in the Eastern Caribbean. Actuarial and financial reports and other documents of the social security systems are analysed to obtain financial and statistical data on each of the schemes. The findings show that the three schemes studied are experiencing steady increases in benefit expenditure relative to contributions and increasing pensioner-to-insured ratios, and that they will deplete their reserves between 2038 and 2050. Two of the schemes have increased their retirement age while the other has not embarked on any reforms; one scheme has changed its contribution percentages. Due to their small size, small populations and other unique circumstances, the social security schemes in the identified territories are unlikely to be able to take advantage of all the reform initiatives that the developed world embarked on when faced with similar problems. These schemes will need to make incremental changes that align with the timeframes recommended by the actuarial studies.
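
The depletion dates quoted above come from actuarial projections; a stripped-down sketch of such a projection with made-up contribution, benefit and interest assumptions (not the schemes' actuarial bases):

```python
def depletion_year(reserve, contributions, benefits, start_year=2023,
                   interest=0.04, contrib_growth=0.02, benefit_growth=0.05):
    """Roll the fund forward year by year until the reserve is exhausted."""
    year = start_year
    while reserve > 0 and year < 2100:
        reserve = reserve * (1 + interest) + contributions - benefits
        contributions *= 1 + contrib_growth   # slow growth of the insured base
        benefits *= 1 + benefit_growth        # faster growth as pensioners increase
        year += 1
    return year if reserve <= 0 else None

# Illustrative figures in millions of a local currency unit.
print("projected depletion:", depletion_year(reserve=900, contributions=80, benefits=95))
```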

Keywords: Pension benefits, pension, Small Island Developing States, Social Security Reform.

366 Application of Voltage Stability Indices for Proper Placement of STATCOM under Load Increase Scenario

Authors: A. S. Telang, P. P. Bedekar

Abstract:

In today’s world, electrical energy has become an indispensable component of all aspects of modern human life. Reliability, security and stability are the key aspects of any power system, and failure to meet any of these three results in a great impediment to modern life. Modern power systems are being subjected to heavily stressed conditions, leading to voltage stability problems. If voltage stability problems are not mitigated properly through proper voltage stability assessment methods, cascading events may occur which may lead to voltage collapse or blackouts. Modern FACTS devices such as the STATCOM are one of the measures to overcome blackout problems. As these devices are very costly, they must be installed at suitable locations, usually at the weakest bus. Line voltage stability indices such as FVSI, Lmn and LQP play an important role in identifying a weak bus. This paper presents an evaluation of these line stability indices for obtaining reliable information about how close the power system is to voltage collapse. PSAT is a user-friendly MATLAB toolbox whose continuation power flow (CPF) feature has been extensively used for the placement of the STATCOM and the assessment of stability. The novelty of the present research work lies in the fact that the active and reactive loads have been changed simultaneously at all the load buses under consideration. MATLAB code has been developed for this purpose and tested successfully on various standard IEEE test systems. The results for the standard IEEE 14-bus test system, specifically, are presented in this paper.
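
For reference, the commonly cited forms of the three line indices are FVSI = 4Z²Qr/(Vs²X), Lmn = 4XQr/[Vs sin(θ - δ)]², and LQP = 4(X/Vs²)(XPs²/Vs² + Qr), with values approaching 1 indicating a line close to collapse. A hedged numeric sketch with illustrative per-unit line data, not the IEEE 14-bus results:

```python
import math

def line_indices(R, X, Vs, Ps, Qr, delta):
    """Line voltage stability indices in their commonly cited forms.

    R, X   : line resistance and reactance (p.u.)
    Vs     : sending-end voltage magnitude (p.u.)
    Ps, Qr : sending-end active power and receiving-end reactive power (p.u.)
    delta  : voltage angle difference between the two buses (rad)
    """
    Z = math.hypot(R, X)
    theta = math.atan2(X, R)                       # line impedance angle
    fvsi = 4 * Z**2 * Qr / (Vs**2 * X)
    lmn = 4 * X * Qr / (Vs * math.sin(theta - delta))**2
    lqp = 4 * (X / Vs**2) * (X * Ps**2 / Vs**2 + Qr)
    return fvsi, lmn, lqp

# Illustrative per-unit values for one transmission line under load increase.
fvsi, lmn, lqp = line_indices(R=0.02, X=0.08, Vs=1.0, Ps=0.9, Qr=0.35, delta=0.05)
print(f"FVSI={fvsi:.3f}  Lmn={lmn:.3f}  LQP={lqp:.3f}")
```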

Keywords: Voltage stability analysis, voltage collapse, PSAT, CPF, VSI, FVSI, Lmn, LQP.

365 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert once the fabric is set. To produce a product of the right shape, camera-type weft straighteners have recently been applied to capture and process fabric images quickly; they are more powerful in determining the final textile quality than photo-sensors. Positioned in front of a stenter machine, the weft straightener helps spread the fabric evenly and keep the angle between warp and weft constantly at a right angle by controlling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, from which the control technology can be derived. The structural analysis determines the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera-type weft straightener to plain weave fabric and established its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application of the camera-type weft straightener, namely, whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done with ANSYS software using the finite element method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is carried out to identify the specific characteristics between the rollers and the fabrics, and the control of the skew and bow rollers is applied to decrease the error in the angle between warp and weft. Finally, it is shown that the camera-type straightener can also be used for special fabrics.

Keywords: Camera type weft straightener, structure analysis, control, skew and bow roller.
