Search results for: learning structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4512

372 The Estimation of Bird Diversity Loss and Gain as an Impact of Oil Palm Plantation: A Case Study in KJNP Estate, Riau Province

Authors: Yanto Santosa, Catharina Yudea

Abstract:

The rapid growth of the oil palm industry in Indonesia has drawn criticism from various parties claiming that oil palm plantations damage the environment and biodiversity, including birds. Since research on the impacts of oil palm plantations on bird diversity is still limited, this study was developed to gain further understanding. Data on bird diversity were collected in March 2018 in KJNP Estate, Riau Province, using the strip transect method on five land cover types (young, intermediate, and old growth oil palm plantation, high conservation value area, and the crops field baseline). The observations were conducted simultaneously, with three repetitions. The results show that the baseline hosts 19 bird species, whereas the land covers established after oil palm planting host 39 species. The HCV (high conservation value) area shows the highest increase in diversity value. Oil palm plantation has changed the composition of bird species. The highest similarity index, 0.65, is found in the young growth oil palm land cover, while the lowest, 0.43, is found in the HCV area. Overall, the oil palm plantation had a positive impact on bird species diversity, with a total of 23 species gained and 3 lost.

Keywords: Bird diversity, crops field, impact of oil palm plantation, KJNP estate.
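
The abstract reports similarity indices (0.65 and 0.43) without naming the formula; below is a minimal worked example, assuming a Sørensen-type presence/absence coefficient, a common choice for comparing bird species lists. The species sets are invented placeholders, not the study's observations.

```python
# Hedged sketch: the abstract does not name the similarity index used, so this
# assumes a Sorensen coefficient over presence/absence species lists.

def sorensen(a: set, b: set) -> float:
    """Sorensen similarity: 2|A intersect B| / (|A| + |B|)."""
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

baseline = {"sp1", "sp2", "sp3", "sp4"}           # hypothetical crops-field species
young_palm = {"sp2", "sp3", "sp4", "sp5", "sp6"}  # hypothetical young-palm species

print(f"similarity = {sorensen(baseline, young_palm):.2f}")  # 2*3/(4+5) = 0.67
```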

371 Outsourcing the Front End of Innovation

Authors: B. Likar, K. Širok

Abstract:

The paper presents a new method for efficient innovation process management. Even though innovation management methods, tools and knowledge are well established and documented in the literature, most companies still do not manage them efficiently. Especially in SMEs, the front end of innovation - problem identification, idea creation and selection - is often not optimally performed. Our eMIPS methodology represents a sort of "umbrella methodology": a well-defined set of procedures which can be dynamically adapted to the concrete case in a company. In daily practice, various methods (e.g. for problem identification and idea creation) can be applied, depending on the company's needs. The methodology is based on the proactive involvement of the company's employees, supported by appropriate methods and external experts. The phases are performed via a mixture of face-to-face activities (workshops) and online (eLearning) activities taking place in the Moodle eLearning environment and using other e-communication channels. One part of the outcome is an identified set of opportunities and concrete solutions ready for implementation. The other, equally important, result concerns the innovation competences the participating employees acquire with concrete tools and methods for idea management. In addition, the employees gain strong experience in dynamic, efficient and solution-oriented management of the invention process. The eMIPS also represents a way of establishing or improving the innovation culture in an organization. Initial application in a pilot company showed excellent results regarding both participant motivation and the outcomes achieved.

Keywords: Creativity, distance learning, front end, innovation, problem.

370 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats that can be used for experiments directly. At the opposite end of the spectrum, however, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes written in human narrative form, with neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.

Keywords: Information retrieval (IR), Unified Medical Language System (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
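
The exact Q-Map algorithm is not given in the abstract; the sketch below only illustrates the general idea of string matching indexed on curated knowledge sources, with an invented miniature term dictionary (and illustrative concept IDs) standing in for the real sources.

```python
# Minimal sketch of dictionary-indexed concept matching: scan a clinical note
# with sliding n-grams and look each span up in a curated term dictionary.

from typing import Dict, List, Tuple

CONCEPTS: Dict[str, str] = {            # hypothetical curated source -> concept ID
    "myocardial infarction": "C-0001",
    "diabetes mellitus": "C-0002",
    "hypertension": "C-0003",
}

def match_concepts(text: str, max_ngram: int = 3) -> List[Tuple[str, str]]:
    """Return (span, concept ID) pairs found in the text, longest spans first."""
    tokens = text.lower().split()
    hits = []
    for n in range(max_ngram, 0, -1):
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            if span in CONCEPTS:
                hits.append((span, CONCEPTS[span]))
    return hits

print(match_concepts("Patient has diabetes mellitus and hypertension"))
```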

369 Assessing the Sheltering Response in the Middle East: Studying Syrian Camps in Jordan

Authors: Lara A. Alshawawreh, R. Sean Smith, John B. Wood

Abstract:

This study focuses on the sheltering response in the Middle East, specifically by reviewing two Syrian refugee camps in Jordan: Zaatari and Azraq. Zaatari camp involved the rapid deployment of tents and shelters over a very short period of time, whereas Azraq was purpose built and pre-planned over a longer period. At present, both camps collectively host more than 133,000 occupants. Field visits were made to both camps, and the main issues and problems in the sheltering response were highlighted through focus group discussions with camp occupants and inspection of shelter habitats. This provided both subjective and objective research data sources. While every case has its own significance and deployment to meet humanitarian needs, there are some common requirements irrespective of geographical region. The results suggest that there is a gap between the required habitat needs and what has been provided. It is recommended that the global international response and support be improved in relation to habitat form, construction type, layout, function and, critically, cultural aspects. Services, health and hygiene are key elements of shelter habitat provision. The study also identified the amendments to shelters undertaken by the beneficiaries, providing insight into their main requirements. The outcomes of this study could provide an important learning opportunity for developing improved habitat responses for future shelters.

Keywords: Culture, post-disaster, refugees, shelters.

368 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations, which seems to fulfill the demands of the future spectrum and is one of the most important features of Long Term Evolution - Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunication Advanced (IMT-Advanced) requirements (1 Gb/s peak data rate), 3GPP introduced the CA scheme, which can sustain a high data rate over widespread frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signal techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. Performance evaluation in macro-cellular scenarios through a simulation approach is also presented, showing the benefits of applying CA and low-complexity multi-band schedulers for service quality and system capacity enhancement. It is concluded that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (with a PLR threshold of 2%).

Keywords: Component carrier, carrier aggregation, LTE-Advanced, scheduling, spectrum management.
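
As a back-of-the-envelope check on the 1 Gb/s IMT-Advanced target, the sketch below multiplies out the aggregated bandwidth; the per-carrier peak rate is an assumed illustrative figure (roughly an LTE 20 MHz carrier with 4x4 MIMO), not a value from the paper.

```python
# LTE-Advanced aggregates up to five 20 MHz component carriers into 100 MHz;
# the assumed per-carrier peak rate makes the 1 Gb/s target plausible.

CC_BANDWIDTH_MHZ = 20
MAX_CCS = 5
PEAK_RATE_PER_CC_MBPS = 300        # assumed peak per 20 MHz carrier

aggregated_bw = CC_BANDWIDTH_MHZ * MAX_CCS
peak_rate = PEAK_RATE_PER_CC_MBPS * MAX_CCS

print(f"aggregated bandwidth: {aggregated_bw} MHz")           # 100 MHz
print(f"approximate peak rate: {peak_rate / 1000:.1f} Gb/s")  # 1.5 Gb/s
```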

367 User-Perceived Quality Factors for Certification Model of Web-Based System

Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh

Abstract:

One of the most essential issues for software products is maintaining their relevancy to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspect of software products to overcome these problems, and previous software quality assessment models and metrics have been introduced, each with strengths and limitations. In order to enhance assurance and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies, conducted in collaboration with several agencies in Malaysia, the requirement for a user-based software certification approach was identified and demanded. The emergence of social network applications, new development approaches such as agile methods, and the many other varieties of software on the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach that included researchers, users and experts in this area. This new paradigm in software quality assessment is the main focus of our research. This paper discusses the classifications of users in web-based software system assessment and their associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. These developments are beneficial and valuable for overcoming the constraints and improving the application of software certification models in the future.

Keywords: Software certification model, user centric approach, software quality factors, metrics and measurements, web-based system.

366 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: Structural health monitoring, bridge health monitoring, sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis.

365 Development of High Strength Self Curing Concrete Using Super Absorbing Polymer

Authors: K. Bala Subramanian, A. Siva, S. Swaminathan, Arul. M. G. Ajin

Abstract:

Concrete is an essential building material widely used in the construction industry all over the world due to its compressive strength. Curing plays a vital role in the durability and other performance requirements of concrete, and improper curing can easily impair them. In areas with water scarcity, or where structures are not accessible to humans, external curing cannot be performed, so internal curing is the alternative. Internal (self) curing plays a major role in developing the concrete pore structure and microstructure; the concept is to enhance the hydration process and maintain a uniform temperature. A self curing agent (Super Absorbing Polymer, SAP) reduces the evaporation of water from the concrete, thereby increasing its water retention capacity compared to conventional concrete. The research work was carried out to reduce water, the prime material used for concrete in the construction industry. Proper self (internal) curing increases the strength, durability and performance of concrete. In this study, the SAP dosage was varied from 0.2% to 0.4% in different grades of high strength concrete, and replacement of cement by silica fume at 5%, 10% and 15% was studied. It was found that 10% silica fume replacement gives the highest strength and durability.

Keywords: Compressive strength, high strength concrete, rapid chloride permeability, Super Absorbing Polymer.

364 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network structure based methods capable of detecting price manipulation on the Tehran Stock Exchange. The principal goal of the present study is to offer a model for approximating price manipulation on the Tehran Stock Exchange. To do so, a sample of 397 companies listed on the Tehran Stock Exchange was selected, and information on their prices and trading volumes from 2001 to 2009 was collected; by performing a runs test, a skewness test and a duration correlation test, the selected companies were divided into two sets of manipulated and non-manipulated companies. In the next stage, by examining the cumulative return process and trading volumes of the manipulated companies, the starting date of price manipulation was identified. Using information on company size, information clarity, P/E ratio and stock liquidity one year prior to the manipulation, forecasting models based on the logit model, an artificial neural network and multiple discriminant analysis were designed for the stocks of companies listed on the Tehran Stock Exchange. Finally, the forecasting power of the models was evaluated on a test set. The forecasting accuracy on the test set was 92.1% for the logit model, 94.1% for the artificial neural network and 90.2% for the multiple discriminant analysis model; all three models therefore have high power to forecast price manipulation, and there is no considerable difference among their forecasting powers.

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity
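
A hedged sketch of the logit branch of the study: a logistic regression over the four named predictors, trained on synthetic data since the Tehran Stock Exchange dataset is not reproduced here.

```python
# Illustrative only: the feature matrix is random noise standing in for the
# study's four predictors (company size, information clarity, P/E, liquidity).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(397, 4))              # size, clarity, P/E, liquidity
y = rng.integers(0, 2, size=397)           # 1 = manipulated, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.3f}")
```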

363 Prediction of Optimum Cutting Parameters to Obtain Desired Surface in Finish Pass End Milling of Aluminium Alloy with Carbide Tool Using Artificial Neural Network

Authors: Anjan Kumar Kakati, M. Chandrasekaran, Amitava Mandal, Amit Kumar Singh

Abstract:

End milling is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, so the surface roughness of the produced job plays an important role. In general, surface roughness affects the wear resistance, ductility, and tensile and fatigue strength of machined parts and cannot be neglected in design. In the present work, an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (i.e., spindle speed, feed, and depth of cut). The MATLAB ANN toolbox, which works on the feed-forward back-propagation algorithm, is used for modeling. A 3-12-1 network structure, having the minimum average prediction error, was found to be the best architecture for predicting the surface roughness value. The network predicts surface roughness well for unseen data. For a desired surface finish of the component to be produced, many different combinations of cutting parameters are available; the optimum cutting parameters for obtaining the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in MATLAB®.

Keywords: End milling, Surface roughness, Neural networks.
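
A minimal sketch of the 3-12-1 architecture, using scikit-learn's MLPRegressor in place of the MATLAB ANN toolbox; the training data and the roughness surrogate below are synthetic assumptions, since the experimental measurements are not reproduced in the abstract.

```python
# 3 inputs (spindle speed, feed, depth of cut) -> 12 hidden units -> 1 output.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform([500, 0.02, 0.2], [2000, 0.10, 1.0], size=(60, 3))
# toy roughness surrogate: rises with feed, falls with speed (illustrative only)
y = 100 * X[:, 1] - 0.0005 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 60)

net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=1)
net.fit(X, y)
print(net.predict([[1200, 0.05, 0.5]]))    # roughness for unseen parameters
```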

362 Effect of Organic-waste Compost Addition on Leaching of Mineral Nitrogen from Arable Land and Plant Production

Authors: Jakub Elbl, Lukas Plošek, Antonín Kintl, Jaroslav Záhora, Jitka Přichystalová, Jaroslav Hynšt

Abstract:

Application of compost in agriculture is very desirable worldwide. In the Czech Republic, compost is most often used to improve soil structure and increase the content of soil organic matter, but the effects of compost addition on the fate of mineral nitrogen are only scarcely described. This paper deals with the possibility of using a combined application of compost, mineral and organic fertilizers to reduce the leaching of mineral nitrogen from arable land. To demonstrate the effect of compost addition on the leaching of mineral nitrogen, we performed a pot experiment. As a model crop, Lactuca sativa L. was used and cultivated for 35 days in a climate chamber in thoroughly homogenized arable soil. Ten variants of the experiment were prepared: two control variants (pure arable soil and arable soil with added compost), four variants with different doses of mineral and organic fertilizers, and four variants with the same doses of mineral and organic fertilizers plus compost. The highest decrease in mineral nitrogen leaching, about 417% in comparison with the control variant, was observed for the simultaneous application of soluble humic substances and compost to the soil samples. Application of these organic compounds also supported microbial activity and nitrogen immobilization, documented by the highest soil respiration and the highest value of the index of nitrogen availability. The production of plant biomass after this application was not the highest, due to microbial competition for the nutrients in the soil, but it was 24% higher than in the control variant. To support these promising results, the experiment should be repeated under field conditions.

Keywords: Nitrogen, Compost, Salad, Arable land.

361 Unpacking Chilean Preservice Teachers’ Beliefs on Practicum Experiences through Digital Stories

Authors: Claudio Díaz, Mabel Ortiz

Abstract:

An EFL teacher education programme in Chile takes five years to train a future teacher of English. Preservice teachers are prepared to learn English to an advanced level and to teach the language from 5th to 12th grade in the Chilean educational system. In the context of their first EFL Methodology course in year four, preservice teachers have to create a five-minute digital story that starts from a critical incident they have experienced as teachers-to-be during their observations or interventions in schools. A critical incident can be defined as a happening, a specific incident or event, either observed by them or involving them, that sparks their thinking and may make them subsequently think differently about the particular event. When they create their digital stories, preservice teachers put technology, teaching practice and theory together to narrate a story that is complemented by still images, moving images, text, sound effects and music. The story should be told as a personal narrative explaining the critical incident. This presentation focuses on the creation process of 50 Chilean preservice teachers' digital stories, highlighting the critical incidents from which the stories started. It also unpacks preservice teachers' beliefs and reflections on approaching their teaching practices in schools. These beliefs are coded and categorized through content analysis to evidence preservice teachers' most deeply rooted conceptions about English teaching and learning in Chilean schools. The findings seem to indicate that preservice teachers' beliefs are strongly mediated by contextual and affective factors.

Keywords: Beliefs, Digital stories, Preservice teachers, Practicum.

360 Applying Kinect on the Development of a Customized 3D Mannequin

Authors: Shih-Wen Hsiao, Rong-Qi Chen

Abstract:

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds with the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of objective human features are attained in real time through the Kinect depth camera, and mesh morphing is implemented by transforming the locations of control points on the model according to these ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothed, a complete human figure is reconstructed by the ICP algorithm together with image processing methods. The objective human features can then be recognized, analyzed, and measured. Furthermore, the ergonomic measurements can be applied to shape morphing of the 3D mannequin divisions reconstructed from feature curves. Because subdivision generates a standardized and customer-oriented 3D mannequin, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the research structure, a 3D mannequin system was constructed with a Java program, and its practicability was verified through iterative experiments.

Keywords: 3D mannequin, Kinect scanner, iterative closest point (ICP), shape morphing, subdivision.
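
The abstract relies on the ICP algorithm for reconstruction; below is a generic point-to-point ICP iteration (nearest neighbours plus an SVD-based rigid transform), not the authors' exact implementation, which would also need the outlier revision and smoothing steps they describe.

```python
# One ICP iteration: match each source point to its nearest target point,
# then solve the best rigid transform with the Kabsch/SVD method.

import numpy as np
from scipy.spatial import cKDTree

def icp_step(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    _, idx = cKDTree(dst).query(src)
    matched = dst[idx]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t                   # transformed source cloud

# usage: iterate icp_step until the mean residual stops shrinking
```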

359 Effect of Reynolds Number on Flow past a Square Cylinder in Presence of Upstream and Downstream Flat Plate at Small Gap Spacing

Authors: Shams-ul-Islam, Raheela Manzoor, Zhou Chao Ying

Abstract:

A two-dimensional numerical study of flow past a square cylinder in the presence of a flat plate at both upstream and downstream positions is carried out using the single-relaxation-time lattice Boltzmann method for gap spacings of 0.5d and 1d. Reynolds numbers from 80 to 200 are selected. The wake structure mechanism within the gap spacing and the near wake region, and the vortex structures around and behind the main square cylinder in the presence of the flat plate, are studied and compared with the flow pattern around a single square cylinder. The results are presented in the form of vorticity contours, streamlines, power spectra analysis, and time trace analysis of the drag and lift coefficients. Four different types of flow patterns were observed in both configurations, named (i) quasi-steady flow (QSF), (ii) steady flow (SF), (iii) shear layer reattachment (SLR), and (iv) single bluff body (SBB). It is observed that the upstream flat plate plays a vital role in significant drag reduction, while the rate of suppression of vortex shedding is high for the downstream flat plate case at low Reynolds numbers. For the upstream flat plate case, the reductions in the mean drag force and the root mean square value of the drag force are 89.1% and 86.3%, at (Re, g) = (80, 0.5d) and (120, 1d) respectively; for the downstream flat plate case, the corresponding reductions are 11.10% and 97.6%, obtained at (180, 1d) and (180, 0.5d).

Keywords: Detached flat plates, drag and lift coefficients, Reynolds numbers, square cylinder, Strouhal number.
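
A small sketch of the power spectra step: recovering the shedding frequency, and hence the Strouhal number St = f·d/U, from a lift coefficient time trace. The signal here is synthetic rather than lattice Boltzmann output.

```python
# Peak of the lift-coefficient power spectrum gives the shedding frequency.

import numpy as np

dt, d, U = 0.01, 1.0, 1.0                  # time step, cylinder side, inflow speed
t = np.arange(0, 100, dt)
cl = 0.8 * np.sin(2 * np.pi * 0.15 * t)    # toy lift trace, shedding at 0.15 Hz

spectrum = np.abs(np.fft.rfft(cl)) ** 2
freqs = np.fft.rfftfreq(len(cl), dt)
f_shed = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

print(f"Strouhal number: {f_shed * d / U:.3f}")   # ~0.150
```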

358 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus, due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, including accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.
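
The chain-mode consensus itself is bespoke to the paper; for orientation only, this is the plain rank-based consensus it builds on, with synthetic scores.

```python
# Convert each base classifier's scores to ranks, average the ranks across
# classifiers, and threshold at the rank midpoint. Not the authors' method.

import numpy as np
from scipy.stats import rankdata

scores = np.array([                 # rows: 3 base classifiers, cols: 5 samples
    [0.9, 0.2, 0.6, 0.4, 0.8],
    [0.7, 0.1, 0.9, 0.3, 0.6],
    [0.8, 0.3, 0.5, 0.2, 0.9],
])

ranks = np.vstack([rankdata(row) for row in scores])   # 1 = least positive
consensus = ranks.mean(axis=0)
labels = (consensus > scores.shape[1] / 2).astype(int) # above midpoint -> class 1
print(consensus, labels)
```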

357 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management

Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal

Abstract:

Selection of suppliers is a crucial problem in supply chain management, and sustainable supplier selection is one of the biggest challenges for organizations. Environmental protection and social problems have been of concern to society in recent years, yet traditional supplier selection does not consider these factors; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions of sustainable development - economic, environmental, and social - need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain, and minimize environmental impacts. The problem addressed is the selection of suppliers in a sustainable supply chain network by ranking the suppliers against identified sustainable criteria. The aim of this research is twofold: to find out which sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision making tools are used to rank both criteria and suppliers. AHP analysis, a technique for efficient decision making, is used to derive ratings for the identified criteria. TOPSIS is used to score the suppliers and then rank them. TOPSIS is an MCDM problem-solving method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.

Keywords: Sustainable supply chain management, supplier selection, MCDM tools, AHP analysis, TOPSIS method.
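
A minimal TOPSIS sketch for the supplier-ranking step. The 3-supplier decision matrix and the weights are invented placeholders; in the study the weights would come from the AHP analysis, and all three criteria are treated as benefit criteria here.

```python
# TOPSIS: normalise, weight, measure distances to the ideal and negative-ideal
# solutions, and rank suppliers by relative closeness to the ideal.

import numpy as np

X = np.array([[7.0, 9.0, 8.0],   # rows: suppliers; cols: economic, environmental, social
              [8.0, 7.0, 9.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])    # assumed AHP-derived weights

V = w * X / np.linalg.norm(X, axis=0)       # weighted, vector-normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # positive / negative ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # 1 = best possible supplier

print(np.argsort(-closeness))               # suppliers ranked best to worst
```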

356 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices relative to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is compared against passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization; Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dash-pot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example, and two bridge cases are investigated: the bridge deck without the isolation bearing, and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control achieved better structural performance than the passive bridge deck case.

Keywords: Bridge structures, passive control, seismic, semi-active control, viscous damping.
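
For reference, these are the classical textbook forms of the transmissibility ratios for a linear SDOF system under harmonic base excitation; the paper's semi-active closed loop is nonlinear and is instead evaluated numerically from time histories.

```latex
% Linear-SDOF benchmark, frequency ratio r = \omega/\omega_n, damping ratio \zeta.
% For harmonic motion the velocity and acceleration ratios coincide with T_d,
% since differentiation scales numerator and denominator by the same \omega.
T_d(\omega) = \frac{|x|}{|x_g|}
            = \sqrt{\frac{1 + (2\zeta r)^2}{(1 - r^2)^2 + (2\zeta r)^2}},
\qquad
T_v(\omega) = \frac{|\dot{x}|}{|\dot{x}_g|},
\qquad
T_a(\omega) = \frac{|\ddot{x}|}{|\ddot{x}_g|}.
```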

355 Modeling of Surface Roughness for Flow over a Complex Vegetated Surface

Authors: Wichai Pattanapol, Sarah J. Wakes, Michael J. Hilton, Katharine J.M. Dickinson

Abstract:

Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of flow near the surface must also be resolved. In modeling large-scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative is to apply source/sink terms that model plant drag. Such models have been developed and tested intensively, but only for simple surface geometries. This paper aims to compare the use of roughness parameters against additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested in both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, the results obtained using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with the source/sink term is the most appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. In the higher region, there is no significant difference among the results obtained from the simulations.

Keywords: CFD, canopy flow, surface roughness, turbulence models.
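
The source/sink representation of plant drag that the abstract compares against roughness parameters is commonly written as a momentum sink of the following form; this is the standard canopy-flow formulation, and the coefficients are site-specific assumptions rather than values from the paper.

```latex
% Momentum sink added to the i-th momentum equation inside the canopy:
% C_d = drag coefficient, a = leaf area density [m^2 m^{-3}],
% |U| = local wind speed, u_i = i-th velocity component.
S_{u_i} = -\,\rho\, C_d\, a\, |U|\, u_i
```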

354 Cross Signal Identification for PSG Applications

Authors: Carmen Grigoraş, Victor Grigoraş, Daniela Boişteanu

Abstract:

The standard investigational method for diagnosing obstructive sleep apnea syndrome (OSAS) is polysomnography (PSG), which consists of a simultaneous, usually overnight recording of multiple electro-physiological signals related to sleep and wakefulness. This is an expensive and encumbering protocol that is not readily repeated, so there is a need for simpler and more easily implemented screening and detection techniques. Identification of apnea/hypopnea events in the screening recordings is the key factor in the diagnosis of OSAS. The analysis of a single-lead electrocardiographic (ECG) signal alone for OSAS diagnosis, which may be done with portable devices at the patient's home, has been the challenge of recent years. A novel artificial neural network (ANN) based approach for feature extraction and automatic identification of respiratory events in ECG signals is presented in this paper. A nonlinear principal component analysis (NLPCA) method was considered for feature extraction, and a support vector machine for classification/recognition. An alternative representation of the respiratory events by means of a Kohonen-type neural network is discussed. Our prospective study was based on OSAS patients of the Clinical Hospital of Pneumology in Iaşi, Romania, both males and females, as well as on non-OSAS investigated human subjects. Our computed analysis includes a learning phase based on cross-signal PSG annotation.

Keywords: Artificial neural networks, feature extraction, obstructive sleep apnea syndrome, pattern recognition, signal processing.
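
An illustrative pipeline in the spirit of the described approach: kernel PCA standing in for NLPCA as the nonlinear feature extractor, followed by an SVM classifier. The per-epoch ECG-derived features are synthetic placeholders.

```python
# Nonlinear dimensionality reduction, then classification of apnea epochs.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))            # toy per-epoch ECG-derived features
y = rng.integers(0, 2, size=200)          # 1 = apnea/hypopnea epoch, 0 = normal

clf = make_pipeline(KernelPCA(n_components=5, kernel="rbf"), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print(f"holdout accuracy: {clf.score(X[150:], y[150:]):.2f}")
```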

353 A Consumption-Based Hybrid Life Cycle Assessment of Carbon Footprints in California: High Footprints in Small Urban Households

Authors: Jukka Heinonen

Abstract:

Higher density reduces distances, private car dependency and thus reduces greenhouse gas emissions (GHGs). As a result, increased density has been given a central role among urban development targets. However, it is not just travel behavior that changes along with density. Rather, the consumption patterns, or overall lifestyles, change along with changing urban structure, particularly with changing housing types and consumption opportunities. Furthermore, elevated consumption of services, more frequent flying and less intra-household sharing have been shown to potentially outweigh the gains from reduced driving in more dense urban settlements. In this study, the geography of carbon footprints (CFs) in California is analyzed paying close attention to the household size differences and the resulting economies-of-scale advantages and disadvantages. A hybrid life cycle assessment (LCA) framework is employed together with consumer expenditure data to assess the CFs. According to the study, small urban households have the highest CFs in California. Their transport related emissions are significantly lower than those of the residents of less urbanized areas, but higher emissions from other consumption categories, together with the low degree of sharing of goods, overweigh the gains. Two functional units, per capita and per household, are used to analyze the CFs and to demonstrate the importance of household size. The lifestyle impacts visible through the consumption data are also discussed. The study suggests that there are still significant gaps in our understanding of the premises of low-carbon human settlements.

Keywords: Carbon footprint, life cycle assessment, consumption, lifestyle, household size, economies-of-scale.

352 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals

Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota

Abstract:

An analysis of the distribution of nurses' working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses' working time in an emergency department. The study was conducted in the emergency department of a teaching hospital in Lublin, in southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation. Identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses' working time was determined by work sampling observation (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities (29.23%) and treatment-related activities (27.69%). The study has provided information on the complexity of the activities performed and the utilization of nurses' working time. Enhancing the effectiveness of nursing actions requires working out a strategy for improved management of the time nurses spend at work. Increasing the involvement of auxiliary staff and optimizing communication processes within the team may reduce the time devoted to indirect care for the benefit of direct care.

Keywords: Emergency nurses, nursing care, workload, work sampling.
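
The abstract does not report the sampling precision, but the standard work-sampling adequacy check is worth stating: the number of random observations n needed to estimate an activity share p within absolute error e at a confidence level with normal deviate z is

```latex
n = \frac{z^{2}\, p\,(1-p)}{e^{2}}
% e.g. p = 0.31 (the direct-care share), e = 0.03, z = 1.96
% gives n \approx 913 observations (illustrative figures, not the study's).
```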

351 Structural Characteristics of HPDSP Concrete on Beam Column Joints

Authors: Sushil Kumar Swar, Sanjay Kumar Sharma, Hari Krishan Sharma, Sushil Kumar

Abstract:

Structures seriously damaged during earthquakes show the need for and importance of designing reinforced concrete structures with high ductility. Reinforced concrete beam-column joints have an important function in all structures. Under seismic excitation, the beam-column joint region is subjected to horizontal and vertical shear forces whose magnitude is many times higher than in the adjacent beam and column. The strength and ductility of structures depend mainly on proper detailing of the reinforcement in beam-column joints, and old structures were found to be ductility deficient. DSP materials are obtained by using high quantities of superplasticizers and high volumes of micro silica. In the case of High Performance Densified Small Particle Concrete (HPDSPC), since the concrete is dense even at the micro-structure level, tensile strain would be much higher than that of conventional SFRC, SIFCON and SIMCON. This in turn improves the cracking behaviour, ductility and energy absorption capacity of the composites, in addition to durability. The fine fibers used in our mix are 0.3 mm in diameter and 10 mm long, and can easily be placed at high percentages. These fibers readily transfer stresses and act as a composite concrete unit to take up extremely high loads with high compressive strength. HPDSPC placed in beam-column joints helps protect human life by delaying failure.

Keywords: High Performance Densified Small Particle Concrete (HPDSPC), Steel Fiber Reinforced Concrete (SFRC), Slurry Infiltrated Concrete (SIFCON), Slurry Infiltrated Mat Concrete (SIMCON).

350 Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence

Authors: Yunseok Noh, Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park

Abstract:

Scripts are one of the basic text resources for understanding broadcasting contents, and topic modeling is a method for summarizing broadcasting contents from their scripts. Generally, scripts represent contents descriptively through directions and speeches, and provide scene segments that can be seen as semantic units. Therefore, a script can be topic modeled by treating each scene segment as a document. However, because scene segments consist mainly of speeches, relatively few co-occurrences among words are observed in them, which inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method to improve topic quality with additional word co-occurrence information obtained using scene similarities. The main idea is that knowing that two or more texts are topically related is useful for learning high-quality topics, and in turn, more accurate topical representations give more accurate information about whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is high enough, and we consider words to co-occur if they appear together in topically related scene segments. By iteratively inferring topics and determining semantically neighboring scene segments, we draw a topic space that represents broadcasting contents well. In the experiments, we show that the proposed method generates higher-quality topics from Korean drama scripts than the baselines.

Keywords: Broadcasting contents, generalized Pólya urn model, scripts, text similarity, topic model.
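
A sketch of the relatedness test at the core of the method, assuming cosine similarity between scene-topic vectors; the topic proportions and the threshold are invented for illustration.

```python
# Two scenes count as topically related when the cosine similarity of their
# topic vectors clears a threshold; words from related scenes can then be
# counted as co-occurring.

import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

theta_a = np.array([0.7, 0.2, 0.1])        # hypothetical scene-topic proportions
theta_b = np.array([0.6, 0.3, 0.1])
theta_c = np.array([0.1, 0.1, 0.8])

THRESHOLD = 0.9                            # assumed relatedness cutoff
print(cosine(theta_a, theta_b) > THRESHOLD)  # True: treat words as co-occurring
print(cosine(theta_a, theta_c) > THRESHOLD)  # False: unrelated scenes
```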

349 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub-characteristic, and to establish the decision matrix of the decision maker's final scores for each software product against each sub-characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision analysis method preference analysis for reference ideal solution (PARIS), for comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and the ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.

Keywords: ISO/IEC 25010 quality model, multiple criteria decisions making, multiple criteria decision making analysis, MCDMA, PARIS, Software Product Quality Evaluation Model, Software Product Quality Evaluation, Software Evaluation, Software Selection, Software

348 Methodology for Quantifying the Meaning of Information in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems is fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system steady state if the driving force is dissipating; by contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.

Keywords: Semiotic meaning, Shannon information, Lyapunov, living systems.
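
For concreteness, these are the two standard ingredients the abstract combines, in their textbook forms; the specific Lagrangian/action construction is the paper's own and is not reproduced here.

```latex
% Kullback-Leibler divergence of an observed state distribution q from a
% reference steady-state distribution p:
D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x)\,\ln\frac{p(x)}{q(x)}

% Lyapunov direct-method criterion for a candidate function V(x) > 0:
% \dot{V}(x) < 0 along trajectories implies convergence toward steady state
% (stability); \dot{V}(x) > 0 signals instability and system dissolution.
```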

347 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method

Authors: Felix Platzer, Eric Fimbinger

Abstract:

In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason is that in such cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for simulating materials that fracture completely instead of breaking locally. When simulating local failure, it is therefore advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the locations of the highest local loads, due to the failure of the bonds in those areas, with several sub-particle clusters being the result of the fracture, which can in turn also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM), enabling a more realistic depiction of fracture behavior, were evaluated using the example of filter cake. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles applicable to industrial-sized simulations, is presented in this paper.

Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.

346 Information Filtering Using Index Word Selection Based on the Topics

Authors: Takeru Yokoi, Hidekazu Yanagimoto, Sigeru Omatu

Abstract:

We propose an information filtering system using index words selected from a document set based on the topics included in that set. The method narrows the vocabulary down to the particularly characteristic words in the document set, with the topics obtained by Sparse Non-negative Matrix Factorization. In information filtering, a document is often represented by a vector whose elements correspond to the weights of the index words, and the dimension of this vector grows as the number of documents increases; it is therefore possible that words useless as index words for information filtering are included. To address this problem, the dimension needs to be reduced, and our proposal reduces it by selecting index words based on the topics included in the document set. The filtering is carried out based on the centroid of the learning document set, which is regarded as the user's interest; the centroid is represented by a document vector whose elements consist of the weights of the selected index words. Using the English test collection MEDLINE, we confirm the effectiveness of our proposal: the proposed selection improves recommendation accuracy over previous methods when an appropriate number of index words is selected. In addition, we discuss the index words selected by our proposal and find that it is able to select index words covering some minor topics included in the document set.

Keywords: Information filtering, Sparse NMF, index word selection, user profile, chi-squared measure.
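
A rough sketch of the selection idea, with scikit-learn's L1-regularised NMF standing in for the paper's Sparse NMF and a toy term-document matrix; the vocabulary and counts are invented.

```python
# Factorise the document-term matrix and keep the top-weighted words of each
# topic as the index vocabulary.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
vocab = np.array(["gene", "protein", "cell", "therapy", "trial", "dose"])
X = rng.poisson(1.0, size=(20, len(vocab))).astype(float)  # toy doc-term counts

model = NMF(n_components=2, l1_ratio=1.0, alpha_W=0.1, random_state=2)
W = model.fit_transform(X)                  # document-topic weights
H = model.components_                       # topic-word weights

top_k = 2
index_words = {vocab[i] for topic in H for i in np.argsort(topic)[-top_k:]}
print(sorted(index_words))                  # selected index vocabulary
```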

345 Analyzing the Shearing-Layer Concept Applied to Urban Green System

Authors: S. Pushkar, O. Verbitsky

Abstract:

Currently, green rating systems are mainly utilized for correctly sizing mechanical and electrical systems, which have short lifetime expectancies, while passive solar and bio-climatic architecture, which have long lifetime expectancies, are neglected. Urban rating systems consider buildings and services, in addition to neighborhoods and public transportation, as integral parts of the built environment. The main goal of this study was to develop a more consistent point allocation system for urban building standards by using six different lifetime shearing layers: Site, Structure, Skin, Services, Space, and Stuff, each reflecting distinct environmental damages. This shearing-layer concept was applied to internationally well-known rating systems: Leadership in Energy and Environmental Design (LEED) for Neighborhood Development, the BRE Environmental Assessment Method (BREEAM) for Communities, and the Comprehensive Assessment System for Building Environmental Efficiency (CASBEE) for Urban Development. The results showed that LEED for Neighborhood Development and BREEAM for Communities focus on long-lifetime-expectancy building designs, whereas CASBEE for Urban Development gives equal importance to the Building and Service layers. Moreover, although CASBEE was applied with a building-scale assessment ("Urban Area + Buildings"), it focuses on short-lifetime-expectancy system designs, neglecting improvement of the architectural design through bioclimatic and passive solar considerations.

Keywords: Green rating system, passive solar architecture, shearing-layer concept, urban community.

344 Evaluation of Stormwater Quantity and Quality Control through Constructed Mini Wet Pond: A Case Study

Authors: Y. S. Liew, K. A. Puteh Ariffin, M. A. Mohd Nor

Abstract:

One of the Best Management Practices (BMPs) promoted in the Urban Stormwater Management Manual for Malaysia (MSMA), published by the Department of Irrigation and Drainage (DID) in 2001, is the construction of wet ponds in new development projects for water quantity and quality control. This paper therefore presents a case study evaluating a constructed mini wet pond located at Sekolah Rendah Kebangsaan Seksyen 2, Puchong, Selangor, Malaysia, in both the stormwater quantity and quality aspects, particularly its ability to reduce the peak discharge by temporarily storing and gradually releasing stormwater runoff through an outlet structure or other release mechanism. For the water quantity aspect, the evaluation uses InfoWorks Collection System (CS) as the numerical modeling approach. Statistical tests comparing the correlation coefficient (R2), mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) were used to evaluate the model in simulating the peak discharge changes. Results demonstrated a reduction in peak flow of 11% to 15%, and a time to peak flow slower by 5 minutes, through the wet pond. For the water quality aspect, a survey of biological indicators of water quality showed that the pond is within the range of rather clean to clean water, with a score of 5.3. This study indicates that a constructed wet pond with wetland facilities can help manage water quantity and stormwater-generated pollution at source, towards achieving ecologically sustainable development in urban areas.

Keywords: Wet pond, Retention Facilities, Best Management Practices (BMP), Urban Stormwater Management Manual for Malaysia (MSMA).
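
The four goodness-of-fit statistics named in the abstract, computed on a hypothetical observed/simulated flow pair for illustration; R2 is taken here as the squared Pearson correlation, matching the abstract's "correlation coefficient (R2)".

```python
# ME (bias), MAE, RMSE and R2 between observed and simulated peak flows.

import numpy as np

obs = np.array([1.2, 1.8, 2.5, 3.1, 2.2])   # hypothetical observed flows (m3/s)
sim = np.array([1.1, 1.9, 2.3, 3.0, 2.4])   # hypothetical simulated flows

err = sim - obs
me = err.mean()                               # mean error (bias)
mae = np.abs(err).mean()                      # mean absolute error
rmse = np.sqrt((err ** 2).mean())             # root mean square error
r2 = np.corrcoef(obs, sim)[0, 1] ** 2         # correlation coefficient squared

print(f"ME={me:.3f}  MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```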

343 Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application

Authors: Janet B. Badong-Badilla

Abstract:

In Jesuit universities, laypersons, who come from the same or different faith backgrounds or traditions, are considered as collaborators in mission. The Jesuits themselves support the contributions of the lay partners in realizing the mission of the Society of Jesus and recognize the important role that they play in education. This study aims to investigate and generate particular notions and understandings of lived experiences of being a lay partner in Jesuit universities in the Philippines, particularly those involved in higher education. Using the qualitative approach as introduced by grounded theorist Barney Glaser, the lay partners’ concept of being a partner, as lived in higher education, is generated systematically from the data collected in the field primarily through in-depth interviews, field notes and observations. Glaser’s constant comparative method of analysis of data is used going through the phases of open coding, theoretical coding, and selective coding from memoing to theoretical sampling to sorting and then writing. In this study, Glaser’s grounded theory as a methodology will provide a substantial insight into and articulation of the layperson’s actual experience of being a partner of the Jesuits in education. Such articulation provides a phenomenological approach or framework to an understanding of the meaning and core characteristics of Jesuit-Lay partnership in Jesuit educational institution of higher learning in the country. This study is expected to provide a framework or model for lay partnership in academic institutions that have the same practice of having lay partners in mission.

Keywords: Grounded theory, Jesuit mission in higher education, lay partner, lived experience.
