Search results for: social information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5118

408 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory handles uncertainty and incomplete information by means of two approximation sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity-, dissimilarity-similarity- and entropy-based initial centroid selection: three clustering algorithms, namely Entropy based Rough K-Means (ERKM), Similarity based Rough K-Means (SRKM) and Dissimilarity-Similarity based Rough K-Means (DSRKM), were developed and executed on the yeast dataset. The rough clustering algorithms are validated by cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are objects very much different from the rest of the objects in the clusters. The Entropy based Rough Outlier Factor (EROF) method is shown to detect outliers effectively for the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 detects outliers in the boundary region, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen in this range. Experimental results show that the EROF method on the clustering algorithm performed very well and is suitable for detecting outliers effectively for all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.
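
To make the rough-set machinery concrete, the following is a minimal Python sketch of one rough k-means assignment/update step in the Lingras-West formulation that the RKM variants above build on; the weights, the ε threshold, and plain Euclidean distance are illustrative assumptions, and the entropy-based centroid initialization of ERKM is not reproduced here.

```python
import numpy as np

def rough_kmeans_step(X, centroids, eps=0.9, w_lower=0.7, w_upper=0.3):
    """One assignment/update step of rough k-means (Lingras-West style).

    A point joins the lower approximation of its nearest cluster unless a
    second centroid is nearly as close (distance ratio >= eps), in which
    case it joins the boundary region of both clusters."""
    k = len(centroids)
    lower = [[] for _ in range(k)]
    boundary = [[] for _ in range(k)]
    for x in X:
        d = np.linalg.norm(centroids - x, axis=1)
        nearest, second = np.argsort(d)[:2]
        if d[nearest] / max(d[second], 1e-12) >= eps:   # ambiguous point
            boundary[nearest].append(x)
            boundary[second].append(x)
        else:
            lower[nearest].append(x)
    new_centroids = np.empty_like(centroids)
    for j in range(k):
        lo, bd = np.array(lower[j]), np.array(boundary[j])
        if len(lo) and len(bd):   # weighted mean of certain + boundary members
            new_centroids[j] = w_lower * lo.mean(axis=0) + w_upper * bd.mean(axis=0)
        elif len(lo):
            new_centroids[j] = lo.mean(axis=0)
        elif len(bd):
            new_centroids[j] = bd.mean(axis=0)
        else:
            new_centroids[j] = centroids[j]
    return new_centroids, lower, boundary
```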

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

PDF Downloads: 1383
407 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician oriented to technology oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out using the standard metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when the classifiers are tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.
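
As an illustration of this kind of comparison, the sketch below evaluates the same six classifier families with scikit-learn; the Wisconsin breast cancer dataset bundled with scikit-learn stands in for the paper's datasets, and the hyperparameters are library defaults rather than the authors' settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(DecisionTreeClassifier()),
    "ANN": MLPClassifier(max_iter=2000),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:12s} P={precision_score(y_te, y_pred):.3f} "
          f"R={recall_score(y_te, y_pred):.3f} "
          f"F1={f1_score(y_te, y_pred):.3f} "
          f"Acc={accuracy_score(y_te, y_pred):.3f}")
```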

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

PDF Downloads: 1520
406 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible ways to obtain image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; thus they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to happen. To nullify the effect of rotation of partial images on the process of vista creation, we propose a rotation invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce some missing regions in the vista. To correct this error, that is, to fill the missing region, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method generates the missing view of the vista using the information present in the vista itself.
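
The paper's algorithm itself is not reproduced here; as a hedged illustration of the two stages it describes, the OpenCV sketch below estimates the rotation of a partial image with ORB feature matching, stitches it onto the base image, and regenerates uncovered pixels by inpainting. The function choices (ORB, RANSAC similarity fit, Telea inpainting) are assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def stitch_rotation_invariant(base, part):
    """Align a rotated partial image to the base image (both BGR uint8),
    blend them, and inpaint whatever region neither image covered."""
    g1 = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(part, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches])
    dst = np.float32([k1[m.trainIdx].pt for m in matches])
    # a similarity transform absorbs rotation, scale and translation
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = base.shape[:2]
    warped = cv2.warpAffine(part, M, (w, h))
    mask = cv2.warpAffine(np.full(part.shape[:2], 255, np.uint8), M, (w, h))
    vista = base.copy()
    vista[mask > 0] = warped[mask > 0]
    # pixels covered by neither image are "missing views"; regenerate them
    # from the information present in the vista itself (inpainting stage)
    missing = (vista.sum(axis=2) == 0).astype(np.uint8) * 255
    return cv2.inpaint(vista, missing, 3, cv2.INPAINT_TELEA)
```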

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.

PDF Downloads: 1702
405 The Problems of Legal Regulation of Intellectual Property Rights in Innovation Activities in Russia (Institutional Approach)

Authors: Zhanna Mingaleva, Irina Mirskikh

Abstract:

Part IV of the Civil Code of the Russian Federation, dedicated to the legal regulation of intellectual property rights, came into force in 2008. It is the first attempt at codification in the intellectual property sphere in Russia, which is why many new norms appeared. The main problem of the Russian Civil Code (Part IV) is that many rules (norms of law) contradict the norms of international intellectual property law (i.e. the protection of inventions, creations, ideas, know-how, trade secrets and innovations). Intellectual property rights protect innovations and creations and reward innovative and creative activity. Intellectual property rights are international in character, and in that respect they fit in rather well with the economic reality of the global economy. Inventors often prefer not to take out a patent for inventions because it is a very difficult procedure, takes a lot of time and is very expensive. That is why they try to protect their inventions as ideas, know-how or confidential information. An idea is the main element of any object of intellectual property (creation, invention, innovation, know-how, etc.). But ideas are not protected by the Civil Code of the Russian Federation. The aim of the paper is to reveal the main problems of legal regulation of intellectual property in Russia and to suggest possible solutions. The authors have raised these essential issues through different activities. Through a panel survey and questionnaires distributed among the participants of intellectual activities, the main problems of implementing innovations and protecting ideas and know-how were identified. The implementation of the research results will help to solve economic and legal problems of innovations, transfer of innovations and intellectual property.

Keywords: Innovation activities, intellectual property rights, know-how, patents, indicators of innovation activities

PDF Downloads: 1493
404 Evaluation of the Effect of Nursing Services Provided in a Correctional Institution on the Physical Health Levels and Health Behaviors of Female Inmates

Authors: Şenay Pehlivan, Gülümser Kublay

Abstract:

Female inmates placed in a Correctional Institution (CI) have more physical health problems than other women and their male counterparts. Thus, they require more health care services in the CI, and nursing services in particular. CI nurses also have the opportunity to teach health-protecting and health-improving behaviors to these women, who are difficult to reach in the community. The aim of this study was to evaluate the effect of nursing services provided in a CI on the physical health levels and health behaviors of female inmates. The study has a quasi-experimental design and was conducted on 30 female inmates in the Female Closed CI in Ankara, Turkey. Before the implementation of nursing interventions, in the initial phase of the study, female inmates were evaluated in terms of physical health problems and health behavior using forms, a physical examination, medical history, health files (files containing medical information related to prisons) and the Omaha System (OS). Findings obtained from these evaluations were grouped, and symptoms and findings were expressed with OS diagnosis codes. Knowledge, behavior and status scores of the inmates in relation to health problems were determined. After the implementation of the nursing interventions, female inmates were again evaluated in terms of physical health problems and health behavior using the OS. The research data were collected using the Female Evaluation Form developed by the researcher and the OS. It was found that knowledge, behavior and status scores of the inmates significantly increased after the implementation of nursing interventions (p < 0.05).

Keywords: Correctional institution, correctional nursing, prison nursing, female inmates, physical health problems, health behaviors.

PDF Downloads: 1438
403 Hierarchies Based on the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes

Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani

Abstract:

In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing, and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have been investigating properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight-way and seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
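
As a toy illustration (not the paper's formal construction), the sketch below models such a cooperating system: several finite automata share one four-dimensional input tape (indexed x, y, z, t), move independently, and may read the internal states of cohabitant automata only when standing on the same cell. The transition function is a placeholder; the eight-way/seven-way head moves correspond to ±1 steps along each axis, with the seven-way variant forbidding moves into the past.

```python
import numpy as np
from collections import defaultdict

MOVES_8WAY = {"east": (1, 0, 0, 0), "west": (-1, 0, 0, 0),
              "south": (0, 1, 0, 0), "north": (0, -1, 0, 0),
              "up": (0, 0, 1, 0), "down": (0, 0, -1, 0),
              "future": (0, 0, 0, 1), "past": (0, 0, 0, -1)}
MOVES_7WAY = {k: v for k, v in MOVES_8WAY.items() if k != "past"}

class Automaton:
    def __init__(self, state, pos):
        self.state, self.pos = state, pos

def step(tape, automata, moves, delta):
    """One synchronous step: automata on the same cell see each other."""
    by_cell = defaultdict(list)
    for a in automata:
        by_cell[a.pos].append(a)
    for a in automata:
        symbol = tape[a.pos]
        cohabitants = frozenset(b.state for b in by_cell[a.pos] if b is not a)
        # delta maps (state, symbol, cohabitant states) to (state, move)
        a.state, move = delta(a.state, symbol, cohabitants)
        a.pos = tuple(p + d for p, d in zip(a.pos, moves[move]))

# tiny demo with a placeholder transition function
demo_delta = lambda state, symbol, others: (state, "future" if others else "east")
tape = np.zeros((4, 4, 4, 4), dtype=int)
pair = [Automaton("q0", (0, 0, 0, 0)), Automaton("q1", (0, 0, 0, 0))]
step(tape, pair, MOVES_7WAY, demo_delta)
```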

Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.

PDF Downloads: 1864
402 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
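
The iFDAQ and its DAQ Debugger are C++/Qt software; purely to illustrate the signal-handling idea, here is a minimal Python sketch that turns system signals into crash reports without attaching a conventional debugger. `faulthandler` covers hard faults such as SIGSEGV that ordinary handlers cannot safely catch; the report file name and the chosen signals are illustrative.

```python
import faulthandler
import signal
import time
import traceback

REPORT = open("daq_crash_report.txt", "a")

# Dump the stacks of all threads on hard faults (SIGSEGV, SIGFPE,
# SIGABRT, SIGBUS) -- these cannot be handled by ordinary Python handlers.
faulthandler.enable(file=REPORT, all_threads=True)

def report_and_continue(signum, frame):
    """For catchable signals: write a timestamped report, keep running."""
    REPORT.write(f"\n--- signal {signal.Signals(signum).name} "
                 f"at {time.strftime('%Y-%m-%d %H:%M:%S')} ---\n")
    traceback.print_stack(frame, file=REPORT)
    REPORT.flush()

for sig in (signal.SIGTERM, signal.SIGUSR1):
    signal.signal(sig, report_and_continue)
```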

Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.

PDF Downloads: 874
401 The Applications of Toyota Production System to Reduce Wastes in Agricultural Products Packing Process: A Study of Onion Packing Plant

Authors: Paisarn Larpsomboonchai

Abstract:

Agro-industry is one of the major industries that have strong impacts on national economic income, growth, stability, and sustainable development. Moreover, this industry also has strong influences on social, cultural and political issues. Furthermore, this industry, producing primary and secondary products, is facing challenges from diverse factors such as demand inconsistency, intense international competition, technological advancements and new competitors. In order to maintain and improve the industry's competitiveness in both domestic and international markets, science and technology are key factors. Besides hard sciences and technologies, modern industrial engineering concepts such as Just in Time (JIT), Total Quality Management (TQM), Quick Response (QR), Supply Chain Management (SCM) and Lean can be very effective in increasing the efficiency and effectiveness of these agricultural products on the world stage. Onion is one of Thailand's major export products and brings in national income, but it, too, faces challenges in many ways. This paper focuses on the onion packing process and its related activities, such as storage and shipment, at a major packing plant and storage facility in Mae Wang District, Chiang Mai, Thailand, applying Toyota Production System (TPS) or Lean concepts to improve process capability throughout the entire packing and distribution process, which will be profitable for the whole onion supply chain. It will also be beneficial to other related agricultural products in Thailand and other ASEAN countries.

Keywords: Lean Concepts, Lean in Agro-industries Activities, Packing Process, Toyota Production System (TPS), Waste Reduction.

PDF Downloads: 2197
400 ORank: An Ontology Based System for Ranking Documents

Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee

Abstract:

The increasing volume of information on the internet creates an increasing need to develop new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves ranking precision without losing speed. Our approach exploits natural language processing techniques for extracting phrases and stemming words. Then an ontology based conceptual method is used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to do the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. The test results are included at the end of the paper.
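
A minimal sketch of the spread activation stage ORank builds on: activation starts at the query concepts and propagates through the ontology along weighted relations with per-hop decay, and the concepts left above a threshold form the expanded query. The toy ontology, weights and threshold are illustrative; the paper's improvement (separately weighted relation types) is only hinted at by the per-edge weights.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_hops=3):
    """graph: {concept: [(neighbour, relation_weight), ...]}
    seeds:  query concepts, each starting with activation 1.0."""
    activation = {c: 1.0 for c in seeds}
    frontier = dict(activation)
    for _ in range(max_hops):
        next_frontier = {}
        for node, act in frontier.items():
            for neigh, w in graph.get(node, []):
                spread = act * w * decay          # attenuate per hop
                if spread < threshold:
                    continue
                if spread > activation.get(neigh, 0.0):
                    activation[neigh] = spread
                    next_frontier[neigh] = spread
        frontier = next_frontier
    return activation  # expanded query = concepts above threshold

ontology = {"car": [("vehicle", 0.9), ("wheel", 0.6)],
            "vehicle": [("transport", 0.8)]}
print(spread_activation(ontology, ["car"]))
```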

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

PDF Downloads: 1864
399 A Remote Sensing Approach for Vulnerability and Environmental Change in Apodi Valley Region, Northeast Brazil

Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro

Abstract:

The objective of this study was to improve our understanding of vulnerability and environmental change: its causes, its intensity, its distribution, and the human-environment effects on the ecosystem in the Apodi Valley Region. This paper identifies, assesses and classifies vulnerability and environmental change in the Apodi valley region using a combined approach of landscape pattern and ecosystem sensitivity. Models were developed using the following five thematic layers: geology, geomorphology, soil, vegetation and land use/cover, by means of a Geographical Information System (GIS) based on hydro-geophysical parameters. In spite of data problems and shortcomings, using ESRI's ArcGIS 9.3 program, the vulnerability score, used to classify, weight and combine 15 separate land cover classes into a single indicator, provides a reliable measure of differences (6 classes) among regions and communities that are exposed to similar ranges of hazards. Indeed, the ongoing and active development of vulnerability concepts and methods has already produced some tools to help overcome common issues, such as acting in a context of high uncertainty, taking into account the dynamics and spatial scale of a social-ecological system, or gathering viewpoints from different sciences to combine human and impact-based approaches. Based on this assessment, the paper proposes concrete perspectives and possibilities to benefit from existing commonalities in the construction and application of assessment tools.
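
A sketch of the weighted-overlay step behind such a vulnerability score: each thematic layer is rasterized to a sensitivity score, the layers are combined with weights, and the result is sliced into six classes. The weights, scores and random rasters here are placeholders, not the study's calibrated values.

```python
import numpy as np

def vulnerability_map(layers, weights, n_classes=6):
    """layers: dict of equally shaped rasters scored e.g. 1 (low) to 5 (high)."""
    total_w = sum(weights.values())
    score = sum(weights[name] * raster for name, raster in layers.items()) / total_w
    # slice the continuous score into n_classes vulnerability classes
    edges = np.linspace(score.min(), score.max(), n_classes + 1)
    return np.digitize(score, edges[1:-1]) + 1   # classes 1..n_classes

rng = np.random.default_rng(0)
layers = {n: rng.integers(1, 6, (100, 100)) for n in
          ("geology", "geomorphology", "soil", "vegetation", "landuse")}
weights = {"geology": 1, "geomorphology": 2, "soil": 2, "vegetation": 1, "landuse": 3}
classes = vulnerability_map(layers, weights)
```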

Keywords: Vulnerability, Land use/cover, Ecosystem, Remote sensing, GIS.

PDF Downloads: 2924
398 The Necessity of Biomass Application for Developing Combined Heat and Power (CHP) with Biogas Fuel: Case Study

Authors: F. Amin Salehi, L. Sharp, M. A. Abdoli, D. E. Cotton, K. Rezapour

Abstract:

The daily increase of organic waste materials resulting from different activities in the country is one of the main factors in environmental pollution. Today, given the low output of traditional methods, the high cost of waste disposal and environmental pollution, modern methods such as anaerobic digestion for the production of biogas have become prevalent. The biogas collected from the anaerobic digestion process is usable as a renewable energy source similar to natural gas, but with a lower methane content and heating value. Today, with the help of filtration technologies and proper preparation, access to biogas with features fully similar to natural gas has become possible. At present, biogas is one of the main sources of electrical and thermal energy and also an appropriate option for use in four-stroke engines, diesel engines, Stirling engines, gas turbines, gas micro turbines and fuel cells to produce electricity. The use of biogas, for reasons related to socio-economic and environmental advantages, has attracted attention for combined heat and power (CHP) energy production around the world. The production of biogas through anaerobic digestion and its application in CHP power plants in Iran can not only supply part of the energy demand in the country, but also supports movement in line with sustainable development. In this article, the necessity of developing CHP plants with biogas fuel in the country is discussed, based on studies performed from economic, environmental and social perspectives. Also, to prove the importance of establishing these kinds of power plants from the economic point of view, the necessary calculations have been done as a case study for a CHP power plant with biogas fuel.

Keywords: Anaerobic Digestion, Biogas, CHP, Organic Wastes

PDF Downloads: 1917
397 Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Authors: Adrian Lo

Abstract:

Since the advent of modern architecture, notions of free plan and transparency have proliferated well into current trends. The movement’s notion of a spatially homogeneous, open and limitless ‘free plan’ contrasts with the spatially heterogeneous ‘series of rooms’ defined by load bearing walls, which in turn triggered new notions of transparency created by vast expanses of glazed walls. Similarly, transparency was also dichotomized as something that was physical or optical, as well as something conceptual, akin to spatial organization. As opposed to merely accepting the duality and possible incompatibility of these dichotomies, this paper asks how space can be both literally and phenomenally transparent, as well as exhibit both homogeneous and heterogeneous qualities. This paper explores this potential destabilization or blurring of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambiguity and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA are discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. Their particular buildings are explored to reveal various innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior, whereby visual homogeneity blurs with spatial heterogeneity. The results show how spatial conceptions such as interpenetration and transparency have the ability to subvert not only inside-outside dialectics, but can also produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux, as well as present alternative forms of social interaction.

Keywords: interpenetration, literal and phenomenal transparency, spatial heterogeneity, visual homogeneity

PDF Downloads: 488
396 An Empirical Study about RFID Acceptance - Focus on the Employees in Korea

Authors: Mi Sook Lee

Abstract:

The number of companies adopting RFID in Korea has increased continuously due to the domestic development of information technology. The adoption of RFID by companies in Korea has enabled them to do business with many global enterprises in a much more efficient and effective way. According to a survey [33, p. 76], many companies in Korea have used RFID for inventory or distribution management. But the use of RFID in Korean companies is in its early stages, and its potential value has not fully been realized yet. At this time, it is very important to investigate the factors that affect RFID acceptance. For this study, many previous studies were referenced and some RFID experts were interviewed. Through a pilot test, four factors affecting RFID acceptance were selected - security trust, employee knowledge, partner influence and service provider trust - and an extended technology acceptance model (e-TAM) incorporating these factors is presented. The proposed model was empirically tested using data collected from employees in companies or public enterprises. In order to analyze the relationships between the exogenous variables and the four variables in TAM, a structural equation model (SEM) was developed, and SPSS 12.0 and AMOS 7.0 were used for the analyses. The results are summarized as follows: 1) security trust perceived by employees positively influences perceived usefulness and perceived ease of use; 2) employees' knowledge of RFID positively influences only perceived ease of use; 3) a partner's influence for RFID acceptance positively influences only perceived usefulness; 4) service provider trust very positively influences perceived usefulness and perceived ease of use; 5) the relationships between the TAM variables are the same as in previous studies.

Keywords: RFID, TAM, Security Trust, Employee Knowledge, Partner Influence, Service Provider Trust.

PDF Downloads: 1781
395 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas

Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi

Abstract:

In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database, and the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
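
A small numpy sketch of the anomaly-flagging step described above, assuming the ASTER-derived land surface temperature and the insolation-modeled temperature are already co-registered rasters; the 1σ/2σ rule follows the abstract, everything else is illustrative.

```python
import numpy as np

def thermal_anomalies(lst, modeled):
    """Flag statistical thermal anomalies from LST and modeled temperature.

    Returns 2 where LST or the residual exceeds mean + 2*sigma,
            1 where it falls between mean + 1*sigma and mean + 2*sigma,
            0 elsewhere."""
    residual = lst - modeled                # heat not explained by the sun
    flags = np.zeros(lst.shape, dtype=np.uint8)
    for field in (lst, residual):
        mu, sigma = field.mean(), field.std()
        level = np.where(field > mu + 2 * sigma, 2,
                         np.where(field > mu + sigma, 1, 0)).astype(np.uint8)
        flags = np.maximum(flags, level)
    return flags
```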

Keywords: Thermal remote sensing, insolation model, land surface temperature, geothermal anomalies.

PDF Downloads: 991
394 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (that represent the bare earth) and non-ground points (that represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter uses a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
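
The filter operates on irregular 2-D point clouds; as a simplified 1-D illustration of the weighted smoothing-spline idea, the sketch below fits a spline and iteratively down-weights points that sit well above the fit (canopy returns) so the spline settles onto the terrain. The weight-update rule and iteration count are assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_ground_filter(x, z, iterations=5, s=None):
    """Approximate the terrain under vegetation with a weighted spline.

    x, z : along-track coordinate and height of ALS returns, sorted by x
    (x strictly increasing). Points well above the current fit get tiny
    weights on the next pass, pulling the spline down to the bare earth."""
    w = np.ones_like(z)
    for _ in range(iterations):
        spline = UnivariateSpline(x, z, w=w, s=s)
        residual = z - spline(x)
        sigma = residual.std()
        # canopy returns sit above the fit: shrink their influence
        w = np.where(residual > 0.5 * sigma, 0.01, 1.0)
    ground = residual < 0.5 * sigma   # surviving points ~ bare earth
    return spline, ground
```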

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.

PDF Downloads: 693
393 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere

Authors: Moustafa Osman Mohammed

Abstract:

This paper explores the formation of HCl aerosol in atmospheric boundary layers and encourages the uptake of environmental modeling systems (EMSs) for the practical evaluation of gaseous emissions ("framework measures") from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions to ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence the model's variable inputs. The paper simplifies the parameters of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects according to HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The resulting environmental obligations ultimately suggest either a recommendation or a decision on what legislation should achieve for landfill gas (LFG) mitigation measures.
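
For the Gaussian branch of the dispersion modeling, a standard ground-reflection Gaussian plume formula can serve as a sketch of how a pollutant concentration such as HCl would be evaluated at a receptor; σy and σz would normally come from stability-class curves, and all numbers below are placeholders.

```python
import numpy as np

def gaussian_plume(Q, u, H, y, z, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3).

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
    y: crosswind distance (m), z: receptor height (m); the ground-reflection
    term mirrors the plume at z = -H."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# e.g. ground-level concentration 100 m crosswind of the plume axis
c = gaussian_plume(Q=5.0, u=3.0, H=10.0, y=100.0, z=0.0,
                   sigma_y=60.0, sigma_z=30.0)
```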

Keywords: Air dispersion model, landfill management, spatial analysis, environmental impact and risk assessment.

PDF Downloads: 1526
392 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy and restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the "costs" cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated within the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared in terms of the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
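
A toy sketch of the ε-constraint idea behind the augmented ε-constraint method that the comparison favors: one objective is minimized while the other is turned into a constraint f2(x) ≤ ε, and sweeping ε traces an approximate Pareto front. The two quadratic objectives and the ε grid are placeholders, not the paper's EPQ model.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 2)**2 + x[1]**2          # e.g. total cost
f2 = lambda x: x[0]**2 + (x[1] - 2)**2          # e.g. shortage measure

pareto = []
for eps in np.linspace(0.5, 8.0, 8):
    # minimize f1 subject to f2(x) <= eps (note the default-arg capture of eps)
    res = minimize(f1, x0=[0.0, 0.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))
```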

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

PDF Downloads: 651
391 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI)'s contribution to language development can be divided into linguistic and non-linguistic perspectives. From the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. In the linguistic perspective, WeCWI draws attention to free reading and enterprises, which are supported by language acquisition theories. Besides, the adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language development are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory that involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communication, which allows interactions among the learners, the instructor, and the digital content. From the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing to cognitive development. Based on inquiry models, learners' critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI develops the instructional tool with supportive features to facilitate the writing process. To bring a positive user experience to the learner, WeCWI aims to create the instructional tool with different interface designs based on two different types of perceptual learning styles.

Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery.

PDF Downloads: 3214
390 Time Series Simulation by Conditional Generative Adversarial Net

Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto

Abstract:

Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion on the rationale behind GAN, and on neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
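
A condensed PyTorch sketch of a CGAN for fixed-length time-series windows, in the spirit described above: both networks receive the conditioning vector, the generator maps noise plus condition to a series, and the discriminator scores (series, condition) pairs. Network sizes, optimizers, and the training loop are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

SERIES_LEN, NOISE_DIM, COND_DIM = 64, 32, 8

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, SERIES_LEN))
    def forward(self, z, cond):                  # condition enters as input
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SERIES_LEN + COND_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))                   # logit: real vs generated
    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real, cond):
    z = torch.randn(real.size(0), NOISE_DIM)
    fake = G(z, cond)
    # discriminator: real windows -> 1, generated windows -> 0
    loss_d = bce(D(real, cond), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach(), cond), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: fool the discriminator
    loss_g = bce(D(fake, cond), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(32, SERIES_LEN), torch.randn(32, COND_DIM))
```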

Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.

PDF Downloads: 1155
389 How Children Synchronize with Their Teacher: Evidence from a Real-World Elementary School Classroom

Authors: Reiko Yamamoto

Abstract:

This paper reports on how synchrony occurs between children and their teacher, and what prevents or facilitates it. The aim of the experiment conducted in this study was to precisely analyze their movements and synchrony and to reveal the process of synchrony in a real-world classroom. Specifically, the experiment was conducted over around 20 minutes during an English as a foreign language (EFL) lesson. The participants were 11 fourth-grade school children and their classroom teacher in a public elementary school in Japan. Previous researchers assert that synchrony causes a state of flow in a class. To check the level of flow, the Short Flow State Scale (SFSS) was adopted. The experimental procedure had four steps: 1) the teacher read aloud the first half of an English storybook to the children, with both the teacher and the children at their own desks; 2) the children were given an SFSS check; 3) the teacher read aloud the remaining half of the storybook, having had the children remove their desks before reading; 4) the children were again given an SFSS check. The movements of all participants were recorded with a video camera. The movement analysis found that the children synchronized better with the teacher in step 3 than in step 1, and that the teacher's movement became free and outstanding without a desk. This implies that the desk acted as a barrier between the children and the teacher; removal of this barrier resulted in the children's reactions becoming synchronized with those of the teacher. The SFSS results proved that the children experienced more flow without a barrier than with one. Apparently, synchrony is what caused flow or social emotions in the classroom. The main conclusion is that synchrony leads to cognitive outcomes such as children's academic performance in EFL learning.

Keywords: Movement synchrony, teacher–child relationships, English as a foreign language, EFL learning.

PDF Downloads: 663
388 Mistranslation in Cross Cultural Communication: A Discourse Analysis on Former President Bush’s Speech in 2001

Authors: Lowai Abed

Abstract:

The differences between languages play a big role in cross-cultural communication. If meanings are not translated accurately, the consequences can be crucial, not only on an interpersonal level, but also on the international and political levels. The use of metaphorical language by politicians can cause great confusion, often leading to statements being misconstrued. In these situations, it is the translators who struggle to put forward the intended meaning with clarity, and this makes translation an important field to study and analyze when it comes to cross-cultural communication. Owing to the growing importance of language and the power of translation in politics, this research analyzes part of President Bush's speech in 2001, in which he used the word "Crusade", which caused his statement to be misconstrued. The research uses a discourse analysis of cross-cultural communication literature, which provides answers supported by historical, linguistic, and communicative perspectives. The first finding indicates that the word 'crusade' carries different meanings and significance in the narratives of the Western world compared to the Middle East. The second is that, linguistically, maintaining cultural meanings through translation is quite difficult and challenging. Third, from the cross-cultural communication perspective, the common and frequent use of literal translation is a sign of poor strategies being followed in translation training. Based on the example of Bush's speech, this paper hopes to highlight the weak practices in translation in cross-cultural communication which are still commonly used across the world. Translation studies have to take such issues seriously and attempt to find a solution. In every language, there are words and phrases that have cultural, historical and social meanings woven into the language. Literal translation is not the solution to this problem, because that strategy is unable to convey these meanings in the target language.

Keywords: Crusade, metaphor, mistranslation, war on terror.

PDF Downloads: 805
387 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. Simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
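
For orientation, here is a sketch of the baseline clustering stage that such methods improve on, for the four-sources/two-channels setting: under the sparsity assumption the observation vectors concentrate along the directions of the mixing-matrix columns, so normalizing and clustering them recovers A up to column order and sign. The improved potential-function weighting proposed in the paper is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_mixing_matrix(X, n_sources=4, power_floor=1e-3):
    """X: observed mixtures, shape (2, N). Returns A_hat, shape (2, n_sources)."""
    # keep samples with enough energy (sparsity assumption) and normalize
    norms = np.linalg.norm(X, axis=0)
    V = X[:, norms > power_floor] / norms[norms > power_floor]
    V[:, V[0] < 0] *= -1           # fold the sign ambiguity onto one half-plane
    km = KMeans(n_clusters=n_sources, n_init=10).fit(V.T)
    centers = km.cluster_centers_.T
    return centers / np.linalg.norm(centers, axis=0)   # unit-norm columns
```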

Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.

PDF Downloads: 650
386 Varieties of Capitalism and Small Business CSR: A Comparative Overview

Authors: S. Looser, W. Wehrmeyer

Abstract:

Given the limited research on Small and Medium-sized Enterprises' (SMEs) contribution to Corporate Social Responsibility (CSR), and even scarcer research on Swiss SMEs, this paper helps to fill these gaps by enabling the identification of supranational SME parameters. The paper investigates the current state of SME practices in Switzerland and across 15 other countries. Combining the degree to which SMEs demonstrate an explicit (or business case) approach or see CSR as an implicit moral activity with an assessment of their attributes for "varieties of capitalism" defines the framework of this comparative analysis. To outline Swiss small business CSR patterns in particular, 40 SME owner-managers were interviewed. A secondary data analysis of studies from different countries laid the groundwork for this comparative overview of small business CSR. The paper identifies Swiss small business CSR as driven by norms, values, and the aspiration to contribute to society, and thus as an implicit part of day-to-day business. Similar to most Central European, Mediterranean, Nordic, and Asian countries, explicit CSR is still very rare in Swiss SMEs. Astonishingly, British and American SMEs also follow this pattern in spite of their strong and distinctly liberal market economies. Though other findings show that nationality matters, this research concludes that SME culture and an informal CSR agenda are strongly formative, superseding even the forces of market economies, national cultural patterns, and language. Hence, classifications of countries by their market system, as found in the comparative capitalism literature, do not match CSR practices in SMEs, as they do not mirror the peculiarities of their business. This raises questions about the universality and generalisability of unmediated, explicit management concepts, especially in the context of small firms.

Keywords: CSR, comparative study, cultures of capitalism, Small and Medium-sized Enterprises.

PDF Downloads: 2242
385 Investigating the Effectiveness of a 3D Printed Composite Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

In composite manufacturing, the fabrication of tooling and tooling maintenance contribute to a large portion of the total cost. However, as the applications of composite materials continue to increase, there is also a growing demand for more tooling. This demand places heavy emphasis on the industry's ability to fabricate high quality tools while maintaining cost effectiveness. One popular technique of tool fabrication currently being developed utilizes the additive manufacturing technology known as 3D printing. The popularity of 3D printing is due to its low material waste, low cost, and quick fabrication time. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite mold. A steel valve cover from an aircraft reciprocating engine was modeled utilizing 3D scanning and computer-aided design (CAD) to create a 3D printed composite mold. The mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The carbon fiber valve covers were evaluated for dimensional accuracy and quality, while the 3D printed composite mold was evaluated for durability and dimensional stability. The data collected from this study provided valuable information for the understanding of 3D printed composite molds, potential improvements for the molds, and considerations for future tooling design.

Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.

PDF Downloads: 878
384 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds

Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid

Abstract:

A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements of the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work a new procedure is implemented at the soil-water interface, using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate and well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
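
As a one-dimensional illustration of the finite-volume layer, the sketch below evaluates the Rusanov (local Lax-Friedrichs) flux for the single-layer shallow water system U = (h, hu); the two-layer coupling and the finite-element elasticity part are beyond this snippet.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def physical_flux(U):
    """Shallow water flux F(U) for U = (h, hu)."""
    h, hu = U
    return np.array([hu, hu**2 / h + 0.5 * G * h**2])

def rusanov_flux(UL, UR):
    """Numerical flux between left/right cell states UL, UR = (h, hu)."""
    hL, huL = UL
    hR, huR = UR
    # fastest wave speed |u| + sqrt(g h) on either side of the interface
    smax = max(abs(huL / hL) + np.sqrt(G * hL),
               abs(huR / hR) + np.sqrt(G * hR))
    return 0.5 * (physical_flux(UL) + physical_flux(UR)) \
         - 0.5 * smax * (np.array(UR) - np.array(UL))

# e.g. dam-break interface: deep water at rest meets shallow water at rest
print(rusanov_flux((2.0, 0.0), (0.5, 0.0)))
```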

Keywords: Dam-break flows, deformable beds, finite element method, finite volume method, linear elasticity, shallow water equations.

PDF Downloads: 882
383 Seismic Vulnerability Assessment of Masonry Buildings in Seismic Prone Regions: The Case of Annaba City, Algeria

Authors: Allaeddine Athmani, Abdelhacine Gouasmia, Tiago Ferreira, Romeu Vicente

Abstract:

Seismic vulnerability assessment of masonry buildings is a fundamental issue even for regions of moderate to low seismic hazard. This fact is all the more important when dealing with old structures such as those located in Annaba city (Algeria), the majority of which date back to the French colonial era from 1830. This category of buildings is at high risk due to their highly degraded state, heterogeneous materials and intrusive modifications to structural and non-structural elements. Furthermore, they usually shelter a dense population, which is exposed to such risk. In order to undertake suitable seismic risk mitigation strategies and a reinforcement process for such structures, it is essential to estimate their seismic resistance capacity at a large scale. In this sense, two seismic vulnerability index methods and damage estimation have been adapted and applied to a pilot-scale building area located in the moderate seismic hazard region of Annaba city: the first one based on the EMS-98 building typologies, and the second one derived from the Italian GNDT approach. To perform this task, the authors took advantage of an existing data survey previously performed for other purposes. The results obtained from the application of the two methods were integrated and compared using a geographic information system (GIS) tool, with the ultimate goal of supporting the city council of Annaba in the implementation of risk mitigation and emergency planning strategies.

Keywords: Annaba city, EMS98 concept, GNDT method, old city center, seismic vulnerability index, unreinforced masonry buildings.

PDF Downloads: 1613
382 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of the Hamming Code generated was the Hamming (16, 11, 4) version, implemented in MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and the Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
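
A Python sketch in the spirit of the Hamming (16, 11, 4) scheme described above (the paper's own implementation is in MATLAB): 11 data bits, 4 Hamming parity bits at positions 1, 2, 4 and 8, plus one overall parity bit, giving single-error correction with double-error detection. The bit layout is one common convention, assumed here for illustration.

```python
from functools import reduce
from operator import xor

PARITY_POS = (1, 2, 4, 8)
DATA_POS = [p for p in range(1, 16) if p not in PARITY_POS]

def encode(data):                     # data: 11 bits (0/1)
    code = [0] * 16                   # index 0 = overall parity, 1..15 Hamming
    for pos, bit in zip(DATA_POS, data):
        code[pos] = bit
    for p in PARITY_POS:              # parity bit p covers positions with bit p set
        code[p] = reduce(xor, (code[i] for i in range(1, 16) if i & p and i != p))
    code[0] = reduce(xor, code[1:])   # overall even parity
    return code

def decode(code):
    syndrome = reduce(xor, (i for i in range(1, 16) if code[i]), 0)
    overall = reduce(xor, code)       # 0 if total parity is balanced
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                # single-bit error: correctable
        code[syndrome] ^= 1           # syndrome 0 means the overall bit itself
        status = "corrected"
    else:                             # parity balanced but syndrome nonzero
        status = "double error detected"
    return [code[p] for p in DATA_POS], status

word = encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1])
word[5] ^= 1                          # inject a single-bit error
data, status = decode(word)           # -> status == "corrected"
```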

Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.

PDF Downloads: 877
381 Educators’ Adherence to Learning Theories and Their Perceptions on the Advantages and Disadvantages of e-Learning

Authors: Samson T. Obafemi, Seraphin D. Eyono Obono

Abstract:

Information and Communication Technologies (ICTs) are pervasive nowadays, including in education, where they are expected to improve the performance of learners. However, the hope placed in ICTs to find viable solutions to the problem of poor academic performance in schools in the developing world has not yet yielded the expected benefits. This problem serves as the motivation for this study, whose aim is to examine the perceptions of educators on the advantages and disadvantages of e-learning. This aim is subdivided into two types of research objectives. The objectives on the identification and design of theories and models are achieved using content analysis and literature review, while the objective on the empirical testing of such theories and models is achieved through a survey of educators from different schools in the Pinetown District of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyse the data collected by the survey questionnaire, using descriptive statistics and Pearson correlations, after assessing the validity and reliability of the data. The main hypothesis driving this study is that there is a relationship between the demographics of educators and their adherence to learning theories on one side, and their perceptions on the advantages and disadvantages of e-learning on the other side, as argued by existing research; but this research views these learning theories from three perspectives: educators' adherence to self-regulated learning, to constructivism, and to progressivism. This hypothesis was fully confirmed by the empirical study, except for the demographic factors, where teachers' level of education was found to be the only demographic factor affecting the perceptions of educators on the advantages and disadvantages of e-learning.

Keywords: Academic performance, e-learning, Learning theories, Teaching and Learning.

PDF Downloads: 2607
380 A Neuroscience-Based Learning Technique: Framework and Application to STEM

Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes

Abstract:

Existing learning techniques such as problem-based learning, project-based learning, or case study learning focus mainly on technical details but give no specific guidelines on the learner's experience and emotional learning aspects such as arousal salience and valence, even though emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and do not use specific neurobiological mechanisms; on the other hand, neurobiology approaches lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning learning. First, the authors explain the reasons why it is hard to learn thoughtfully; then they use the method of neurobiological mapping to track the main limbic system functions, such as the reward circuit, and its relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterwards, the educational framework for practical application and the instructors' guidelines are established. An implementation example in engineering education is given, namely the study of tuned-mass dampers for the attenuation of earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories, complementing existing cognitive learning methods.

Keywords: Emotion, emotion-enhanced memory, learning technique, STEM.

PDF Downloads: 977
379 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures

Authors: Filippo Ranalli, Forest Flager, Martin Fischer

Abstract:

This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize the total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that result in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To accurately assess cost, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied at each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.

Keywords: Cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures.

PDF Downloads: 1370