Search results for: pedestrian target selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4950

4560 Criterion-Referenced Test Reliability through Threshold Loss Agreement: Fuzzy Logic Analysis Approach

Authors: Mohammad Ali Alavidoost, Hossein Bozorgian

Abstract:

Criterion-referenced tests (CRTs) are designed to measure student performance against a fixed set of predetermined criteria or learning standards. The reliability of such tests cannot be based on internal consistency reliability. Threshold loss agreement is one way to calculate the reliability of CRTs. However, the classification of masters and non-masters in such agreement is determined by the threshold point. The problem is that even a minute change in the threshold point can drastically alter the master/non-master classification and, with it, the reliability results. Therefore, in this study, a fuzzy logic approach is employed as a remedial procedure for data analysis to obviate the threshold point problem. Forty-one Iranian students, all between 20 and 30 years old, were selected as participants. A quantitative approach was used to address the research questions; a quasi-experimental design was utilized since the selection of the participants was not randomized. Under the fuzzy logic approach, the threshold point is more stable during the analysis, resulting in more constant reliability results and more precise assessment.
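As an illustration (not the authors' exact formulation), a simple fuzzy membership function shows why a soft master/non-master boundary is less sensitive to small threshold shifts than a crisp cut-off; the spread parameter and the scores below are hypothetical:

```python
def crisp_master(score, threshold):
    # Classical threshold-loss classification: a 1-point shift can flip the label.
    return score >= threshold

def fuzzy_master_degree(score, threshold, spread=5.0):
    # Degree of membership in the "master" set: ramps linearly from 0 to 1
    # over [threshold - spread, threshold + spread] instead of jumping at the cut.
    lo, hi = threshold - spread, threshold + spread
    if score <= lo:
        return 0.0
    if score >= hi:
        return 1.0
    return (score - lo) / (hi - lo)

score = 59.5
# A minute threshold change (60 -> 59) flips the crisp decision...
flip = crisp_master(score, 60) != crisp_master(score, 59)
# ...but barely moves the fuzzy membership degree.
drift = abs(fuzzy_master_degree(score, 60) - fuzzy_master_degree(score, 59))
```

With the values above, the crisp label flips while the fuzzy degree moves only from 0.45 to 0.55, which is the stability the abstract refers to.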

Keywords: criterion-referenced tests, threshold loss agreement, threshold point, fuzzy logic approach

Procedia PDF Downloads 341
4559 Personalized Social Resource Recommender Systems on Interest-Based Social Networks

Authors: C. L. Huang, J. J. Sia

Abstract:

Interest-based social networks, also known as social bookmark sharing systems, are useful platforms for people to conveniently read and collect internet resources. These platforms also provide social networking functions, so users can share and explore internet resources through the network. Providing personalized internet resources to users is an important issue on these platforms. This study uses two types of relationships on the social networks, following and follower, and proposes a collaborative recommender system consisting of two main steps. First, the relationship strength between the target user and the target user's followings and followers is calculated to find the top-N most similar neighbors. Second, from these top-N similar neighbors, the articles (internet resources) that may interest the target user are recommended. With this system, users can efficiently obtain recent, related, and diverse internet resources (knowledge) from the interest-based social network. The experimental dataset was collected from Diigo, a well-known bookmark sharing system. The experimental results show that the proposed recommendation model is more accurate than two traditional baseline recommendation models, though slightly less accurate than the cosine model. However, on the metrics of diversity and execution time, the proposed model outperforms the cosine model.
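A minimal sketch of such a two-step collaborative recommender, with Jaccard overlap of collected resources standing in for the paper's relationship-strength measure (the user names and data are hypothetical):

```python
def jaccard(a, b):
    # One plausible relationship-strength measure: overlap of collected resources.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, neighbors, collections, top_n=2, top_k=3):
    # Step 1: score each following/follower by strength of relationship to the target.
    strengths = sorted(
        ((jaccard(collections[target], collections[u]), u) for u in neighbors),
        reverse=True)[:top_n]
    # Step 2: pool the neighbors' resources the target has not seen, weighted by strength.
    scores = {}
    seen = set(collections[target])
    for s, u in strengths:
        for item in collections[u]:
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + s
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_k]

collections = {
    "alice": ["a1", "a2", "a3"],
    "bob":   ["a2", "a3", "b1"],   # strong overlap with alice
    "carol": ["c1"],               # no overlap
}
recs = recommend("alice", ["bob", "carol"], collections)
```

Here "bob" is the stronger neighbor, so his unseen bookmark ranks first in the recommendations.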

Keywords: recommender systems, social networks, tagging, bookmark sharing systems, collaborative recommender systems, knowledge management

Procedia PDF Downloads 147
4558 EasyModel: Web-Based Bioinformatics Software for Protein Modeling Based on Modeller

Authors: Alireza Dantism

Abstract:

Presently, determining the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be facilitated by studying the three-dimensional structure of the protein. In the absence of an experimental structure, comparative modeling often provides a useful three-dimensional model of the protein based on at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (target) mainly from its alignment with one or more proteins of known structure (templates). Comparative modeling consists of five main steps: (1) identification of similarity between the target sequence and at least one known template structure; (2) alignment of the target sequence and the template(s); (3) model building based on the alignment with the selected template(s); (4) prediction of model errors; and (5) optimization of the built model. Many computer programs and web servers automate the comparative modeling process. One of their most important advantages is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without any programming knowledge; some experts, however, prefer to do their modeling manually using programming, because doing so lets them maximize the accuracy of the models. In this study, a web-based tool called EasyModel has been designed to predict the tertiary structure of proteins, using the PHP and Python programming languages. According to the user's inputs, EasyModel receives the unknown sequence of interest (the target), a protein structure file (the template) that shares a percentage of similarity with the target sequence, and related parameters; it then predicts the tertiary structure of the unknown sequence and presents the results in the form of graphs and constructed protein files.
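Step 1 of the pipeline above can be sketched as a naive template search; the ungapped identity measure and the toy sequences are simplifications for illustration (real tools align the sequences before scoring):

```python
def percent_identity(seq_a, seq_b):
    # Naive ungapped identity over the shorter length; real pipelines align first.
    n = min(len(seq_a), len(seq_b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(seq_a, seq_b) if x == y)
    return 100.0 * matches / n

def pick_template(target, templates):
    # Step 1 of comparative modeling: choose the known structure most similar
    # to the target sequence (steps 2-5 would align, build, assess, optimize).
    return max(templates, key=lambda name: percent_identity(target, templates[name]))

target = "MKTAYIAKQR"
templates = {"1abc": "MKTAYIAKQD",   # 90% identical to the target
             "2xyz": "GGGGGGGGGG"}   # unrelated
best = pick_template(target, templates)
```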

Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, modeller

Procedia PDF Downloads 67
4557 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, treatment involves only supportive care and symptom management, while prevention relies on vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority, and simulation studies of drug targets against JEV are therefore important. Towards this purpose, docking experiments with kinase inhibitors were performed against the chosen target, the NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To be more specific and accurate in finding leads, this study used the proprietary software Schrödinger GLIDE. Druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP. The sites were then screened according to compatibility with ATP, and the most ATP-compatible site was selected as the target. Virtual screening was performed with GLIDE on ligands acquired from three databases: KinaseSARfari, KinaseKnowledgebase, and a published inhibitor set. Seventy-three ligands were listed as the best-scoring ones after HTVS, and the 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, revealing three human proteins with RMSD values of less than 2 Å; docking the best-scoring ligands against these three proteins identified the inhibitors likely to interfere with and inhibit human proteins, and those inhibitors were screened out, while the ligands scoring low against the human proteins were retained for further study. Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site; this interaction analysis will help to find a strongly interacting scaffold among the hits. The experiment yielded 21 hits with the best docking scores, which can be investigated further for their drug-like properties. Aside from suitable leads, specific NS3 helicase-inhibitor interactions were identified. Target-modification strategies complementing the docking methodology, which can result in better lead compounds, are in progress; such enhanced leads can then proceed to in vitro testing.

Keywords: antivirals, docking, GLIDE, high-throughput virtual screening, Japanese encephalitis, NS3 helicase

Procedia PDF Downloads 200
4556 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as the loess, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of times it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included for cases where the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
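A minimal sketch of backward elimination driven by AIC, shown here for a linear (rather than nonparametric) fit under a Gaussian-error AIC; the simulated data are for illustration only:

```python
import numpy as np

def aic(X, y):
    # Gaussian AIC for a least-squares fit: n*log(RSS/n) + 2*(number of parameters).
    n = len(y)
    Xc = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    rss = float(np.sum((y - Xc @ beta) ** 2))
    return n * np.log(rss / n) + 2 * Xc.shape[1]

def backward_eliminate(X, y):
    keep = list(range(X.shape[1]))
    current = aic(X[:, keep], y)
    while keep:
        # Try dropping each remaining covariate; commit the drop that improves AIC most.
        trials = [(aic(X[:, [j for j in keep if j != i]], y), i) for i in keep]
        best_aic, worst = min(trials)
        if best_aic >= current:
            break
        keep.remove(worst)
        current = best_aic
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only column 0 matters
selected = backward_eliminate(X, y)
```

The AIC penalty is what lets the procedure discard the two pure-noise covariates while retaining the informative one.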

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion

Procedia PDF Downloads 244
4555 An Analysis of L1 Effects on the Learning of EFL: A Case Study of Undergraduate EFL Learners at Universities in Pakistan

Authors: Nadir Ali Mugheri, Shaukat Ali Lohar

Abstract:

In a multilingual society like Pakistan, code switching is commonly observed in different contexts. Mostly, people use L1 (native languages) and L2 for everyday communication and L3 (i.e., English, Urdu, Sindhi) in formal contexts and for academic writing. Such frequent code switching affects EFL learners' acquisition of the grammar and lexis of the target language, which in the long run results in different types of errors in their writing. The current study investigates and identifies common elements of L1 and L2 (spoken by students of universities in Pakistan) that create hindrances for EFL learners. A case study method was used for this research. Formal writings of 400 EFL learners (participants from various universities of the country) were observed. Among the 400 participants, 200 were female and 200 were male EFL learners with different academic backgrounds. The errors found were categorized into different types according to grammatical items, differences in meaning, sentence structure, and tense markers of L1 or L2 in comparison with those of the target language. The findings showed that EFL learners in Pakistani universities have serious problems in their writing and commit serious errors related to the grammar and meanings of the target language. Analysis of the committed errors affirmed the hypothesis that L1 or L2 does affect EFL learners. The research concludes by suggesting natural pedagogical approaches, such as task-based learning or communicative methods using contextualized material, so as to avoid the impediments of L1 or L2 in acquiring the target language.

Keywords: multilingualism, L2 acquisition, code switching, language acquisition, communicative language teaching

Procedia PDF Downloads 265
4554 The Discussion on the Composition of Feng Shui from the Environmental Planning Viewpoint

Authors: Jhuang Jin-Jhong, Hsieh Wei-Fan

Abstract:

Climate change persistently causes natural disasters. Nowadays, therefore, environmental planning objectives tend toward respecting nature and coexisting with it. As a result, natural environment analysis, e.g., the analysis of topography, soil, hydrology, climate, and vegetation, is highly emphasized. On the other hand, Feng Shui has been a criterion for site selection for residences in the East since ancient times and has had further influence on site selection for castles and even for temples and tombs. Its primary criterion is judging the quality of Long (mountain range), Sha (nearby mountains), Shui (hydrology), Xue (foundation), and Xiang (aspect), which are similar to the environmental variables of mountain range, topography, hydrology, and aspect. For this reason, many researchers have attempted to probe the connection between the criteria of Feng Shui and environmental planning factors. Most research has discussed only the composition and spatial theory of Feng Shui; no research has explained Feng Shui through the environmental field. Consequently, this study reviews the theory of Feng Shui from the environmental planning viewpoint and assembles the essential compositional factors of Feng Shui. From the literature review and a comparison of theoretical meanings, we find that the ideal principles for planning the Feng Shui environment can also be used for environmental planning. Therefore, this article uses 12 ideal environmental features of Feng Shui to contrast the natural aspects of the environment, makes comparisons with previous research, and classifies the environmental factors into climate, topography, hydrology, vegetation, and soil.

Keywords: the composition of Feng Shui, environmental planning, site selection, main components of the Feng Shui environment

Procedia PDF Downloads 486
4553 Investment Decision among Public Sector Retirees: A Behavioural Finance View

Authors: Bisi S. Olawoyin

Abstract:

This study attempts an exploration of behavioural finance, in which the traditional assumptions of expected utility maximization by rational investors in efficient markets are dropped. It reviews prior research and evidence on how psychological biases affect investors' behaviour and stock selection. The study examined the relationship between demographic variables and behavioural finance biases among public sector retirees who invested in the Nigerian Stock Exchange prior to their retirement. Using a questionnaire survey, a total of 214 valid convenience samples were collected to determine how specific demographic and psychological traits affect stock selection between dividend-paying and non-dividend-paying stocks. Descriptive statistics and OLS regression were used to analyse the results. Findings showed that most retirees prefer dividend-paying stocks in the few years preceding their retirement but still hold on to their non-dividend-paying stocks upon retirement. A significant difference also exists between senior and junior retirees in their preference for non-dividend-paying stocks. These findings are consistent with the clientele theories of dividends.

Keywords: behavioural finance, clientele theories, dividend paying stocks, stock selection

Procedia PDF Downloads 115
4552 Improved Acoustic Source Sensing and Localization Based On Robot Locomotion

Authors: V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

This paper presents a methodology for acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic sensing and localization system, a converging target localization scheme based on recursive direction of arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method augments existing, proven localization techniques and improves results incrementally by utilizing robot locomotion; it is capable of converging to a position estimate with greater accuracy using fewer measurements. The results also evidenced the DOA error minimization at each iteration, the improvement in time for reaching the destination, and the efficiency of this target localization method in gradually converging to the real target position. Initially, the system is tested using a Kinect mounted on a turntable with DOA markings, which serve as ground truth; our approach is then validated on a FireBird VI (FBVI) mobile robot, on which the Kinect is used to obtain bearing information.
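The core of a DOA-based position estimate can be sketched as a least-squares intersection of bearing lines, one per robot pose. This is a simplified stand-in for the paper's recursive error-minimization scheme, with synthetic poses and a synthetic source:

```python
import math
import numpy as np

def locate_source(positions, bearings):
    # Each DOA measurement constrains the source to a line through the robot's
    # position; stacking the line equations gives a least-squares intersection.
    A, b = [], []
    for (x0, y0), theta in zip(positions, bearings):
        s, c = math.sin(theta), math.cos(theta)
        A.append([s, -c])
        b.append(s * x0 - c * y0)
    est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return est  # (x, y) estimate of the acoustic source

# Synthetic check: three robot poses observing a source at (3, 4).
source = (3.0, 4.0)
poses = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]
doas = [math.atan2(source[1] - y, source[0] - x) for x, y in poses]
estimate = locate_source(poses, doas)
```

With noisy bearings, moving the robot to gather extra rows of this system is exactly what tightens the estimate over iterations.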

Keywords: acoustic source localization, acoustic sensing, recursive direction of arrival, robot locomotion

Procedia PDF Downloads 463
4551 Customer Segmentation Revisited: The Case of the E-Tailing Industry in Emerging Market

Authors: Sanjeev Prasher, T. Sai Vijay, Chandan Parsad, Abhishek Banerjee, Sahakari Nikhil Krishna, Subham Chatterjee

Abstract:

With the rapid rise of internet retailing, the industry is set for a major implosion. With little differentiation among competitors, companies find it difficult to segment and target the right shoppers. The objective of this study is to segment Indian online shoppers on the basis of two sets of factors: website characteristics and shopping values. Together, these cover the extrinsic and intrinsic factors that affect shoppers as they visit web retailers. Data were collected by questionnaire from 319 Indian online shoppers, and factor analysis was used to confirm the factors influencing the shoppers in their selection of web portals. Thereafter, cluster analysis was applied, and different segments of shoppers were identified. The relationship between income groups and online shoppers' segments was tracked using correspondence analysis. Significant findings include that web entertainment and informativeness together contribute more than fifty percent of the total influence on web shoppers. Contrary to the general perception that shoppers seek utilitarian leverages, the present study highlights the preference for fun, excitement, and entertainment while browsing a website. Four segments, namely information seekers, utility seekers, value seekers, and core shoppers, were identified and profiled. Value seekers emerged as the most dominant segment, with two-fifths of the respondents endorsing hedonic as well as utilitarian shopping values. With overlap among the segments, utilitarian shopping value gained prominence, covering more than fifty-eight percent of the total respondents. Moreover, a strong relationship was established between income levels and the segments of Indian online shoppers: as income levels increase, web shoppers' motives shift from utility seeking to information seeking, core shopping, and finally value seeking. Companies can strategically use this information for target marketing and align their web portals accordingly. This study can further be used to develop models revolving around satisfaction, trust, and customer loyalty.
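A toy sketch of the segmentation step, using a minimal k-means on two hypothetical shopping-value scores (the study itself applied factor analysis first and used standard statistical tooling):

```python
import numpy as np

def kmeans(points, k, iters=50):
    # Minimal Lloyd's algorithm; real studies would validate the choice of k
    # against the data rather than fixing it.
    centers = points[:k].copy()  # deterministic init: first k points
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical shopper scores: (utilitarian value, hedonic value) on a 1-7 scale.
shoppers = np.array([
    [6.5, 2.0], [6.8, 1.8], [6.2, 2.4],   # utility-seeker profile
    [6.4, 6.6], [6.9, 6.2], [6.1, 6.8],   # value-seeker profile: high on both
])
labels, centers = kmeans(shoppers, k=2)
```

The two recovered clusters mirror the paper's contrast between shoppers driven mainly by utility and those who score high on both value dimensions.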

Keywords: online shopping, shopping values, effectiveness of information content, web informativeness, web entertainment, information seekers, utility seekers, value seekers, core shoppers

Procedia PDF Downloads 170
4550 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory

Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi

Abstract:

One of the global combinatorial optimization problems in machine learning is feature selection, which is concerned with removing irrelevant, noisy, and redundant data while preserving the meaning of the original data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, respectively, to identify a good balance between local search and genetic search. To verify the proposed approaches, numerical experiments were carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
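The record-to-record travel acceptance rule at the heart of one of the proposed MAs can be sketched on a toy attribute-reduction problem; the quality function, the fixed deviation (standing in for the paper's fuzzy controller), and the omission of the enclosing genetic search are all simplifications:

```python
import random

def rr_travel_reduce(n_attrs, quality, deviation=0.05, iters=200, seed=1):
    # Record-to-record travel over attribute subsets (bitmasks): accept a
    # neighbor whenever its quality is within `deviation` of the best so far.
    # (The paper's fuzzy control of `deviation` and the genetic layer are omitted.)
    rng = random.Random(seed)
    current = [1] * n_attrs            # start from the full attribute set
    best, best_q = current[:], quality(current)
    for _ in range(iters):
        neighbor = current[:]
        neighbor[rng.randrange(n_attrs)] ^= 1   # flip one attribute in/out
        q = quality(neighbor)
        if q >= best_q - deviation:             # record-to-record acceptance
            current = neighbor
            if q > best_q:
                best, best_q = neighbor[:], q
    return best

# Toy quality: attributes 0 and 1 carry all the "dependency"; extras cost a little.
def quality(mask):
    return 0.5 * mask[0] + 0.5 * mask[1] - 0.01 * sum(mask[2:])

best = rr_travel_reduce(6, quality)
```

The acceptance band lets the search wander among near-best subsets, shedding redundant attributes without ever dropping the two informative ones.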

Keywords: rough set theory, attribute reduction, fuzzy logic, memetic algorithms, record to record algorithm, great deluge algorithm

Procedia PDF Downloads 424
4549 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area while, at the same time, maximizing the network lifetime, which is difficult due to limitations in the sensing angle and battery power of directional sensors. The problem is further complicated by the possibility that targets may have different coverage requirements; in the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing them into several cover sets and then activating these cover sets successively is a promising solution. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets such that each cover set satisfies the coverage requirements of all the targets. Several experiments were conducted to evaluate the performance of the proposed algorithm, and the results demonstrate that it contributes effectively to solving the problem.

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 387
4548 Importance of Location Selection of an Energy Storage System in a Smart Grid

Authors: Vanaja Rao

Abstract:

In recent times, the need to integrate Renewable Energy Sources (RES) into a Smart Grid has been on the rise. As a result, the associated energy storage systems are known to play important roles in sustaining the efficient operation of RES such as wind power and solar power. This paper investigates the importance of location selection for Energy Storage Systems (ESSs) in a Smart Grid. Three ESS location scenarios are studied and analyzed: (1) near the generation/source, (2) in the middle of the grid, and (3) near the demand/consumption. The aim is to assist any Distribution Network Operator (DNO) in deploying ESSs in a power network, which will significantly help reduce planning costs and time and avoid damages incurred as a result of installing them at an incorrect location in a Smart Grid. To do this, the outlined scenarios are modelled and analyzed with the National Grid's datasets of energy generation and consumption in the UK power network. The outcome of this analysis aims to provide a better overview for ESS location selection in a Smart Grid, ensuring power system stability and security along with optimum usage of the ESSs.

Keywords: distribution networks, energy storage system, energy security, location planning, power stability, smart grid

Procedia PDF Downloads 277
4547 Switching Losses in Power Electronic Converter of Switched Reluctance Motor

Authors: Ali Asghar Memon

Abstract:

A cautious and astute selection of the switching devices used in the power electronic converter of a switched reluctance (SR) motor is required. It is a matter of choosing the best switching devices with respect to their switching ability, rather than merely fulfilling the required number of switches. This paper highlights the computational determination of switching losses, comprising switch-on, switch-off, and conduction losses, by using experimental data in a simulation model of an SR machine. The findings of this research are helpful for the proper selection of electronic switches and a suitable converter topology for the switched reluctance motor.
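The loss components named above follow a standard decomposition: per-event switching energies scaled by switching frequency, plus an on-state conduction term. A sketch with hypothetical device values:

```python
def switch_losses(e_on, e_off, f_sw, v_on, i_avg, duty):
    # Switching loss: energy dissipated per turn-on/turn-off event times frequency.
    p_switching = (e_on + e_off) * f_sw
    # Conduction loss: on-state voltage drop times average current times duty cycle.
    p_conduction = v_on * i_avg * duty
    return p_switching, p_conduction

# Hypothetical device: 2 mJ per on/off pair at 10 kHz, 1.5 V drop, 20 A, 40% duty.
p_sw, p_cond = switch_losses(e_on=1.2e-3, e_off=0.8e-3, f_sw=10e3,
                             v_on=1.5, i_avg=20.0, duty=0.4)
```

In practice e_on and e_off themselves depend on current, voltage, and temperature, which is why the paper feeds experimental data into the simulation model rather than constants.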

Keywords: converter, operating modes, switched reluctance motor, switching losses

Procedia PDF Downloads 479
4546 Genome-Wide Mining of Potential Guide RNAs for Streptococcus pyogenes and Neisseria meningitidis CRISPR-Cas Systems for Genome Engineering

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) systems can facilitate targeted genome editing in organisms. A dual or single guide RNA (gRNA) can program the Cas9 nuclease to cut target DNA in particular areas, introducing concise mutations either via error-prone non-homologous end-joining repair or via incorporation of foreign DNA by homologous recombination between donor DNA and the target area. In spite of the high demand for such a promising technology, developing a well-organized procedure for the reliable mining of potential gRNA target sites in large genomic data is still challenging. Hence, we aimed to perform high-throughput detection of target sites, by their specific PAMs, not only for the common Streptococcus pyogenes Cas9 (SpCas9) but also for the Neisseria meningitidis Cas9 (NmCas9) CRISPR-Cas system. Previous research confirmed the successful application of such RNA-guided Cas9 orthologs for effective gene targeting and, subsequently, genome manipulation. However, each Cas9 ortholog needs its particular PAM sequence for DNA cleavage activity, and activity levels depend on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, based on the specific PAM length and sequence, together with a constant target-site length, for the two orthologs of the Cas9 protein, we created a reliable procedure to explore possible gRNA sequences. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: (i) coding strand searching, (ii) anti-coding strand searching, (iii) both-strand searching, and (iv) paired-gRNA searching. Finally, a complete list of all potential gRNAs, along with their locations, strands, and PAM sequence orientations, can be provided for SpCas9 as well as the other potential Cas9 ortholog (NmCas9). The artificial design of potential gRNAs in a genome of interest can accelerate functional genomic studies; consequently, the application of this novel genome editing tool (CRISPR/Cas technology) will be enhanced by its increased versatility and efficiency.
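The PAM-constrained search can be sketched as a regular-expression scan of both strands. The PAM patterns (NGG for SpCas9, NNNNGATT for NmCas9) are standard, while the fixed 20-nt protospacer length and the toy sequence are simplifying assumptions:

```python
import re

PAMS = {"SpCas9": "[ACGT]GG", "NmCas9": "[ACGT]{4}GATT"}  # NGG and NNNNGATT
PROTOSPACER_LEN = 20  # a common choice; NmCas9 designs often use longer spacers

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_guides(genome, cas="SpCas9"):
    # Scan both strands for PAM sites; report the protospacer immediately 5' of
    # each PAM. The lookahead lets overlapping candidate sites all be reported.
    pam = PAMS[cas]
    hits = []
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for m in re.finditer(f"(?=([ACGT]{{{PROTOSPACER_LEN}}})({pam}))", seq):
            hits.append((strand, m.start(1), m.group(1), m.group(2)))
    return hits

dna = "G" * 5 + "A" * 20 + "TGG" + "C" * 5   # one NGG site on the + strand
guides = find_guides(dna)
```

Scanning the forward sequence and its reverse complement with the same pattern covers the coding-strand, anti-coding-strand, and both-strand search modes; paired-gRNA search would additionally match hits on opposite strands within a fixed offset.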

Keywords: CRISPR/Cas9 genome editing, gRNA mining, SpCas9, NmCas9

Procedia PDF Downloads 231
4545 Assessment of Relationships between Agro-Morphological Traits and Cold Tolerance in Faba Bean (vicia faba l.) and Wild Relatives

Authors: Nisa Ertoy Inci, Cengiz Toker

Abstract:

Winter- or autumn-sown faba bean (Vicia faba L.) is one of the most efficient ways to overcome drought, since faba bean is usually grown under rainfed conditions where drought and high-temperature stresses are the main growth constraints. The objectives of this study were the assessment of (i) relationships between cold tolerance and agro-morphological traits, and (ii) the most suitable agro-morphological trait(s) under cold conditions. Three species of the genus Vicia L., comprising 109 genotypes of faba bean (Vicia faba L.), three genotypes of narbon bean (V. narbonensis L.), and two genotypes of V. montbretii Fisch. & C.A. Mey. Davis and Plitmann, were sown in autumn in the highlands of the Mediterranean region of Turkey. All relatives of faba bean were more cold-tolerant than the faba bean genotypes. Three faba bean genotypes, ACV-42, ACV-84 and ACV-88, were selected as sources of cold tolerance under field conditions. Path and correlation coefficients, together with factor and principal component analyses, indicated that biological yield should be evaluated ahead of many other agro-morphological traits in selection for cold tolerance under cold conditions. Seed weight should be considered for selection in early breeding generations because it had the highest heritability.

Keywords: cold tolerance, faba bean, narbon bean, selection

Procedia PDF Downloads 366
4544 Production of Neutrons by High Intensity Picosecond Laser Interacting with Thick Solid Target at XingGuangIII

Authors: Xi Yuan, Xuebin Zhu, Bojun Li

Abstract:

This work describes an experiment to produce high-intensity pulsed neutron beams at the XingGuangIII laser facility. The high-intensity laser is utilized to drive protons and deuterons, which hit a thick solid target to produce neutrons. The pulse duration of the laser used in the experiment is about 0.8 ps, and the laser energy is around 100 J. Protons and deuterons are accelerated from a 10-μm-thick deuterated polyethylene (CD₂) foil and diagnosed by a Thomson parabola ion spectrometer. The energy spectrum of the neutrons generated via the ⁷Li(d,n) and ⁷Li(p,n) reactions when the proton and deuteron beams hit a 5-mm-thick LiF target is measured by a scintillator-based time-of-flight spectrometer. Results from the neutron measurements show that the maximum neutron energy is about 12.5 MeV and the neutron yield is up to 2×10⁹ per pulse. The high-intensity pulsed neutron beams demonstrated in this work can provide a valuable neutron source for material research, fast-neutron-induced fission research, and other applications.

Keywords: picosecond laser driven, fast neutron, time-of-flight spectrometry, XingGuangIII

Procedia PDF Downloads 141
4543 Online Consortium of Independent Colleges and Universities (OCICU): Using Cluster Analysis to Grasp Student and Institutional Value of Consolidated Online Offerings in Higher Education

Authors: Alex Rodriguez, Adam Guerrero

Abstract:

Purpose: This study examines the institutions that make up the Online Consortium of Independent Colleges and Universities (OCICU) to better understand the types of higher education institutions that form its membership. The literature on this topic is extensive in analyzing the current economic environment around higher education, which is largely considered negative for independent, tuition-driven institutions and is forcing colleges and universities to reexamine how the college-attending population defines value and how institutions can best utilize their existing resources (and those of other institutions) to meet that value expectation. The results from this analysis are intended to give OCICU the ability to better target its current customer base, based on its most notable differences, and to show other institutions how best to approach consolidation within higher education. Design/Methodology: This study utilized k-means cluster analysis to explore the possibility that different segments exist within the seventy-one colleges and universities that have comprised OCICU. It analyzed fifty different variables, selected on the basis of the previous literature, collected by the Integrated Postsecondary Education Data System (IPEDS), whose data are self-reported by individual institutions. Findings: OCICU member institutions are partitioned into two clusters, "access institutions" and "conventional institutions," based largely on the student profile they target. Value: The methodology of the study is relatively unique, as few studies within the field of higher education marketing have employed cluster analysis, and this type of analysis has never been conducted on OCICU members specifically, or on any consolidated offering in higher education.
OCICU can use the findings of this study to obtain a better grasp of the specific needs of the two market segments it currently serves and to develop measurable marketing programs, built around how those segments are defined, that communicate the value sought by current and potential OCICU members or similar institutions. Other consolidation efforts within higher education can also employ the same methodology to determine their own market segments.
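As a rough illustration of the clustering step, a minimal k-means routine of the kind used to partition institutions might look like the sketch below (the study clustered seventy-one institutions on fifty standardized IPEDS variables; the toy data and the naive deterministic initialization here are assumptions for illustration only):

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points, until convergence."""
    # naive deterministic init: k points spread across the data order
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # n_points x k matrix of point-to-centroid distances
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # keep the old centroid if a cluster ends up empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

In practice the institutional variables would be standardized first, since k-means is sensitive to the scale of each variable.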

Keywords: consolidation, colleges, enrollment, higher education, marketing, strategy, universities

Procedia PDF Downloads 112
4542 Porul: Option Generation and Selection and Scoring Algorithms for a Tamil Flash Card Game

Authors: Anitha Narasimhan, Aarthy Anandan, Madhan Karky, C. N. Subalalitha

Abstract:

Games can be excellent tools for teaching a language. There are a few e-learning games in Indian languages, such as word scrabble, crossword, and quiz games, developed mainly for educational purposes. This paper proposes a Tamil word game called "Porul", which focuses on education as well as on players' thinking and decision-making skills. Porul is a multiple-choice quiz game in which players attempt to answer questions correctly from options generated using a unique algorithm, the Option Selection algorithm, which explores the semantics of the question in various dimensions, namely synonyms, rhymes and the Universal Networking Language (UNL) semantic category. This kind of semantic exploration of the question not only increases the complexity of the game but also makes it more interesting. The paper also proposes a Scoring Algorithm, which allots a score based on the popularity score of the question word. The proposed game has been tested using 20,000 Tamil words.
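The paper does not give the algorithms in detail, so the following is only an illustrative sketch of the two ideas: distractors drawn from the three semantic pools, and a score that rewards rarer question words. The function names, the pool-cycling rule, and the scoring formula are all assumptions, not the authors' design:

```python
import random

def generate_options(answer, synonyms, rhymes, category_words, k=3, seed=None):
    """Draw k distractors by cycling through three semantic pools
    (synonyms, rhyming words, same UNL semantic category), then shuffle
    the correct answer in among them."""
    rng = random.Random(seed)
    pools = [p for p in (synonyms, rhymes, category_words) if p]
    distractors, attempts = [], 0
    while len(distractors) < k and pools and attempts < 50:
        attempts += 1
        pool = pools[len(distractors) % len(pools)]
        word = rng.choice(pool)
        if word != answer and word not in distractors:
            distractors.append(word)
    options = distractors + [answer]
    rng.shuffle(options)
    return options

def question_score(popularity, base=100):
    """Hypothetical scoring rule: rarer (less popular) question words earn
    more points; `popularity` is assumed normalised to [0, 1]."""
    if not 0.0 <= popularity <= 1.0:
        raise ValueError("popularity must be in [0, 1]")
    return round(base * (1.5 - popularity))
```

A very common word (popularity near 1) would then score about half of what a rare word scores, which matches the intuition that harder flash cards should be worth more.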

Keywords: Porul game, Tamil word game, option selection, flash card, scoring, algorithm

Procedia PDF Downloads 386
4541 A Framework for an Automated Decision Support System for Selecting Safety-Conscious Contractors

Authors: Rawan A. Abdelrazeq, Ahmed M. Khalafallah, Nabil A. Kartam

Abstract:

Selection of competent contractors for construction projects is usually accomplished through competitive bidding or negotiated contracting, in which the contract bid price is the basic criterion for selection. The evaluation of a contractor's safety performance is still not a typical criterion in the selection process, despite the existence of various safety prequalification procedures. There is a critical need for practical and automated systems that enable owners and decision makers to evaluate contractor safety performance, among other important contractor selection criteria. These systems should ultimately favor safety-conscious contractors, selected by virtue of their good past safety records and current safety programs. This paper presents an exploratory sequential mixed-methods approach to develop a framework for an automated decision support system that evaluates contractor safety performance based on a multitude of indicators and metrics that have been identified through a comprehensive review of construction safety research, and a survey distributed to domain experts. The framework is developed in three phases: (1) determining the indicators that depict contractor current and past safety performance; (2) soliciting input from construction safety experts regarding the identified indicators, their metrics, and relative significance; and (3) designing a decision support system using relational database models to integrate the identified indicators and metrics into a system that assesses and rates the safety performance of contractors.
The proposed automated system is expected to hold several advantages, including: (1) reducing the likelihood of selecting contractors with poor safety records; (2) enhancing the odds of completing the project safely; and (3) encouraging contractors to exert more effort to improve their safety performance and practices in order to increase their bid-winning opportunities, which can lead to significant safety improvements in the construction industry. This should prove useful to decision makers and researchers alike, and should help improve the safety record of the construction industry.
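A hedged sketch of how such a system might combine indicator metrics and expert-elicited weights into a single contractor rating is shown below. The indicator names, the [0, 1] normalisation convention, and the weighted-average rule are illustrative assumptions, not details taken from the paper:

```python
def safety_rating(metrics, weights):
    """Combine normalised indicator metrics (each in [0, 1], higher = safer)
    into one contractor score using expert-elicited relative weights."""
    if set(metrics) != set(weights):
        raise ValueError("metrics and weights must cover the same indicators")
    total = sum(weights.values())
    # weighted average, so the rating also lands in [0, 1]
    return sum(metrics[key] * weights[key] for key in metrics) / total
```

In the framework's relational-database design, the metrics would be rows keyed by contractor and indicator, and the weights would come from the expert survey in phase 2.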

Keywords: construction safety, contractor selection, decision support system, relational database

Procedia PDF Downloads 254
4540 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muon atomic orbit around the nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, the two-dimensional (2D) elemental distribution of a sample can be estimated using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the non-destructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

Procedia PDF Downloads 181
4539 Instance Selection for MI-Support Vector Machines

Authors: Amy M. Kwon

Abstract:

Support vector machine (SVM) is a well-known machine learning algorithm due to its superior performance, and it also functions well in multiple-instance (MI) problems. Our study proposes a schematic algorithm that selects instances based on the Hausdorff distance, which can then be adapted to SVMs as input vectors under the MI setting. In experiments on five benchmark datasets, our representation-adaptation strategy outperformed the original approach. In addition, task execution times (TETs) were reduced by more than 80% based on MissSVM. Hence, this representation adaptation is worth considering for SVMs under the MI setting.
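The Hausdorff distance between two bags of instances, on which the selection is based, can be computed with a few lines of NumPy. This is a minimal sketch of the standard symmetric Hausdorff distance, not the authors' code:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two bags of instances,
    given as row-vector arrays A and B."""
    # pairwise Euclidean distances between every row of A and every row of B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # farthest point in either bag from its nearest point in the other bag
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The distance is zero only when the two bags coincide, and it grows with the worst-case mismatch between them, which makes it a natural criterion for picking a representative instance per bag.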

Keywords: support vector machine, margin, Hausdorff distance, representation selection, multiple-instance learning, machine learning

Procedia PDF Downloads 1
4538 The Economic Value of Mastitis Resistance in Dairy Cattle in Kenya

Authors: Caleb B. Sagwa, Tobias O. Okeno, Alexander K. Kahi

Abstract:

Dairy cattle production plays an important role in the Kenyan economy. However, the high incidence of mastitis is a major setback to productivity in this industry. The current dairy cattle breeding objective in Kenya does not include mastitis resistance, mainly because the economic value of mastitis resistance has not been determined. Therefore, this study aimed at estimating the economic value of mastitis resistance in dairy cattle in Kenya. Initial input parameters were obtained from the literature on dairy cattle production systems in the tropics. Selection index methodology was used to derive the economic value of mastitis resistance. Somatic cell count (SCC) was used as an indicator trait for mastitis resistance. The economic value was estimated relative to milk yield (MY). Economic values were assigned to SCC in a selection index such that the overall gain in the breeding goal trait was maximized. The option of estimating the economic value for SCC by equating the response in the trait of interest to its index response was considered. The economic value of mastitis resistance was US $23.64, while the maximum response to selection for MY was US $66.01. The findings of this study provide vital information that is a prerequisite for the inclusion of mastitis resistance in the current dairy cattle breeding goal in Kenya.
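The selection-index machinery behind such a derivation can be sketched with the textbook weight equation b = P⁻¹Ga. Every number below is an illustrative placeholder, not a parameter estimated in the study:

```python
import numpy as np

# Illustrative two-trait index (milk yield MY, somatic cell count SCC).
P = np.array([[1.00, -0.20],
              [-0.20, 1.00]])   # phenotypic (co)variances of the index traits
G = np.array([[0.30, -0.10],
              [-0.10, 0.25]])   # genetic covariances, index traits x goal traits
a = np.array([1.00, 0.36])      # relative economic values of the goal traits

# Standard selection-index weights: b = P^-1 G a
b = np.linalg.solve(P, G @ a)
```

Varying the economic value assigned to SCC and tracking the resulting response in the breeding goal is, in outline, how an economic value can be chosen so that overall gain is maximized.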

Keywords: somatic cell count, milk quality, payment system, breeding goal

Procedia PDF Downloads 232
4537 Analysis of Particle Reinforced Metal Matrix Composite Crankshaft

Authors: R. S. Vikaash, S. Vinodh, T. S. Sai Prashanth

Abstract:

Six sigma is a defect reduction strategy enabling modern organizations to achieve business prosperity. Practitioners need to select the best six sigma project among the available alternatives to achieve customer satisfaction. In this circumstance, this article presents a study in which six sigma project selection is formulated as a Multi-Criteria Decision-Making (MCDM) problem, and the best project is found using the Analytic Hierarchy Process (AHP). Five main governing criteria and 14 sub-criteria are formulated. The decision makers' inputs were gathered and computations were performed. The project with the highest value among the set of projects is selected as the best project. Based on the calculations, project "P1" is found to be the best, and further deployment actions have been undertaken in the organization.
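The AHP weighting step can be sketched as follows: priority weights are extracted from a pairwise-comparison matrix via its principal eigenvector, here computed by power iteration. The matrix in the test is a perfectly consistent toy example, not the study's actual expert judgments:

```python
import numpy as np

def ahp_weights(M, iters=1000, tol=1e-12):
    """Priority weights of an AHP pairwise-comparison matrix via power
    iteration on the principal eigenvector."""
    w = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w_new = M @ w
        w_new /= w_new.sum()          # keep the weights summing to 1
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w
```

In a full AHP run, the same routine is applied once for the main criteria and once per criterion for the sub-criteria and alternatives, and each project's overall score is the weight-propagated sum down the hierarchy.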

Keywords: six sigma, project selection, MCDM, analytic hierarchy process, business prosperity

Procedia PDF Downloads 323
4536 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed at the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is binarized by setting the brightest 20% of pixels to 255 and all other pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A bank of Gabor filters is created by varying these parameters in order to extract the important, distinctive features of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. The feature vectors generated from the target images of the military vehicle types are stacked side by side to form a joint database, which is transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form, and l₁-norm analysis of the sparse representation method is applied against the existing database matrix.
As a result, correct recognition is achieved by matching the target images of the military vehicles with the test images by means of the sparse representation method. A classification success of 97% is obtained for SAR images of the different military vehicle types.
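A minimal sketch of the l₁-based classification step is given below. It uses iterative soft-thresholding (ISTA) to solve the l₁-regularized least-squares problem; the paper does not name its solver, and the dictionary and test vectors here are toy data, not SAR features:

```python
import numpy as np

def src_classify(D, labels, y, lam=0.01, iters=500):
    """Sparse-representation classification: solve
    min 0.5*||y - D x||^2 + lam*||x||_1 with ISTA, then assign y to the
    class whose own dictionary atoms reconstruct it with the smallest residual."""
    D = D / np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - D.T @ (D @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```

The key idea is that a test vector is reconstructed almost entirely from the atoms of its own class, so the per-class residual acts as the classifier.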

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 342
4535 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, solution efficiency has been a persistent difficulty, and as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time based Weapon Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently on large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed. These yield lower objective values than the decomposed opt-opt algorithm but require very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effective methods can be developed for using TWTA on the battlefield.
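A greedy WTA heuristic of the general kind described can be sketched as follows. The marginal-value rule used here (assign the pair that most reduces the expected surviving target value) is a common choice for static WTA and is assumed for illustration; it is not necessarily the exact rule of the paper's greedy algorithm:

```python
def greedy_wta(p, values):
    """Greedy heuristic: repeatedly assign the unused launcher and target
    pair with the largest marginal reduction in expected surviving target
    value; p[i][j] is launcher i's kill probability against target j."""
    n_targets = len(values)
    survive = [1.0] * n_targets          # current survival probability per target
    unused = set(range(len(p)))
    assignment = {}
    while unused:
        gain, i, j = max((values[t] * survive[t] * p[l][t], l, t)
                         for l in unused for t in range(n_targets))
        if gain <= 0:
            break
        assignment[i] = j
        survive[j] *= 1.0 - p[i][j]      # target j is now harder to gain on
        unused.remove(i)
    return assignment
```

Because each step discounts a target's remaining value by its survival probability, the heuristic naturally spreads launchers across targets instead of piling them all on the most valuable one.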

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 313
4534 Auto-Tuning of CNC Parameters According to the Machining Mode Selection

Authors: Jenq-Shyong Chen, Ben-Fong Yu

Abstract:

CNC (computer numerical control) machining centers have been widely used for machining different metal components for various industries. For a specific CNC machine, its everyday job is to cut different products with quite different attributes such as material type, workpiece weight, geometry, tooling, and cutting conditions. Theoretically, the dynamic characteristics of the CNC machine should be properly tuned to match each machining job in order to get the optimal machining performance. However, most CNC machines are set with only a standard set of CNC parameters. In this study, we have developed an auto-tuning system which can automatically change the CNC parameters, and hence the machine's dynamic characteristics, according to the selection of machining modes, which are set by the mixed combination of three machine performance indexes: the HO (high surface quality) index, the HP (high precision) index and the HS (high speed) index. The acceleration, jerk, corner error tolerance, oscillation and dynamic bandwidth of the machine's feed axes are changed according to the selection of the machine performance indexes. The proposed auto-tuning system of the CNC parameters has been implemented on a PC-based CNC controller and a three-axis machining center. The measured experimental results have shown the promise of our proposed auto-tuning system.
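A toy sketch of mode-based parameter selection is shown below. The parameter names, all numeric values, and the "take the most conservative value when modes are mixed" blending rule are placeholders assumed for illustration; they are not the controller settings or the blending logic of the study:

```python
# Per-mode feed-axis parameter sets (all values are made-up placeholders).
MODE_PARAMS = {
    "HS": dict(acceleration=8.0, jerk=400.0, corner_tolerance=0.050, bandwidth=60.0),
    "HP": dict(acceleration=3.0, jerk=120.0, corner_tolerance=0.005, bandwidth=30.0),
    "HO": dict(acceleration=2.0, jerk=80.0, corner_tolerance=0.010, bandwidth=20.0),
}

def tune(modes):
    """Blend the parameter sets of the selected performance indexes by taking
    the most conservative (smallest) value of each parameter -- one simple way
    to realise a mixed HS/HP/HO machining-mode selection."""
    if not modes:
        raise ValueError("select at least one machining mode")
    return {key: min(MODE_PARAMS[m][key] for m in modes)
            for key in MODE_PARAMS["HS"]}
```

Under this rule, mixing HS with HP yields HP-like accelerations and corner tolerances, reflecting the idea that precision requirements constrain how aggressively the axes may move.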

Keywords: auto-tuning, CNC parameters, machining mode, high speed, high accuracy, high surface quality

Procedia PDF Downloads 360
4533 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate among other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI interface for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent the landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge).
Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work in GPS-denied situations. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus due to high-rise buildings or trees. UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
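The default path-planning step can be illustrated with a compact A* over a vertex/edge campus graph; the graph layout, coordinates, and edge costs below are made up for the example:

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* over a campus spatial graph. `graph` maps vertex -> list of
    (neighbor, edge_cost); `coords` maps vertex -> (x, y) for the
    straight-line distance heuristic."""
    def h(v):
        (x1, y1), (x2, y2) = coords[v], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    frontier = [(h(start), 0.0, start, [start])]   # (f, g, vertex, path)
    best_g = {}
    while frontier:
        f, g, v, path = heapq.heappop(frontier)
        if v == goal:
            return path, g
        if best_g.get(v, float("inf")) <= g:
            continue                                # already reached v cheaper
        best_g[v] = g
        for nbr, cost in graph.get(v, ()):
            heapq.heappush(frontier,
                           (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None, float("inf")
```

D* Lite builds on the same graph model but incrementally repairs the solution when edge costs change, which is why it is attractive for recomputing routes as the map updates.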

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 117
4532 Application of Extraction Chromatography to the Separation of Sc, Zr and Sn Isotopes from Target Materials

Authors: Steffen Happel

Abstract:

Interest in non-standard isotopes such as Sc-44/47, Zr-89, and Sn-117m is increasing in radiopharmaceutical applications. Methods for the separation of these elements from typical target materials were developed. The methods used in this paper are based on extraction chromatographic resins such as UTEVA, TBP, and DGA resin. Information on the selectivity of the resins (Dw values of selected elements in HCl and HNO3 of varying concentration) will be presented, as well as results of the method development such as elution studies, chemical recoveries, and decontamination factors. The developed methods rely on vacuum-supported separation, allowing for fast and selective separations.

Keywords: elution, extraction chromatography, radiopharmacy, decontamination factors

Procedia PDF Downloads 436
4531 Effect of Pressing Pressure on Mechanical Properties of Elaeis guineensis Jacq. Fronds-Based Composite Board

Authors: Ellisha Iling, Dayang Siti Hazimmah Ali

Abstract:

Experimental composite boards were fabricated from oil palm (Elaeis guineensis Jacq.) frond particles by applying hot press pressures of 5 MPa, 6 MPa and 7 MPa, respectively. The modulus of rupture (MOR) and internal bond strength (IB) of the composite boards, made with a target density of 0.80 g/cm³, were evaluated. The composite board fabricated under a hot press pressure of 5 MPa had MOR and IB values of 16.27 and 4.34 N/mm², respectively. The corresponding values for the board fabricated under 6 MPa were 16.76 and 5.41 N/mm², and those for the board fabricated under 7 MPa were 17.24 and 6.19 N/mm². All composite boards met the MOR and IB requirements stated in the Japanese Industrial Standard (JIS). Based on the results of this work, the mechanical properties of the composite boards improved as the hot press pressure increased. This study revealed that the selection of the applied pressure during fabrication is important for improving the mechanical properties of composite boards.

Keywords: composite board, Elaeis guineensis Jacq. fronds, hot press pressure, mechanical properties

Procedia PDF Downloads 164