Search results for: proposed concept

176 The Use of Graphic Design Elements for Design of Newspaper for Women

Authors: Pibool Waijittragum

Abstract:

This paper aims to identify the content and personality suited to a women’s newspaper. The research methodology employed is a questionnaire derived from a literature review on newspapers and on graphic-element methods for print media design, together with a sample of 12 daily newspapers, in order to acquire an in-depth and comprehensible view of the personality desirable for a women’s newspaper design, the graphic elements related to that personality, and other preferable elements for a women’s newspaper, including its editorial content. Many Thai newspapers offer women’s documentary and column space; with their feminine look, most appear with warm tones and a friendly mood in their headlines, contents, illustrations and graphics. The study found that the most desirable personalities for a women’s newspaper design in Thailand are Modern, Chic and Natural. Each personality has significant graphic elements as follows: 1. Modern: the composition uses a graduation pattern that creates attractiveness through an anomalous-alignment layout grid and an outstanding structure to create focal points and dynamic movement; dark-to-black color with a narrow, limited hue coupled with bright color tones; a round Thai font style suits this concept, with harmonious proportion and a consistent stroke giving an urban, polite look. 2. Chic: a proper composition with distinctive scale, using rhythmic repetition and a contrast of scale to draw the reader’s attention; vivid and bright color tones with extensive hues coupled with similar color tones; and a round Thai font style with a light stroke and consistent line. 3. Natural: a proper composition using rhythmic repetition that creates a focal point through striking images and harmonious perspective; warm color tones with restricted hues that appear natural; duotone color works well with a gradually increasing gradient; and a handwriting-style Thai font with an inconsistent stroke. Ten types of daily content were revealed to be the most desirable for Thai women readers: Daily News, Economics News, Education News, Entertainment News, International News, Political News, Public Health News, Scientific News, Social News and Sports News. In addition, 16 topics were identified as highly desirable for Thai women readers: Art and Culture, Automobile, Classified, Special Scoop, Editorial, Advertisement, Entertainment, Health and Quality of Life, History, Horoscope, Lifestyle and Fashion, Literature, Nature - Environment and Tourism, Night Life, Stars and Jet Set Gossip, and Women’s Issues.

Keywords: Graphic design elements, women’s newspaper, newspaper design.

175 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

Authors: Yuan-Jye Tseng, Ching-Yen Chen

Abstract:

In the design cycle, a main design task is to determine the external shape of the product. The external shape is one of the key factors affecting customers’ preferences and, in turn, their motivation to buy the product, especially for a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences therefore needs to be studied to enhance the customer’s purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape and the customer preferences of a product. In the first stage, the names of the geometric features are collected and evaluated from the data of specified internet web pages using the developed text miner. A geometric feature is treated as key if its number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are then collected from the web pages using the text miner. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups. Several design suggestion cases can then be proposed, for example, a large model, a mid-size model and a mini model of a mobile phone. A customer preference index is developed by evaluating the numerical data of each key geometric feature of the design suggestion cases, and the case with the top-ranking customer preference index can be selected as the final design of the product. In this paper, a notebook computer is used as an example product. The results show that the external shape of a product can be used to drive customer preferences, and that the presented model is useful for determining a suitable external shape that increases customer preferences.
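
The two-stage procedure described above lends itself to a compact sketch: cluster the numerical values of the key geometric features, then score each cluster with a preference index. The snippet below is a minimal illustration with scikit-learn; the feature set, the sample values and the preference weights are invented placeholders rather than the paper's mined web data.

```python
# Sketch: grouping candidate external shapes by key geometric features and
# ranking the groups with a simple preference index. Features, values and
# weights are hypothetical placeholders, not data from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical key geometric features mined from product web pages:
# [length_mm, width_mm, thickness_mm, screen_diagonal_in]
shapes = np.array([
    [340, 235, 18, 15.6],
    [322, 218, 16, 14.0],
    [310, 212, 15, 13.3],
    [360, 245, 20, 17.3],
    [305, 205, 14, 12.5],
    [356, 243, 19, 17.0],
])

X = StandardScaler().fit_transform(shapes)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Hypothetical customer preference index: thinner designs and larger screens
# score higher; in practice the weights would come from survey or web data.
weights = np.array([-0.2, -0.2, -0.4, +0.6])
pref_index = (X * weights).sum(axis=1)

for k in range(3):
    members = np.where(kmeans.labels_ == k)[0]
    print(f"design group {k}: shapes {members.tolist()}, "
          f"mean preference index {pref_index[members].mean():.2f}")
```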

Keywords: Cluster analysis, customer preferences, design evaluation, design for customer preferences, product design.

174 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach

Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

As the greenhouse effect has been recognized as a serious environmental problem, interest in carbon dioxide (CO2) emissions, which make up the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the world’s total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In parallel, the performance based design (PBD) methodology based on nonlinear analysis has been developed vigorously since the 1994 Northridge Earthquake to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or guarantee building performance. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize its CO2 emissions and cost while satisfying a specific seismic performance objective (collapse prevention under the maximum considered earthquake) and prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design shows that minimized CO2 emissions and cost were obtained while the specified seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
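
The optimization couples two objectives (embodied CO2 and cost) under a seismic performance constraint. As a minimal, library-free illustration of the non-dominated sorting idea behind NSGA-II, the sketch below evaluates a set of hypothetical candidate designs and extracts the Pareto front; the surrogate objective functions, the drift limit and all numbers are invented for illustration and are not results from the paper.

```python
# Sketch: Pareto filtering of candidate frame designs on (CO2, cost), keeping
# only designs that satisfy a collapse-prevention drift limit. NSGA-II would
# evolve such a population over many generations; here we only sort it once.
import random

random.seed(1)

def evaluate(design):
    """Return (co2_tonnes, cost_kusd, max_drift_ratio) for a candidate design.
    The mapping below is a made-up surrogate; in practice it comes from
    nonlinear structural analysis of the 4-story, 4-span frame."""
    steel_ratio, section_scale = design
    co2 = 120 * section_scale + 300 * steel_ratio
    cost = 90 * section_scale + 450 * steel_ratio
    drift = 0.035 / (section_scale * (1 + 4 * steel_ratio))
    return co2, cost, drift

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

population = [(random.uniform(0.01, 0.04), random.uniform(0.8, 1.6))
              for _ in range(200)]
feasible = []
for d in population:
    co2, cost, drift = evaluate(d)
    if drift <= 0.02:               # collapse-prevention drift limit (assumed)
        feasible.append(((co2, cost), d))

pareto = [(f, d) for f, d in feasible
          if not any(dominates(g, f) for g, _ in feasible)]
for (co2, cost), d in sorted(pareto):
    print(f"CO2={co2:6.1f} t, cost={cost:6.1f} kUSD, design={d}")
```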

Keywords: CO2 emissions, performance based design, optimization, sustainable design.

173 Data Hiding in Images in Discrete Wavelet Domain Using PMM

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Over the last two decades, due to the hostile environment of the internet, concerns about the confidentiality of information have increased at a phenomenal rate. To safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial-domain data hiding techniques, the information is embedded directly in the image plane itself. In transform-domain techniques, the image is first changed from the spatial domain to some other domain and the secret information is then embedded, so that it remains more secure against attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively low robustness; in contrast, algorithms in transform domains such as the DCT and DWT have a certain robustness against some multimedia processing. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a gray scale image. The proposed approach converts the gray level image to the transform domain using a discrete integer wavelet transform implemented through the lifting scheme. It performs a 2-D lifting wavelet decomposition of the cover image with the lifted Haar wavelet and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
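
As a rough illustration of the decomposition step, the sketch below uses PyWavelets to compute a single-level 2-D Haar transform of a cover image and perturbs the detail coefficients to carry bits. It is only a stand-in: the paper uses an integer lifting implementation and the Pixel Mapping Method (PMM) for embedding, whereas this sketch uses a floating-point Haar transform and a naive sign-based embedding invented for brevity.

```python
# Sketch: 1-level 2-D Haar DWT of a cover image and a toy embedding in the
# diagonal detail band CD. Not the paper's PMM; just the surrounding plumbing.
import numpy as np
import pywt

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in image

# Decompose: approximation CA and detail bands CH, CV, CD
CA, (CH, CV, CD) = pywt.dwt2(cover, "haar")

# Toy embedding: push selected CD coefficients to +delta/-delta for bits 1/0.
bits = rng.integers(0, 2, size=32)
delta = 4.0
flat = CD.flatten()
flat[: bits.size] = np.where(bits == 1, delta, -delta)
CD_stego = flat.reshape(CD.shape)

stego = pywt.idwt2((CA, (CH, CV, CD_stego)), "haar")

# Extraction: re-decompose the stego image and read the coefficient signs.
_, (_, _, CD_rx) = pywt.dwt2(stego, "haar")
recovered = (CD_rx.flatten()[: bits.size] > 0).astype(int)
print("bit errors:", int(np.sum(recovered != bits)))
```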

Keywords: Cover Image, Pixel Mapping Method (PMM), Stego Image, Integer Wavelet Transform.

172 A Study of Shear Stress Intensity Factor of PP and HDPE by a Modified Experimental Method together with FEM

Authors: Md. Shafiqul Islam, Abdullah Khan, Sharon Kao-Walter, Li Jian

Abstract:

Shear testing is one of the most complex testing areas, where the available methods and specimen geometries differ from each other. Therefore, a modified shear test specimen (MSTS), combining the simple uniaxial test with a zone of interest (ZOI), is tested, which gives an almost pure shear state. In this study, the material parameters of polypropylene (PP) and high density polyethylene (HDPE) are first measured by tensile tests on a dogbone-shaped specimen. These parameters are then used as input for the finite element analysis. Secondly, the specially designed specimen (MSTS) is used to perform shear tests in a tensile testing machine, yielding results in terms of forces, extension, crack initiation, etc. Scanning Electron Microscopy (SEM) is also performed on the shear fracture surface to study the material behavior. These experiments are then simulated by the finite element method and compared with the experimental results in order to confirm the simulation model. The shear stress state is inspected to assess the usability of the proposed shear specimen. Finally, a geometry correction factor can be established for these two materials, for this specific loading and notched geometry, using Linear Elastic Fracture Mechanics (LEFM). From these results, the strain energy of shear failure and the shear stress intensity factor (SIF) of these two polymers are discussed for the specific application of screw cap opening of medical or food packages with a tamper-evident safety solution.
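
For reference, the LEFM relation alluded to above is usually written with a geometry correction factor scaling the nominal shear stress; the symbols below follow standard fracture-mechanics notation rather than values from this study.

```latex
% Mode-II stress intensity factor with geometry correction factor Y,
% nominal shear stress \tau and crack (notch) length a:
K_{II} = Y \, \tau \sqrt{\pi a}
% The critical value K_{IIc} and the associated strain energy release rate
% G_{II} = K_{II}^2 / E' characterize shear failure of the tested polymer.
```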

Keywords: Shear test specimen, Stress intensity factor, Finite Element simulation, Scanning electron microscopy, Screw cap opening.

171 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under the Head of Line (HoL) priority discipline) and finite-capacity buffers with thresholds. In this paper, the external traffic is modelled using the Poisson process and the service times are modelled using the exponential distribution. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. This computer network behaves as follows. A task which finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. In the opposite case, a “no two priority services in succession" procedure (preventing a possible overflow in the first buffer) is applied to fed-back tasks. Using an open Markovian queuing schema with blocking, priority feedback service and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate: it is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculations of the main measures of effectiveness. Consequently, efficient expressions of low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback and thresholds can provide accurate performance estimates of linked-in-series networks.
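
The closed-form analysis rests on solving the steady-state equations of a finite Markov chain built from the two-dimensional state graph. The sketch below shows that step on a deliberately tiny birth-death chain (a single finite buffer) using NumPy; the full three-station model with thresholds and HoL feedback only changes how the generator matrix Q is filled in, and the rates here are invented for illustration.

```python
# Sketch: steady-state distribution of a small finite Markov chain by solving
# pi Q = 0 with sum(pi) = 1. Rates are arbitrary illustrative values.
import numpy as np

lam, mu, K = 2.0, 3.0, 4          # arrival rate, service rate, buffer size
n = K + 1                          # states: 0..K tasks in the buffer

Q = np.zeros((n, n))
for i in range(n):
    if i < K:
        Q[i, i + 1] = lam          # arrival (blocked when the buffer is full)
    if i > 0:
        Q[i, i - 1] = mu           # service completion
    Q[i, i] = -Q[i].sum()

# Replace one balance equation with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("steady-state probabilities:", np.round(pi, 4))
print("mean number in buffer:", float(pi @ np.arange(n)))
print("blocking probability:", float(pi[-1]))
```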

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-base networks.

170 Director Compensation, CEO Duality, State Ownership, and Firm Performance in China: Proof from Panel Data of Publicly Listed Enterprises from 1999 to 2020

Authors: Wanda Luen-Wun Siu, Xiaowen Zhang

Abstract:

This paper offers the first systematic evidence on how director remuneration relates to enterprise earnings in listed firms in China, given that most existing evidence focuses on cross-sectional data or data covering a short span of time. Using full economic and business panel data on China’s publicly listed enterprises from 1999 to 2020, over two decades, from the China Stock Market & Accounting Research database, we find statistically significant positive associations between director pay and firm performance in privately owned firms over this period, supporting agency theory. In contrast, among state-owned enterprises there is a reverse relation between director compensation and firm financial performance, which contributes to the existing literature. The results also reveal that state-owned enterprises performed financially as well as private enterprises. Such findings suggest that state ownership may align officials’ career incentives with party priorities rather than pecuniary incentives. Also, CEO duality enhanced firm performance. As such, allegiance to the party and possible advancement to an upper-level political position would motivate company directors in state-owned enterprises, whereas directors in privately owned enterprises may be motivated by monetary incentives. In addition, a statistical regression model is proposed and tested to obtain the results for the performance of state-owned enterprises. Finally, some suggestions are made on how to improve the institutional management of government-owned corporations in China.
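
A panel regression of this kind can be sketched as a fixed-effects model with firm-clustered standard errors, allowing the pay effect to differ for state-owned enterprises. The snippet below uses statsmodels on synthetic data; the variable names (roa, director_pay, ceo_duality, state_owned) and the generated numbers are assumptions for illustration, not the paper's CSMAR dataset.

```python
# Sketch: firm- and year-fixed-effects regression of performance on director
# pay, with the pay effect allowed to reverse for state-owned enterprises and
# firm-clustered standard errors. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 30, 10
df = pd.DataFrame({
    "firm_id": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2011, 2011 + n_years), n_firms),
})
df["state_owned"] = (df["firm_id"] % 2).to_numpy()
df["ceo_duality"] = rng.integers(0, 2, len(df))
df["director_pay"] = rng.normal(12.0, 1.0, len(df))      # e.g. log of pay
# Built-in effect: pay helps performance in private firms, reverses in SOEs.
df["roa"] = (0.04 * df["director_pay"] * (1 - df["state_owned"])
             - 0.02 * df["director_pay"] * df["state_owned"]
             + 0.01 * df["ceo_duality"]
             + rng.normal(0.0, 0.3, len(df)))

result = smf.ols(
    "roa ~ director_pay + director_pay:state_owned + ceo_duality"
    " + C(firm_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# A negative interaction term mirrors the reversed pay-performance relation
# reported for state-owned enterprises.
print(result.params[["director_pay", "director_pay:state_owned"]])
```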

Keywords: China’s listed Firm, director compensation, CEO duality, firm performance, panel analysis.

169 Mobile Learning in Developing Countries: A Synthesis of the Past to Define the Future

Authors: Harriet Koshie Lamptey, Richard Boateng

Abstract:

Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. Steady progress in wireless technologies and the portability of communication devices continue to broaden the scope and use of mobiles. With the convergence of Web functionality onto mobile platforms and the affordability and availability of mobile technology, m-learning has the potential of being the next prevalent channel of education in both formal and informal settings. There is substantive literature on developed countries but the state in developing countries (DCs) however appears vague. This paper is a synthesis of extant literature on mobile learning in DCs. The research interest is based on the fact that in DCs, mobile communication and internet connectivity are popular. However, its use in education is under explored. There are some reviews on the state, conceptualizations, trends and teacher education, but to the authors’ knowledge, no study has focused on mobile learning adoption and integration issues. This study examines issues and gaps associated with its adoption and integration in DCs higher education institutions. A qualitative build-up of literature was conducted using articles pooled from electronic databases (Google Scholar and ERIC). To enable criteria for inclusion and incorporate diverse study perspectives, search terms used were m-learning, DCs, higher education institutions, challenges, benefits, impact, gaps and issues. The synthesis revealed that though mobile technology has diffused globally, its pedagogical pursuit in DCs remains quite low. The absence of a mobile Web and the difficulty of resource conversion into mobile format due to lack of funding and technical competence is a stumbling block. Again, the lack of established design and implementation rules to guide the development of m-learning platforms in DCs is a hindrance. The absence of access restrictions on devices poses security threats to institutional systems. Negative perceptions that devices are taking over faculty roles lead to resistance in some situations. Resistance to change can be a hindrance to the acceptance and success of new systems. Lack of interest for m-learning is also attributed to lower technological literacy levels of the underprivileged masses. Scholarly works on m-learning in DCs is yet to mature. Most technological innovations are handed down from developed countries, and this constantly creates a lag for DCs. Lack of theoretical grounding was also identified which reduces the objectivity of study reports. The socio-cultural terrain of DCs results in societies with different views and needs that have been identified as a hindrance to research. Institutional commitment decisions, adequate funding for the necessary infrastructural development as well as multiple stakeholder participation is important for project success. Evidence suggests that while adoption decisions are readily made, successful integration of the concept for its full benefits to be realized is often neglected. Recommendations to findings were made to provide possible remedies to identified issues.

Keywords: Developing countries, higher education institutions, mobile learning, literature review.

168 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as the information carrier, due to their capability of propagating over long distances; in addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show a high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a decrease in bit rate. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
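
The coherent-demodulation step for the BPSK case can be summarized in a few lines: correlate the received waveform with a synchronized reference carrier over each bit interval and decide on the sign. The sketch below does this on a synthetic noisy channel in NumPy; the carrier frequency, sampling rate and noise level are arbitrary choices, and the physical Lamb-wave channel (transducers, dispersion) is not modeled.

```python
# Sketch: BPSK modulation and coherent demodulation on an additive-noise
# channel, with bit error counting. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
fs, fc, cycles_per_bit = 1_000_000, 50_000, 10      # Hz, Hz, cycles per bit
samples_per_bit = int(fs / fc * cycles_per_bit)

bits = rng.integers(0, 2, 200)
t_bit = np.arange(samples_per_bit) / fs
carrier = np.sin(2 * np.pi * fc * t_bit)

# BPSK: bit 1 -> +carrier, bit 0 -> -carrier (180 degree phase shift)
tx = np.concatenate([(2 * b - 1) * carrier for b in bits])
rx = tx + rng.normal(0, 0.8, tx.size)               # noisy channel

# Coherent demodulation: multiply by the synchronized reference carrier,
# integrate over each bit interval, then threshold at zero.
ref = np.tile(carrier, bits.size)
corr = (rx * ref).reshape(bits.size, samples_per_bit).sum(axis=1)
decided = (corr > 0).astype(int)

print("bit error percentage:", 100 * np.mean(decided != bits))
```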

Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.

167 E-Learning Recommender System Based on Collaborative Filtering and Ontology

Authors: John Tarus, Zhendong Niu, Bakhti Khadidja

Abstract:

In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments and to providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in helping learners find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings, and do not take learner characteristics into account in the recommendation process. Therefore, conventional recommendation techniques cannot make accurate and personalized recommendations in an e-learning environment. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that the proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
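
A minimal version of the hybrid idea, blending rating-based similarity with a learner-profile similarity, can be sketched with user-based collaborative filtering. In the snippet below the ontology contribution is reduced to a simple match score over learner characteristics (learning style, skill level, study level); the actual ontology reasoning in the paper is richer, and all ratings, profiles and the blend weight are invented.

```python
# Sketch: user-based CF where the similarity used for prediction is a blend of
# rating similarity (cosine) and learner-profile similarity from an ontology.
import numpy as np

ratings = np.array([            # learners x learning materials, 0 = unrated
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 2],
    [0, 1, 4, 5, 0],
], dtype=float)

# Learner characteristics: [learning_style, skill_level, study_level] codes
profiles = np.array([[0, 2, 1], [0, 2, 1], [1, 1, 2], [1, 0, 2]])

def cosine(a, b):
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

def profile_sim(p, q):
    return float(np.mean(p == q))        # fraction of matching characteristics

alpha = 0.6                              # weight on rating similarity (assumed)

def predict(user, item):
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        sim = (alpha * cosine(ratings[user], ratings[other])
               + (1 - alpha) * profile_sim(profiles[user], profiles[other]))
        scores += sim * ratings[other, item]
        weights += abs(sim)
    return scores / weights if weights else 0.0

print("predicted rating of learner 0 for material 2:", round(predict(0, 2), 2))
```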

Keywords: Collaborative filtering, e-learning, ontology, recommender system.

166 Analysis of the Operational Performance of Three Unconventional Arterial Intersection Designs: Median U-Turn, Superstreet and Single Quadrant

Authors: Hana Naghawi, Khair Jadaan, Rabab Al-Louzi, Taqwa Hadidi

Abstract:

This paper aims to evaluate and compare the operational performance of three Unconventional Arterial Intersection Designs (UAIDs), the Median U-Turn, the Superstreet and the Single Quadrant Intersection, using real traffic data. For this purpose, the heavily congested signalized intersection of Wadi Saqra in Amman was selected. The effect of implementing each of the proposed UAIDs was evaluated not only for the isolated Wadi Saqra signalized intersection but also for the arterial road including both surrounding intersections. The operational performance of the isolated intersection was assessed on the basis of the level of service (LOS), expressed in terms of control delay and volume-to-capacity ratio. The measures used to evaluate the operational performance on the arterial road included traffic progression, stopped delay per vehicle, number of stops and travel speed. The analysis was performed using the SYNCHRO 8 microscopic software. The simulation results showed that all three selected UAIDs outperformed the conventional intersection design in terms of control delay, but only the Single Quadrant Intersection design improved the main intersection LOS from F to B. The results also indicated that the Single Quadrant Intersection design increased the average travel speed by 52% and decreased the average stopped delay by 34% on the selected corridor compared to the corridor with the conventional intersection design. On the basis of these results, it can be concluded that the Median U-Turn and the Superstreet do not perform best under heavy traffic volumes.

Keywords: Median U-turn, single quadrant, superstreet, unconventional arterial intersection design.

165 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: B. Golchin, N. Riahi

Abstract:

One of the most significant issues to have received much attention in recent years is the recognition of sentiments and emotions in social media texts. The analysis of sentiments and emotions aims to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions and attitudes of people about their products, services or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this social network, users share their information and opinions about personal issues, policies, products, events, etc., and the availability of its data makes it well suited to the classification of emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory networks and a Convolutional network. The results obtained from this study, with an average accuracy of 93%, show that the proposed framework extracts good results and improves accuracy compared to previous work.
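
The network described (stacked bidirectional LSTM layers combined with a convolutional branch over the tweet embeddings, ending in a four-class softmax) can be approximated with a short Keras definition. The layer sizes, vocabulary size, sequence length and the way the two branches are merged below are placeholders and may differ from the paper's tuned architecture.

```python
# Sketch: a BiLSTM + CNN text classifier for four emotion classes in Keras.
# Hyperparameters are illustrative, not the paper's tuned values.
from tensorflow.keras import layers, models

vocab_size, seq_len, embed_dim, n_classes = 20_000, 60, 128, 4

inputs = layers.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(inputs)

# Recurrent branch: two stacked bidirectional LSTMs
r = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
r = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(r)

# Convolutional branch over the same embedded sequence
c = layers.Conv1D(128, kernel_size=5, activation="relu", padding="same")(x)

merged = layers.Concatenate()([r, c])
pooled = layers.GlobalMaxPooling1D()(merged)
pooled = layers.Dropout(0.3)(pooled)
outputs = layers.Dense(n_classes, activation="softmax")(pooled)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, emotion_labels, epochs=5, validation_split=0.1)
```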

Keywords: emotion classification, sentiment analysis, social networks, deep neural networks

164 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be handled well by introducing low-cost and easy-to-deploy femtocells. The spectrum interference issue becomes more critical as value-added multimedia services grow increasingly in two-tier cellular networks, and spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game in which the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function on the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing the spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and that users’ downlink quality of service can be satisfied. Besides, the simulation results show that the average spectrum efficiency in the cellular network can be significantly improved.
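
The non-cooperative game can be illustrated with a simple best-response loop: each femto base station repeatedly picks the channel that minimizes its co-channel interference cost, where the cost here uses a logarithmic function of distance between stations sharing a channel. The topology, the number of channels and the exact cost expression below are invented for illustration; the paper's distance-weight-ratio utility is defined differently.

```python
# Sketch: iterated best response for downlink channel selection among femto
# base stations. Interference is counted only between stations on the same
# channel, with a log-of-distance based cost (illustrative form).
import numpy as np

rng = np.random.default_rng(7)
n_bs, n_channels = 8, 3
pos = rng.uniform(0, 100, size=(n_bs, 2))            # femtocell positions (m)
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2) + np.eye(n_bs)

def cost(bs, channel, assignment):
    """Co-channel interference cost seen by one station (illustrative)."""
    same = [j for j in range(n_bs) if j != bs and assignment[j] == channel]
    # Closer co-channel neighbours cost more; -log(d/d_max) grows as d shrinks.
    return sum(-np.log(dist[bs, j] / dist.max()) for j in same)

assignment = rng.integers(0, n_channels, n_bs)
for _ in range(20):                                   # best-response rounds
    changed = False
    for bs in range(n_bs):
        best = min(range(n_channels), key=lambda c: cost(bs, c, assignment))
        if best != assignment[bs]:
            assignment[bs], changed = best, True
    if not changed:                                   # no one wants to deviate
        break

print("channel assignment:", assignment.tolist())
print("total interference cost:",
      round(sum(cost(b, assignment[b], assignment) for b in range(n_bs)), 3))
```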

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

163 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials, which are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated to the main categories amongst existing groups of ecological phenomena interacting with the social and medical sciences. The hypothesis, nevertheless, has a nonlinear interaction with its natural environment, an ‘interactional cycle’ for exchanging photon energy with molecules without changes in topology (i.e., chemical transformation into products does not propagate any changes or variation in the network topology of the physical configuration). The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. The properties of the photonic band gap material are therefore engineered to operate a sensor at any wavelength and to conduct a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, the photonic band gap films have robust mechanical functions that offer new substrates for surface chemistry to understand the molecular design structure, and to create sensing chip surfaces with different concentrations of DNA sequences in solution in order to observe and track the surface mode resonance under the influence of processes that take place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies which are automated, real-time, reliable, reproducible and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules by refractive index sensing, and of antibody-antigen reactions with DNA or protein binding. Ultimately, the controversial aspects of molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing the environment’s mutual contribution to the investigation of changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, engineering topology, photonic system molecular structure, biosensor

162 The Significance of Cultural Risks for Western Consultants Executing Gulf Cooperation Council Megaprojects

Authors: Alan Walsh, Peter Walker

Abstract:

Differences in commercial, professional and personal cultural traditions between western consultants and project sponsors in the Gulf Cooperation Council (GCC) region are potentially significant in the workplace, and this can impact on project outcomes. These cultural differences can, for example, result in conflict amongst senior managers, which can negatively impact the megaproject. New entrants to the GCC often experience ‘culture shock’ as they attempt to integrate into their unfamiliar environments. Megaprojects are unique ventures with individual project characteristics, which need to be considered when managing their associated risks. Megaproject research to date has mostly ignored the significance of the absence of cultural congruence in the GCC, which is surprising considering that there are large volumes of megaprojects in various stages of construction in the GCC. An initial step to dealing with cultural issues is to acknowledge culture as a significant risk factor (SRF). This paper seeks to understand the criticality for western consultants to address these risks. It considers the cultural barriers that exist between GCC sponsors and western consultants and examines the cultural distance between the key actors. Initial findings suggest the presence to a certain extent of ethnocentricity. Other cultural clashes arise out of a lack of appreciation of the customs, practices and traditions of ‘the Other’, such as the need for avoiding public humiliation and the hierarchal significance rankings. The concept and significance of cultural shock as part of the integration process for new arrivals are considered. Culture shock describes the state of anxiety and frustration resulting from the immersion in a culture distinctly different from one's own. There are potentially substantial project risks associated with underestimating the process of cultural integration. This paper examines two distinct but intertwined issues: the societal and professional culture differences associated with expatriate assignments. A case study examines the cultural congruences between GCC sponsors and American, British and German consultants, over a ten-year cycle. This provides indicators as to which nationalities encountered the most profound cultural issues and the nature of these. GCC megaprojects are typically intensive fast track demanding ventures, where consultant turnover is high. The study finds that building trust-filled relationships is key to successful project team integration and therefore, to successful megaproject execution. Findings indicate that both professional and social inclusion processes have steep learning curves. Traditional risk management practice is to approach any uncertainty in a structured way to mitigate the potential impact on project outcomes. This research highlights cultural risk as a significant factor in the management of GCC megaprojects. These risks arising from high staff turnover typically include loss of project knowledge, delays to the project, cost and disruption in replacing staff. This paper calls for cultural risk to be recognised as an SRF, as the first step to developing risk management strategies, and to reduce staff turnover for western consultants in GCC megaprojects.

Keywords: Western consultants in megaprojects, national culture impacts on GCC Megaprojects, significant risk factors in megaprojects, professional culture in megaprojects.

161 Review of Strategies for Hybrid Energy Storage Management System in Electric Vehicle Application

Authors: Kayode A. Olaniyi, Adeola A. Ogunleye, Tola M. Osifeko

Abstract:

Electric Vehicles (EVs) appear to be gaining increasing patronage as a feasible alternative to Internal Combustion Engine Vehicles (ICEVs), owing to their low emissions and high operating efficiency. EV energy storage systems are required to handle high energy and power density capacities constrained by limited space, operating temperature, weight and cost. The choice of strategies for energy storage evaluation, monitoring and control remains a challenging task. This paper presents a review of the various energy storage technologies and recent research on battery evaluation techniques used in EV applications. It also underscores strategies for hybrid energy storage management and control schemes for improving EV stability and reliability. The study reveals that, despite the advances recorded in battery technologies, there is still no cell which possesses both the optimum power and energy densities, among other requirements, for EV application. However, combining two or more energy storage devices in a hybrid arrangement, so that the advantageous attributes of each device can be utilized, is a promising solution. The review also reveals that State-of-Charge (SoC) is the most crucial quantity for battery estimation. The conventional method of SoC measurement is, however, questioned in the literature, and adaptive algorithms that include all models of disturbances are being proposed. The review further suggests that heuristic-based approaches are commonly adopted in the development of strategies for hybrid energy storage system management. The alternative, optimization-based approach is found to be more accurate but is memory and computationally intensive, and as such is not recommended for most real-time applications.
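
As context for the SoC discussion, the conventional coulomb-counting estimate mentioned above simply integrates the measured current over time relative to the cell capacity. The sketch below shows that calculation on synthetic data, together with the drift it suffers when the current sensor has a bias, which is what motivates the adaptive, model-based estimators cited in the review; the capacity, current profile and bias are invented values.

```python
# Sketch: coulomb-counting state-of-charge estimation and the drift caused by
# a small current-sensor bias. All values are synthetic.
import numpy as np

capacity_ah = 50.0                     # cell capacity
dt_h = 1.0 / 3600.0                    # 1 s sampling, expressed in hours
t = np.arange(0, 3600 * 2)             # two hours of driving
true_current = 20.0 * np.sin(2 * np.pi * t / 1800) + 5.0   # A, discharge > 0

soc_true = 0.9 - np.cumsum(true_current) * dt_h / capacity_ah

measured = true_current + 0.3          # 0.3 A sensor bias (assumed)
soc_cc = 0.9 - np.cumsum(measured) * dt_h / capacity_ah

print(f"true SoC after 2 h:      {soc_true[-1]:.4f}")
print(f"coulomb-counting SoC:    {soc_cc[-1]:.4f}")
print(f"accumulated drift:       {abs(soc_cc[-1] - soc_true[-1]):.4f}")
```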

Keywords: Hybrid electric vehicle, hybrid energy storage, battery state estimation, state of charge, state of health.

160 Exploring Additional Intention Predictors within Dietary Behavior among Type 2 Diabetes

Authors: D. O. Omondi, M. K. Walingo, G. M. Mbagaya

Abstract:

Objective: This study explored the possibility of integrating Health Belief concepts as additional predictors of the intention to adopt a recommended diet category within the Theory of Planned Behavior (TPB). Methods: The study adopted a Sequential Exploratory Mixed Methods approach. Qualitative data were generated on attitude, subjective norm, perceived behavioral control and perceptions of predetermined diet categories, including perceived susceptibility, perceived benefits, perceived severity and cues to action. Synthesis of the qualitative data was done using a constant comparative approach during phase 1. A survey tool developed from the qualitative results was used to collect information on the same concepts from 237 eligible Type 2 diabetics. Data analysis included the use of Structural Equation Modeling in Analysis of Moment Structures to explore the possibility of including perceived susceptibility, perceived benefits, perceived severity and cues to action as additional intention predictors in a single nested model. Results: Two models, one nested on the traditional TPB model {χ2 = 223.3, df = 77, p = .02, χ2/df = 2.9; TLI = .93; CFI = .91; RMSEA (90CI) = .090 (.039, .146)} and the newly proposed Planned Behavior Health Belief Model (PBHB) {χ2 = 743.47, df = 301, p = .019; TLI = .90; CFI = .91; RMSEA (90CI) = .079 (.031, .14)}, passed the goodness-of-fit tests based on the common fit indicators used. Conclusion: The newly developed PBHB model ranked higher than the traditional TPB model with reference to the chi-square ratios (PBHB: χ2/df = 2.47, p = 0.19, against TPB: χ2/df = 2.9, p = 0.02). The integrated model can be used to motivate Type 2 diabetics towards healthy eating.

Keywords: Theory, intention, predictors, mixed methods design.

159 Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products) for Higher Education

Authors: J. Miranda, D. Chavarría-Barrientos, M. Ramírez-Cadena, M. E. Macías, P. Ponce, J. Noguez, R. Pérez-Rodríguez, P. K. Wright, A. Molina

Abstract:

Higher education methods need to evolve because the new generations of students are learning in different ways. One way is by adopting emergent technologies, new learning methods and promoting the maker movement. As a result, Tecnologico de Monterrey is developing Open Innovation Laboratories as an immediate response to educational challenges of the world. This paper presents an Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products). The Open Innovation Laboratory is composed of a set of specific resources where students and teachers use them to provide solutions to current problems of priority sectors through the development of a new generation of products. This new generation of products considers the concepts Sensing, Smart, and Sustainable. The Open Innovation Laboratory has been implemented in different courses in the context of New Product Development (NPD) and Integrated Manufacturing Systems (IMS) at Tecnologico de Monterrey. The implementation consists of adapting this Open Innovation Laboratory within the course’s syllabus in combination with the implementation of specific methodologies for product development, learning methods (Active Learning and Blended Learning using Massive Open Online Courses MOOCs) and rapid product realization platforms. Using the concepts proposed it is possible to demonstrate that students can propose innovative and sustainable products, and demonstrate how the learning process could be improved using technological resources applied in the higher educational sector. Finally, examples of innovative S3 products developed at Tecnologico de Monterrey are presented.

Keywords: Active learning, blended learning, maker movement, new product development, open innovation laboratory.

158 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
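
The core construction, splitting the latent representation into a supervised low-dimensional part zP and an unsupervised Gaussian part zN, can be sketched without the flow itself. Below, zP is taken from the linear predictor of a logistic regression (a stand-in for the posterior of the elementary predictive model) and zN is an independent Gaussian vector; the invertible NF mapping between y and [zP, zN], which AP-CDE trains end to end, is deliberately not implemented, and all data are synthetic.

```python
# Sketch: assembling the augmented latent z = [zP, zN] used by AP-CDE.
# zP comes from a simple predictive model tied to x; zN is an independent
# Gaussian for x-unrelated variation. The normalizing-flow map between the
# high-dimensional y and z is NOT implemented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_y = 500, 32                         # samples and response dimension
x = rng.normal(size=(n, 3))              # low-dimensional predictor
labels = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

# Elementary predictive model for x: logistic regression.
clf = LogisticRegression().fit(x, labels)

# zP: low-dimensional supervised component (here the predicted log-odds).
z_p = clf.decision_function(x).reshape(-1, 1)

# zN: high-dimensional independent Gaussian for x-unrelated variation in y.
z_n = rng.normal(size=(n, d_y - z_p.shape[1]))

z = np.hstack([z_p, z_n])                # latent that the flow would map to y
print("latent shape:", z.shape, "supervised dims:", z_p.shape[1])
```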

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

157 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta

Abstract:

Over the last decade, due to climate change and a strategy of natural resources preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), a microalgae bioeconomy can supply the food, feed, pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated over a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for a microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalgae Chlorella sorokiniana were cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the further extraction of value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. The metabolic need for nutrients is met by adding micro- and macro-nutrients to the medium to ensure autotrophic growth conditions for the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction is conducted in a repeated-batch semi-automatic mode using the hot extraction method according to Randall. As solvents, hexane and ethanol are used at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation between 8.1% and 13.9%. The highest percentage of extracted biomass was reached with a sample pretreated by microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was further converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with the protein content being in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues are used as substrate for anaerobic digestion on the laboratory scale. The methane concentration, which was measured on a daily basis, showed some variation for different samples after the extraction steps but was in the range between 48% and 55%. The CO2 which is formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be used within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.

Keywords: Bioeconomy, lipids, microalgae, proteins, saccharides.

156 A Conceptual Framework and a Mathematical Equation for Managing Construction-Material Waste and Cost Overruns

Authors: Saidu Ibrahim, Winston M. W. Shakantu

Abstract:

The problem of construction material waste remains unresolved, as a significant percentage of the materials delivered to some project sites end up as waste, which may result in additional project cost. Cost overrun is a problem which affects 90% of completed projects in the world; the argument on how to eliminate it has been ongoing for the past 70 years, but there is neither substantial improvement nor a significant solution for mitigating its detrimental effects. Research has proposed various approaches to managing construction cost overruns and material waste; nonetheless, these studies fail to give a clear indication of the framework and the equation for managing construction material waste and cost overruns. Hence, this research aims to develop a conceptual framework and a mathematical equation for managing material waste and cost overrun in the construction industry. The paper adopts a desktop methodological approach, which involves comparing the causes of material waste and those of cost overruns from the literature to determine the possible relationship. The review revealed a relationship between material waste and cost overrun: an increase in material waste results in a corresponding increase in the amount of cost overrun at both the pre-contract and the post-contract stages of a project. It was found from the equation that achieving effective construction material waste management requires ensuring a “Good Quality of Planning, Estimating and Design Management” and a “Good Quality of Construction, Procurement and Site Management”, together with a decrease in “Design Complexity”, which would reduce “Material Waste” and subsequently reduce the amount of cost overrun by 86.74%. The conceptual framework and the mathematical equation developed in this study are recommended to professionals in the construction industry.

Keywords: Conceptual framework, cost overrun, material waste, project stages.

155 Geochemistry of Cenozoic Basaltic Rocks around Liuhe National Geopark, Jiangsu Province, Eastern China: Petrogenesis and Mantle Source

Authors: Yung-Tan Lee, Ren-Yi Huang, Ju-Chin Chen, Jyh-Yi Shih, Meng-Lung Lin, Hsiao-Ling Yu, Yen-Tsui Hu, Chih-Cheng Chen

Abstract:

Cenozoic basalts found in Jiangsu province of eastern China include tholeiites and alkali basalts. The present paper analyzed the major elements, trace elements and rare earth elements of these Cenozoic basalts and, combined with the Sr-Nd isotopic compositions reported by Chen et al. (1990) [1], discusses the petrogenesis of these basalts and the geochemical characteristics of their mantle source. Based on the major and trace elements and the fractional crystallization model established by Brooks and Nielsen (1982) [2], we suggest that the basaltic magma has experienced olivine + clinopyroxene fractionation during its evolution. The chemical compositions of the basaltic rocks from Jiangsu province indicate that these basalts may belong to the same magmatic system. Spidergrams reveal that the Cenozoic basalts from Jiangsu province have geochemical characteristics similar to those of ocean island basalts (OIB). The slight positive Nb and Ti anomalies found in the basaltic rocks of this study suggest the presence of Ti-bearing minerals in the mantle source, and these Ti-bearing minerals contributed to the basaltic magma during partial melting, indicating that a metasomatic event might have occurred before the partial melting. Based on the Sr vs. Nd isotopic ratio plots, we suggest that the Jiangsu basalts may be derived from partial melting of a mantle source that represents mixing of two end members, DMM and EM-I. Some Jiangsu basaltic magma may be derived from partial melting of EM-I heated by the upwelling asthenospheric mantle or by asthenospheric diapirism.

Keywords: Geochemistry, Jiangsu Province, Cenozoic basalts, Fractional crystallization.

154 Minimization of Non-Productive Time during 2.5D Milling

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

In modern manufacturing systems, the use of thermal cutting techniques such as oxyfuel, plasma and laser cutting has become indispensable for the shape forming of high-quality complex components; however, conventional chip-removal production techniques still have their widespread place in the manufacturing industry. Both types of machining operation require positioning the end-effector tool at the edge where the cutting process commences. This repositioning of the cutting tool is repeated several times in every machining operation and is termed non-productive time or air-time motion. Minimization of this non-productive machining time plays an important role in mass production with high-speed machining. As the tool moves from one region to the other by rapid movement and visits each region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, the problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm approach has been applied to solve it. For improving the efficiency of the algorithm, the GA has been hybridized with a novel special heuristic and simulated annealing (SA). In the present work, a novel heuristic combined with the GA has been developed for synchronization of toolpath movements during repositioning of the tool. A comparative analysis of the new metaheuristic techniques with the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm for minimization of non-productive toolpath length. Also, the results obtained with the hybrid simulated annealing genetic algorithm (HSAGA) are found to be better than the results using the simple genetic algorithm alone.
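
The repositioning problem reduces to ordering the machining start points so that the total rapid-traverse (air) travel is short, i.e. a TSP over the feature entry points. The sketch below applies plain simulated annealing with 2-opt moves to randomly placed points; it is a simplified stand-in for the hybrid GA-SA (HSAGA) described above, with all coordinates and cooling parameters invented.

```python
# Sketch: simulated annealing over the visiting order of feature start points
# to shorten non-productive (air-time) tool travel. Simplified stand-in for
# the hybrid GA-SA; coordinates and parameters are illustrative.
import math
import random

random.seed(4)
points = [(random.uniform(0, 500), random.uniform(0, 300)) for _ in range(25)]

def tour_length(order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(points)))
best = order[:]
temp, cooling = 500.0, 0.995

while temp > 0.1:
    i, j = sorted(random.sample(range(len(points)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt move
    delta = tour_length(candidate) - tour_length(order)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = candidate
        if tour_length(order) < tour_length(best):
            best = order[:]
    temp *= cooling

print(f"air-travel length of best tour: {tour_length(best):.1f} mm")
```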

Keywords: Non-productive time, Airtime, 2.5 D milling, Laser cutting, Metaheuristic, Genetic Algorithm, Simulated Annealing.

153 Thermal and Electrical Properties of Carbon Nanotubes Purified by Acid Digestion

Authors: Neslihan Yuca, Nilgün Karatepe, Fahrettin Yakuphanoğlu

Abstract:

Carbon nanotubes (CNTs) possess unique structural, mechanical, thermal and electronic properties, and have been proposed to be used for applications in many fields. However, to reach the full potential of the CNTs, many problems still need to be solved, including the development of an easy and effective purification procedure, since synthesized CNTs contain impurities, such as amorphous carbon, carbon nanoparticles and metal particles. Different purification methods yield different CNT characteristics and may be suitable for the production of different types of CNTs. In this study, the effect of different purification chemicals on carbon nanotube quality was investigated. CNTs were firstly synthesized by chemical vapor deposition (CVD) of acetylene (C2H2) on a magnesium oxide (MgO) powder impregnated with an iron nitrate (Fe(NO3)3·9H2O) solution. The synthesis parameters were selected as: the synthesis temperature of 800°C, the iron content in the precursor of 5% and the synthesis time of 30 min. The liquid phase oxidation method was applied for the purification of the synthesized CNT materials. Three different acid chemicals (HNO3, H2SO4, and HCl) were used in the removal of the metal catalysts from the synthesized CNT material to investigate the possible effects of each acid solution to the purification step. Purification experiments were carried out at two different temperatures (75 and 120 °C), two different acid concentrations (3 and 6 M) and for three different time intervals (6, 8 and 15 h). A 30% H2O2 : 3M HCl (1:1 v%) solution was also used in the purification step to remove both the metal catalysts and the amorphous carbon. The purifications using this solution were performed at the temperature of 75°C for 8 hours. Purification efficiencies at different conditions were evaluated by thermogravimetric analysis. Thermal and electrical properties of CNTs were also determined. It was found that the obtained electrical conductivity values for the carbon nanotubes were typical for organic semiconductor materials and thermal stabilities were changed depending on the purification chemicals.

Keywords: Carbon nanotubes, purification, acid digestion, thermal stability, electrical conductivity.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2374
152 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer Aljohani

Abstract:

The spread of the COVID-19 virus has caused one of the most extreme pandemics across the globe. The disease, caused by a coronavirus, is contagious and continuously mutates into numerous variants; currently, the B.1.1.529 variant, labeled Omicron, has been detected in South Africa. The huge spread of COVID-19 has affected many lives and has placed exceptional pressure on healthcare systems worldwide, while everyday life and the global economy have also been at stake. The large number of COVID-19 cases has placed a heavy burden on hospitals and health workers. To reduce this burden, this paper predicts COVID-19 disease from the symptoms and medical history of the patient. As machine learning (ML) is a widely accepted field that gives promising results for healthcare, this research presents an architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. A standard University of California Irvine (UCI) dataset comprising the symptoms of 5434 patients is used for predicting COVID-19 disease. Several supervised ML techniques are compared on the presented architecture, which also employs a 10-fold cross-validation process for generalization and Principal Component Analysis (PCA) for feature reduction. Standard metrics are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC). The results show that the decision tree, random forest and neural network classifiers outperform the other state-of-the-art ML techniques considered. These results can be used to effectively identify COVID-19 infection cases.
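A minimal sketch of this kind of evaluation pipeline is shown below, assuming scikit-learn and a synthetic binary dataset as a stand-in for the UCI symptom data; the classifier settings and PCA component count are illustrative, not the paper's configuration.

```python
# Hypothetical sketch of the evaluation pipeline: PCA feature reduction followed by
# several supervised classifiers scored with 10-fold cross-validation.
# A synthetic binary dataset stands in for the UCI symptom data used in the paper.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5434, n_features=20, n_informative=8,
                           random_state=0)   # placeholder for the symptom dataset

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

for name, clf in models.items():
    pipe = Pipeline([("scale", StandardScaler()),
                     ("pca", PCA(n_components=10)),   # feature dimensionality reduction
                     ("clf", clf)])
    scores = cross_validate(pipe, X, y, cv=cv, scoring=scoring)
    print(name, {m: round(scores["test_" + m].mean(), 3) for m in scoring})
```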

Keywords: Supervised machine learning, COVID-19 prediction, healthcare analytics, Random Forest, Neural Network.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 345
151 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller

Authors: P. Valsalal, S. Thangalakshmi

Abstract:

The electrical power industry is undergoing a universal changeover from the old monopolistic structure towards a horizontally distributed competitive structure to meet the demand of rising consumption. When the transmission lines of a deregulated system cannot accommodate all service needs, the lines become overloaded or congested. An Independent System Operator (ISO) acts as the intermediary between customers and power producers to relieve congestion without violating transmission line limits. Among the existing approaches for congestion management, the most frequently used are generation rescheduling and load curtailment. There is, however, a limit to rescheduling the generators, and further load cannot be served with the prevailing resources unless more private power producers are added to the system at considerably higher cost. Hence, congestion is relieved by appropriate Flexible AC Transmission System (FACTS) devices, which boost the existing transfer capacity of transmission lines. The FACTS device chosen here is the Unified Power Flow Controller (UPFC); its correct placement is vital, and it should be positioned in the most congested line. The weak line is therefore identified using a power flow performance index within a new objective function solved by the proposed hybrid fish-bee algorithm. Placing the UPFC in the appropriate line reduces branch loading and minimizes voltage deviation. The power transfer capacity of the lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus system, and the simulation results are compared with those of existing algorithms. It is observed that the transfer capacity of the existing line increases with the presented algorithm, thus alleviating the congestion.
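For illustration only, the sketch below computes a commonly used real-power line-loading performance index of the form PI = Σ (w/2n)·(P/Pmax)^(2n) and ranks lines by their contribution; the exact index, weights and flow values used in the paper are not reproduced here, so the numbers are assumptions.

```python
# Illustrative sketch (not the authors' code): a real-power line-loading performance
# index of the form PI = sum_l (w_l / 2n) * (P_l / P_l_max)^(2n), often used to rank
# congested lines before placing a FACTS device such as a UPFC.
import numpy as np

# Assumed line flows (MW) and thermal limits (MW) for a few lines of a test system.
line_ids   = np.array([1, 2, 3, 4, 5])
P_flow     = np.array([118.0, 64.0, 91.0, 30.0, 142.0])
P_limit    = np.array([130.0, 65.0, 90.0, 80.0, 150.0])
weights    = np.ones_like(P_flow)   # equal weighting assumed
n_exponent = 2                      # PI exponent, 2n = 4

loading = P_flow / P_limit
PI_terms = (weights / (2 * n_exponent)) * loading ** (2 * n_exponent)

# Rank lines from most to least stressed; the top-ranked line is the UPFC candidate.
order = np.argsort(-PI_terms)
for lid, load, term in zip(line_ids[order], loading[order], PI_terms[order]):
    print(f"line {lid}: loading = {load:.2f}, PI contribution = {term:.3f}")
print("total PI:", PI_terms.sum().round(3))
```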

Keywords: Available line transfer capability, congestion management, FACTS device, hybrid fish-bee algorithm, ISO, UPFC.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1557
150 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur

Abstract:

Sign Language (SL) is used by deaf people and by others who cannot speak but can hear, or who have difficulty with spoken languages due to a disability. It is a visual gesture language that uses one or both hands, the arms, the face and the body to convey meanings and thoughts. An SL automation system provides an effective computer-based interface for communicating with hearing people. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This work also reviews the SL corpora that have become available for various sign languages over the years. An ISL generation system requires a written form of SL, and several techniques are available for writing SL; the system uses the Hamburg Sign Language Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with the words or phrases of a spoken language, and it provides an admin-panel interface to manage the dictionary, i.e., to add, modify or delete words. Through this interface, HamNoSys notations can be created and stored in the database, and each notation can be converted manually into its corresponding SiGML file. The system takes a natural-language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors and transportation services. This work will help researchers understand the various techniques used for writing SL and for building sign language generation systems.
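The data model behind such a dictionary might look roughly like the following Python sketch; the paper's system is written in PHP with a database backend, so the structure, field names and placeholder HamNoSys strings here are assumptions for illustration only.

```python
# Minimal sketch (assumed structure, not the authors' PHP implementation) of a
# multilingual sign dictionary that maps English and Hindi words to a HamNoSys
# notation string and the file name of its hand-authored SiGML animation.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SignEntry:
    hamnosys: str                 # HamNoSys notation string (placeholder values below)
    sigml_file: str               # manually converted SiGML file for the avatar player
    glosses: Dict[str, str] = field(default_factory=dict)  # language code -> word

dictionary: Dict[str, SignEntry] = {}

def add_word(key: str, entry: SignEntry) -> None:
    """Admin operation: add or overwrite a dictionary entry."""
    dictionary[key] = entry

def delete_word(key: str) -> None:
    """Admin operation: remove an entry if it exists."""
    dictionary.pop(key, None)

def lookup(word: str, lang: str) -> Optional[SignEntry]:
    """Find the sign whose gloss in the given language matches the input word."""
    for entry in dictionary.values():
        if entry.glosses.get(lang, "").lower() == word.lower():
            return entry
    return None

# Example entry with placeholder notation -- real HamNoSys uses its own symbol set.
add_word("HELLO", SignEntry(hamnosys="<hamnosys for HELLO>",
                            sigml_file="hello.sigml",
                            glosses={"en": "hello", "hi": "नमस्ते"}))

match = lookup("hello", "en")
print(match.sigml_file if match else "sign not found")
```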

Keywords: Avatar, dictionary, HamNoSys, hearing-impaired, Indian Sign Language, sign language.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1303
149 Synthesis of Highly Sensitive Molecular Imprinted Sensor for Selective Determination of Doxycycline in Honey Samples

Authors: Nadia El Alami El Hassani, Soukaina Motia, Benachir Bouchikhi, Nezha El Bari

Abstract:

Doxycycline (DXy) is a tetracycline-class antibiotic, most frequently prescribed to treat bacterial infections in veterinary medicine. However, its broad antimicrobial activity and low cost lead to intensive use, which can seriously affect human health; therefore, its presence in food products has to be monitored. The scope of this work was to synthesize a sensitive and highly selective molecularly imprinted polymer (MIP) for DXy detection in honey samples. Firstly, the biosensor was prepared by casting a layer of carboxylated polyvinyl chloride (PVC-COOH) on the working surface of a gold screen-printed electrode (Au-SPE) in order to covalently bind the analyte under mild conditions. Secondly, DXy, as the template molecule, was bound to the activated carboxylic groups, and the MIP was formed from a biocompatible polyacrylamide matrix. DXy was then detected by differential pulse voltammetry (DPV) measurements. A non-imprinted polymer (NIP), prepared under the same conditions but without the template molecule, was used as a control. The elaborated biosensor exhibits high sensitivity and a linear relationship between the regenerated current and the logarithm of the DXy concentration from 0.1 pg mL−1 to 1000 pg mL−1. The technique was successfully applied to determine DXy residues in honey samples, with a limit of detection (LOD) of 0.1 pg mL−1 and excellent selectivity when compared with the response to oxytetracycline (OXy) as an analogous interfering compound. The proposed method is cheap, sensitive, selective and simple, and it detects DXy in honey with recoveries of 87% and 95%. Considering these advantages, this system offers a further perspective for food quality control in industrial fields.
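Purely as an illustration of the reported log-linear response (the current values below are made-up placeholders, not the paper's data), a calibration of DPV peak current against log10 of the DXy concentration, and its inversion for an unknown sample, could look like this:

```python
# Illustrative calibration sketch with fabricated example numbers (not the paper's data):
# a linear fit of DPV peak current versus log10(DXy concentration), the kind of
# relationship reported for the MIP sensor, and its use to estimate an unknown sample.
import numpy as np

conc_pg_ml = np.array([0.1, 1, 10, 100, 1000])          # standard concentrations (assumed)
peak_current_uA = np.array([2.1, 3.0, 3.9, 5.1, 6.0])   # assumed DPV responses

x = np.log10(conc_pg_ml)
slope, intercept = np.polyfit(x, peak_current_uA, 1)     # linear fit on the log scale

# Estimate the concentration of an unknown honey extract from its measured current.
i_unknown = 4.4
conc_unknown = 10 ** ((i_unknown - intercept) / slope)

print(f"calibration: i = {slope:.2f} * log10(c) + {intercept:.2f}")
print(f"estimated DXy concentration: {conc_unknown:.1f} pg/mL")
```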

Keywords: Electrochemical sensor, molecular imprinted polymer, doxycycline, food control.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1155
148 Designing a Fuzzy Logic Controller to Enhance Directional Stability of Vehicles under Difficult Maneuvers

Authors: Mehrdad N. Khajavi , Golamhassan Paygane, Ali Hakima

Abstract:

Vehicles that are turning or maneuvering at high speed are susceptible to sliding and consequently deviate from the desired path. In this paper, the dynamics governing the yaw/roll behavior of a vehicle have been simulated. Two different simulations have been used: one for the real vehicle, for which a fuzzy controller is designed to increase its directional stability. The other simulation is for a hypothetical vehicle with much higher tire cornering stiffness, capable of developing the required lateral forces at the tire-ground contact patch to attain the lateral acceleration needed for the vehicle to follow the desired path without slippage; this simulation model is our reference model. The logic for keeping the vehicle on the desired track while cornering or maneuvering is to apply braking forces to the inner or outer tires, depending on the direction of the vehicle's deviation from the desired path. The inputs to the vehicle simulation model are the steer angle δ and the vehicle velocity V, and the outputs can be any kinematic parameters such as yaw rate, yaw acceleration, side-slip angle and rate of side-slip angle. The proposed fuzzy controller is a feed-forward controller with two inputs, the steer angle δ and the vehicle velocity V, and one output, the correcting moment M, which guides the vehicle back to the desired track. To develop the membership functions for the controller inputs and output and the fuzzy rules, the vehicle simulation was run 1000 times and the correcting moment was determined by trial and error. Results of the vehicle simulation with the fuzzy controller are very promising and show that vehicle performance is greatly enhanced over the vehicle without the controller; in fact, performance with the controller is very close to that of the ideal reference model.
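A toy Mamdani-style controller with the same input/output signature (steer angle and speed in, correcting moment out) is sketched below; the membership ranges, rule table and output singletons are invented for illustration and are not the values tuned in the paper.

```python
# Minimal Mamdani-style sketch of a two-input / one-output fuzzy controller.
# All membership functions, rules and output levels are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzy sets (assumed universes): steer angle [deg], speed [m/s], moment [N*m].
steer_sets = {"small": (-4, 0, 4),   "medium": (2, 5, 8),    "large": (6, 10, 14)}
speed_sets = {"low":   (-15, 0, 15), "mid":    (10, 20, 30), "high":  (25, 40, 55)}
moment_pts = {"small": 500.0, "medium": 2000.0, "large": 4000.0}  # singleton outputs

# Rule table: (steer set, speed set) -> output set. Larger steer at higher speed
# calls for a larger correcting moment.
rules = {
    ("small", "low"): "small",  ("small", "mid"): "small",   ("small", "high"): "medium",
    ("medium", "low"): "small", ("medium", "mid"): "medium", ("medium", "high"): "large",
    ("large", "low"): "medium", ("large", "mid"): "large",   ("large", "high"): "large",
}

def correcting_moment(steer_deg, speed_ms):
    """Fuzzify inputs, fire the rules with min, defuzzify with a weighted average."""
    num, den = 0.0, 0.0
    for (s_set, v_set), m_set in rules.items():
        w = min(tri(steer_deg, *steer_sets[s_set]), tri(speed_ms, *speed_sets[v_set]))
        num += w * moment_pts[m_set]
        den += w
    return num / den if den > 0 else 0.0

print(f"M = {correcting_moment(7.0, 28.0):.0f} N*m at 7 deg steer, 28 m/s")
```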

Keywords: Vehicle, Directional Stability, Fuzzy Logic Controller, ANFIS.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1496
147 High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis

Authors: A. Ghanbari Mardasi, N. Wu, C. Wu

Abstract:

In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity, and a windowing function is employed to suppress the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and a vibration analysis considering the crack effect are first developed and performed in MATLAB based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect and thereby locate the crack. A relative wavelet coefficient is obtained for the sensitivity analysis by comparing the coefficient values at different positions along the beam with the lowest value in the intact region of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks of different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to the vibration mode shapes in order to overcome the edge effect of the wavelet transformation and its influence on the localization of cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and original mode shapes demonstrates that the window function eases the identification of cracks close to the boundaries.
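A rough numerical sketch of this idea is given below, assuming a synthetic first mode shape with a small local perturbation standing in for the crack; the scales, window and perturbation size are illustrative and do not reproduce the paper's Timoshenko-beam results.

```python
# Rough sketch (assumed parameters, not the authors' MATLAB code) of spatial-wavelet
# crack localization: a Hanning window suppresses edge effects, and a Gabor wavelet
# (Gaussian-modulated cosine) is convolved with a mode shape carrying a small local
# perturbation that stands in for the crack signature.
import numpy as np

n = 500
x = np.linspace(0.0, 1.0, n)                  # normalized beam coordinate
mode = np.sin(np.pi * x)                      # first mode of a simply supported beam
crack_pos = 0.35
mode[int(crack_pos * n)] += 2e-3              # small local distortion at the "crack"

windowed = mode * np.hanning(n)               # taper ends to reduce edge effects

def gabor_kernel(scale, width=4.0, samples_per_unit=n):
    """Real Gabor wavelet: Gaussian envelope times a cosine carrier of period `scale`."""
    t = np.linspace(-width * scale, width * scale,
                    int(2 * width * scale * samples_per_unit) + 1)
    return np.exp(-(t / scale) ** 2) * np.cos(2.0 * np.pi * t / scale)

def spatial_wavelet(signal, scale):
    return np.convolve(signal, gabor_kernel(scale), mode="same")

# Scan a few scales; for sufficiently fine scales the largest coefficient magnitude
# (edges excluded) is expected to sit near the simulated crack position.
for scale in (0.01, 0.015, 0.02):
    coeffs = np.abs(spatial_wavelet(windowed, scale))
    interior = coeffs[n // 20 : -n // 20]
    peak_x = x[np.argmax(interior) + n // 20]
    print(f"scale {scale:.3f}: peak |coefficient| near x = {peak_x:.2f}")
```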

Keywords: Edge effect, scale optimization, small crack locating, spatial wavelet.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 933