Search results for: model based clustering
36798 Switched Uses of a Bidirectional Microphone as a Microphone and Sensors with High Gain and Wide Frequency Range
Authors: Toru Shionoya, Yosuke Kurihara, Takashi Kaburagi, Kajiro Watanabe
Abstract:
Mass-produced bidirectional microphones have attractive characteristics: they work as microphones as well as sensors with high gain over a wide frequency range, and they are also highly reliable and economical. We present novel multi-functional uses of these microphones. A mathematical model explaining the high-pass-filtering characteristics of bidirectional microphones is presented. Based on the model, the characteristics of the microphone are investigated, and a novel use of the microphone as a sensor with a wide frequency range is proposed. Applications of the microphone as a security sensor and as a human biosensor are introduced. The mathematical model is validated through experiments, and the feasibility of the abovementioned applications for security monitoring and biosignal monitoring is examined experimentally.
Keywords: bidirectional microphone, low-frequency, mathematical model, frequency response
Procedia PDF Downloads 544
36797 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks can collect and transfer data efficiently by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture that manages UAVs with an AI-based resource allocation calculation algorithm to address network overloading. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In this cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational (RAM) and permanent (ROM) memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held in the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles
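The k-means step described in the abstract lends itself to a short sketch. The following is a minimal illustration, not the authors' implementation: user-device coordinates are clustered with a plain k-means (deterministic farthest-point initialization), and any cluster whose device count exceeds a load threshold is flagged as a candidate region for deploying a UAV cluster. The function names and the count-based load metric are assumptions for illustration.

```python
import numpy as np

def init_centroids(points, k):
    # Deterministic farthest-point initialization: start from the first
    # point, then repeatedly add the point farthest from all chosen centroids.
    cents = [points[0].astype(float)]
    for _ in range(1, k):
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in cents], axis=0)
        cents.append(points[dists.argmax()].astype(float))
    return np.array(cents)

def kmeans(points, k, iters=50):
    # Plain Lloyd iterations; returns (centroids, labels).
    centroids = init_centroids(points, k)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def high_load_regions(device_xy, k, threshold):
    # Flag clusters whose device count exceeds `threshold` (assumed load
    # metric) as candidate areas for moving a UAV cluster.
    centroids, labels = kmeans(device_xy, k)
    counts = np.bincount(labels, minlength=k)
    return [(tuple(centroids[j]), int(counts[j]))
            for j in range(k) if counts[j] > threshold]
```

In a real deployment the load metric would also weigh channel occupancy and request rates, not just device counts.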
Procedia PDF Downloads 111
36796 Meaning beyond Pleasure in Leisure: Comparison between Korea and France
Authors: Joane Adeclas, Yoonyoung Kim, Taekyun Hur
Abstract:
This study investigates individuals' intrinsic motivation to practice their leisure activities, as well as how the cultural environment may influence that motivation. Drawing on positive psychology, the present study proposes a redefinition of leisure activities based on two factors. First, leisure activities can be any activities that provide pleasure or meaning to individuals. Second, they can be practiced alone or in groups. Based on this definition, a four-dimensional model of leisure activities was developed to measure individuals' perception of their leisure experience along four factors: personal pleasure, social pleasure, personal meaning and social meaning. Furthermore, recent studies have argued that leisure activities can be interpreted and understood differently across cultures. Therefore, the present study examines the possible role of the cultural context in individuals' leisure practices. To do so, two cultural groups (Koreans vs. French) were compared in terms of the four-dimensional model of leisure activities. Three hundred Korean and three hundred French participants were asked to answer an online survey about their leisure activities. Participants responded to questions on several aspects of leisure practice: why they practice their leisure activities, why they fail to practice them, and their obsession with them. Factor analyses based on participants' responses showed a moderate fit of the four-dimensional model of leisure activities. Furthermore, significant cultural differences were found: the cultural context seems to influence the reasons why individuals practice their leisure activities. Koreans, more than the French, explained the practice of their leisure activities with social-pleasurable reasons, whereas the French, more than Koreans, explained it with social-meaningful reasons. The two cultural groups also differed significantly in their perception of failure: French participants used meaningful social factors to explain why they failed to practice their leisure activities more than Korean participants did. Finally, Koreans and the French differed significantly in their obsession with their leisure activities, with the French tending to be more obsessive than Koreans. These results validate the four-dimensional model of leisure as well as the cultural differences in leisure practices; however, further studies are needed to validate the model at the individual and cultural levels.
Keywords: culture, leisure, meaning, pleasure
Procedia PDF Downloads 263
36795 A Model of Applied Psychology Research Defining Community Participation and Collective Identity as a Major Asset for Strategic Planning and Political Decision: The Project SIA (Social Inclusion through Accessibility)
Authors: Rui Serôdio, Alexandra Serra, José Albino Lima, Luísa Catita, Paula Lopes
Abstract:
We present the outline of the Project SIA (Social Inclusion through Accessibility), focusing on one of its core components: how our applied research model contributes to defining community participation as a pillar of the strategic and political agenda amongst local authorities. Project SIA, supported by EU regional funding, was designed as part of a broader model developed by SIMLab (Social Inclusion Monitoring Laboratory), in which the University-Community relation is a core element. The project illustrates how the University of Porto developed a large-scale applied psychology research project in close partnership with 18 municipalities covering almost all regions of Portugal, and with a private architecture firm specialized in inclusive accessibility and "design for all". Three fundamental goals were defined: (1) creation of a model that would promote the effective civic participation of local citizens; (2) the "voice" of such participation should be both individual and collective; (3) the scientific and technical framework should serve as one of the bases for political decisions on inclusive accessibility local planning. The two main studies were run with a standardized model across all municipalities, and the samples for the three modalities of community participation were as follows: individual participation based on 543 semi-structured interviews and 6373 inquiries; collective participation based on group sessions with 302 local citizens. We present some of the broader findings of Project SIA and discuss how they relate to our applied research model.
Keywords: applied psychology, collective identity, community participation, inclusive accessibility
Procedia PDF Downloads 447
36794 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical method that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, so that it can complement physical model testing capabilities and support research needs for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were mostly within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
Procedia PDF Downloads 132
36793 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting
Authors: Abhijeet Ostawal, Parmjit Lall
Abstract:
The run times of a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs and a loss in productivity. Software developers continuously work towards developing more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models that are based on much older versions of the software and lack the latest capabilities. The multi-modal transport model we had could only run its assignments in sequential order. Recent upgrades to the software now allow the assignment to be run in parallel, a concept called parallelization. Parallelization is enabled by a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget and the potential changes in trip assignment. This article shows a method for adapting and updating the Python script so that it can be used with older software versions, by calling the latest version and then recalling the old version for the assignment model without affecting the results. Through a process of trial and error, run-time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied the methodology to other, even older versions of the software, resulting in large time savings and more productivity and efficiency for both client and consultant.
Keywords: model run time, demand model, parallelisation, python scripting
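The scheduling idea, running independent assignment steps concurrently while keeping legacy steps sequential, can be sketched generically. The snippet below is a hypothetical illustration only: the transport-modeling software and its command lines are not named in the abstract, so the runner is injectable and the command strings are placeholders.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_step(cmd):
    # Default runner: launch one assignment step as an external process.
    # `cmd` is a placeholder command line for the modeling software.
    return subprocess.run(cmd, shell=True, check=True).returncode

def run_assignments(parallel_steps, sequential_steps, runner=run_step, workers=4):
    # Run the parallelisable assignments concurrently (new software version),
    # then run the remaining legacy steps sequentially, in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(runner, parallel_steps))
    for step in sequential_steps:
        results.append(runner(step))
    return results
```

Because the runner is a plain callable, the same orchestration can wrap either software version, which is the crux of the mixed-version workflow the abstract describes.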
Procedia PDF Downloads 118
36792 Green IT-Outsourcing Assurance Model for IT-Outsourcing Vendors
Authors: Siffat Ullah Khan, Rahmat Ullah Khan, Rafiq Ahmad Khan, Habibullah Khan
Abstract:
Green IT, or green computing, has emerged as a fast-growing business paradigm in recent years, aiming to develop energy-efficient software and peripheral devices. With the constant evolution of technology and the critical state of the world's environment, all private and public information technology (IT) businesses are moving towards sustainability. Through a systematic literature review and a questionnaire survey, we identified 9 motivators in total faced by vendors in IT-outsourcing relationships. Amongst these motivators, 7 were ranked as critical. We also identified 21 practices in total for addressing these critical motivators. Based on these inputs, we have developed the Green IT-Outsourcing Assurance Model (GITAM) for IT-outsourcing vendors. The model comprises four levels, i.e. Initial, White, Green and Grey, each comprising different critical motivators and their relevant practices. We conclude that our model, GITAM, will assist IT-outsourcing vendors in gauging their level in order to manage IT-outsourcing activities in a green and sustainable fashion, to help the environment and to reduce carbon emissions. The model will assist vendors in improving their current level by suggesting various practices, and it will contribute to the body of knowledge in the field of Green IT.
Keywords: Green IT-Outsourcing Assurance Model (GITAM), systematic literature review, empirical study, case study
Procedia PDF Downloads 252
36791 Local Government Digital Attention and Green Technology Innovation: Analysis Based on Spatial Durbin Model
Authors: Xin Wang, Chaoqun Ma, Zheng Yao
Abstract:
Although green technology innovation faces new opportunities and challenges in the digital era, its theoretical research remains limited. Drawing on the attention-based view, this study employs the spatial Durbin model to investigate the impact of local government digital attention and digital industrial agglomeration on green technology innovation across 30 Chinese provinces from 2011 to 2021, as well as the spatial spillover effects present. The results suggest that both government digital attention and digital industrial agglomeration positively influence green technology innovation in local and neighboring provinces, with digital industrial agglomeration exhibiting a positive moderating effect on this direct local and indirect spatial spillover relationship. The findings of this study provide a new theoretical perspective for green technology innovation research and hold valuable implications for the advancement of the attention-based view and green technology innovation.
Keywords: local government digital attention, digital industrial agglomeration, green technology innovation, attention-based view
Procedia PDF Downloads 69
36790 The Adoption of Technological Innovations in a B2C Context: An Empirical Study on the Higher Education Industry in Egypt
Authors: Maha Mourad, Rania Samir
Abstract:
This paper seeks to explain the adoption of technological innovations in a business-to-consumer context. Specifically, the focus of this study is the use of web-based technology (WebCT/Blackboard) in the delivery of educational material and communication with students at universities in Egypt. The analysis draws on existing research in a B2C context which highlights the importance of internal organization characteristics, perceived attributes of the innovation and consumer-based factors as the main drivers of adoption. A distinctive B2C model is developed, drawing on Rogers' innovation adoption model as well as theoretical and empirical foundations in previous innovation adoption literature, to study the adoption of technological innovations in higher education in Egypt. The model proposes that the adoption decision depends on a combination of perceived attributes of the innovation, inter-organization factors and consumer factors. The model is tested against the results of empirical work in the form of a large survey conducted on students in three different universities in Egypt (one public, one private and one international). In addition to the attributes of the innovation, specific organization factors (such as university resources) as well as consumer factors were identified as likely to have an important influence on the adoption of technological innovations in higher education.
Keywords: innovation, WEBCT, higher education, adoption, Egypt
Procedia PDF Downloads 547
36789 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches have emerged; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model is examined using real and simulated data, and we establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; perhaps this model will reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
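As a concrete illustration of the classical side of this pipeline, here is a minimal scalar Kalman filter for an AR(1)-type state observed in noise. This is a generic textbook sketch, not the authors' QTS estimator; the state-space form x_t = phi*x_{t-1} + w_t, y_t = x_t + v_t and all parameter values are assumptions for illustration.

```python
import numpy as np

def kalman_ar1(y, phi, q, r, x0=0.0, p0=1.0):
    # Scalar Kalman filter for the state-space model
    #   x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)   (AR(1) state dynamics)
    #   y_t = x_t + v_t,            v_t ~ N(0, r)   (noisy observation)
    # Returns the filtered state means E[x_t | y_1..y_t].
    x, p = x0, p0
    filtered = []
    for obs in y:
        # Predict: propagate mean and variance through the AR dynamics.
        x = phi * x
        p = phi * phi * p + q
        # Update: blend prediction and observation by the Kalman gain.
        gain = p / (p + r)
        x = x + gain * (obs - x)
        p = (1.0 - gain) * p
        filtered.append(x)
    return np.array(filtered)
```

With small observation noise r the filter tracks the data closely; with small process noise q it smooths heavily, which is the usual trade-off exploited when estimating model parameters by maximizing the likelihood of the innovations.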
Procedia PDF Downloads 469
36788 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition
Authors: Yalong Jiang, Zheru Chi
Abstract:
In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model so that it is better fitted to a task. Secondly, the number of independent functional units in the capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variance in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.
Keywords: CNN, convolutional neural network, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation
Procedia PDF Downloads 153
36787 [Keynote Talk]: The Challenges and Solutions for Developing Mobile Apps in a Small University
Authors: Greg Turner, Bin Lu, Cheer-Sun Yang
Abstract:
As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, a mobile app for the PA Common Trees, Pests and Pathogens used in the field as a reference tool allows middle school students to learn about trees and associated pests and pathogens without carrying a textbook. In the past, some researchers have studied the Mobile Application Software Development Life Cycle (MADLC), including traditional models such as the waterfall model or the more recent agile methods; others have studied issues related to the software development process. Very little research addresses the simultaneous development of three heterogeneous mobile systems in a small university where the availability of developers is an issue. In this paper, we propose a hybrid of the waterfall model and the agile model, known in practice as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying in scheduling. Based on the development project, we observe that the modeling of the transition between any two phases is manifested naturally. Thus, we claim that the RRM model can provide a de facto rather than a de jure basis for the core concept of the MADLC. In this paper, the background of the project is introduced first; the challenges are then pointed out, followed by our solutions. Finally, the lessons learned and future work are presented.
Keywords: agile methods, mobile apps, software process model, waterfall model
Procedia PDF Downloads 409
36786 Numerical Simulations of the Transition Flow of Model Propellers for Predicting Open Water Performance
Authors: Huilan Yao, Huaixin Zhang
Abstract:
Simulations of the transition flow of model propellers are important for predicting hydrodynamic performance and studying scale effects. In this paper, the transition flow of a model propeller under different loadings is simulated using a transition model provided by STAR-CCM+, and the influence of turbulence intensity (TI) on the transition, especially on the friction and pressure components of propeller performance, is studied. Beforehand, the transition model was applied to simulate the transition flow over a flat plate and an airfoil, and the predicted transitions agree well with experimental results. The transition model was then applied to propeller simulations in open water, and the influence of TI was studied. Under heavy and moderate loadings, the thrust and torque of the propeller predicted by the transition model (with different TI) and by two turbulence models are very close and agree well with measurements. Under light loading, however, only the transition model with low TI predicts accurate results. Above all, the friction components of propeller performance predicted by the transition model with different TI show obvious differences.
Keywords: transition flow, model propellers, hydrodynamic performance, numerical simulation
Procedia PDF Downloads 263
36785 Creeping Control Strategy for Direct Shift Gearbox Based on the Investigation of Temperature Variation of the Wet Clutch
Authors: Biao Ma, Jikai Liu, Man Chen, Jianpeng Wu, Liyong Wang, Changsong Zheng
Abstract:
Proposing an appropriate control strategy is an effective and practical way to address the overheating of the wet multi-plate clutch in a Direct Shift Gearbox (DSG) under long-time creeping conditions. To do so, the temperature variation of the wet multi-plate clutch is first investigated by establishing a thermal resistance model for the gearbox cooling system. To calculate the generated heat flux and predict the clutch temperature precisely, the friction torque model is optimized by introducing an improved friction coefficient that depends on the pressure, the relative speed and the temperature. After that, the heat transfer model and the refined friction torque model are incorporated into the vehicle powertrain model to construct a comprehensive co-simulation model for the DSG vehicle. A creeping control strategy is then proposed and, to evaluate the vehicle performance, the safety temperature (250 ℃) is adopted as an important metric. During the creeping process, the temperature of the two clutches always stays below the safety value (250 ℃), which demonstrates the effectiveness of the proposed control strategy in avoiding thermal failure of the clutches.
Keywords: creeping control strategy, direct shift gearbox, temperature variation, wet clutch
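The thermal-resistance idea can be illustrated with a single-node lumped-capacitance sketch. This is a generic stand-in, not the paper's multi-node model: q_in (friction heat), h_a (combined convective conductance h*A to the coolant) and m_c (thermal mass m*c) are assumed symbols, and the explicit-Euler step is the simplest possible integrator.

```python
def clutch_temperature(q_in, h_a, m_c, t_cool, t0, dt, steps):
    # Explicit-Euler integration of the lumped energy balance
    #   m*c * dT/dt = q_in - h*A * (T - T_cool)
    # q_in: friction heat [W]; h_a: h*A [W/K]; m_c: m*c [J/K].
    # Steady state is t_cool + q_in / h_a; time constant is m_c / h_a.
    temp = t0
    history = [temp]
    for _ in range(steps):
        temp += dt * (q_in - h_a * (temp - t_cool)) / m_c
        history.append(temp)
    return history
```

With, say, 1 kW of creeping friction heat and h*A = 10 W/K, the node settles at t_cool + 100 K, which is how a candidate control strategy can be screened against a safety limit such as the 250 ℃ mentioned above before running the full co-simulation.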
Procedia PDF Downloads 133
36784 Distance Protection Performance Analysis
Authors: Abdelsalam Omar
Abstract:
This paper presents a simulation-based case study that indicates the need for accurate dynamic modeling of distance protection relays. In many cases, a static analysis based on current and voltage phasors may be sufficient to assess the performance of distance protection. There are several circumstances, however, under which such a simplified study does not provide the depth of analysis necessary to obtain accurate results. This paper presents a study of the influences of magnetizing inrush and power swing on the performance of a distance protection relay. One type of numerical distance protection relay has been investigated: the 7SA511. The study has been performed to demonstrate the relay response when a dynamic model of the distance relay is utilized.
Keywords: distance protection, magnetizing inrush, power swing, dynamic model of protection relays, simulation
Procedia PDF Downloads 488
36783 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models
Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh
Abstract:
In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), with an error function defined in the mean square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models with noise makes it a challenging optimization task, which is handled by the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices on a sufficiently large number of runs, in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals
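To make the approach concrete, here is a minimal DE/rand/1/bin minimiser of the kind such a framework builds on. This is a generic sketch, not the authors' CDE variant: the control parameters (F, CR, population size) and the sinusoidal cost helper are assumptions for illustration, and a mean-square error for any signal model can be plugged in as the objective `f`.

```python
import numpy as np

def de_minimize(f, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    # DE/rand/1/bin: mutate with a + F*(b - c), binomial crossover, greedy
    # selection; trial vectors are clipped to the box `bounds`.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, (pop, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fx[i]:  # greedy selection keeps the better vector
                X[i], fx[i] = trial, ft
    best = int(fx.argmin())
    return X[best], float(fx[best])

def sinusoid_mse(t, y):
    # Mean-square error of y against A*sin(w*t + p); theta = (A, w, p).
    return lambda theta: float(
        np.mean((y - theta[0] * np.sin(theta[1] * t + theta[2])) ** 2))
```

For a sampled record (t, y), `de_minimize(sinusoid_mse(t, y), [(0.1, 5), (0.1, 5), (0, np.pi)])` would return estimated (A, w, p) together with the residual MSE.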
Procedia PDF Downloads 302
36782 An Analysis of Innovative Cloud Model as Bridging the Gap between Physical and Virtualized Business Environments: The Customer Perspective
Authors: Asim Majeed, Rehan Bhana, Mak Sharma, Rebecca Goode, Nizam Bolia, Mike Lloyd-Williams
Abstract:
This study aims to investigate and explore the underlying causes of the security concerns that customers raised when WHSmith transformed its physical system into a virtualized business model through NetSuite, an integrated software suite that supports this transformation. Modern organisations are moving away from traditional business models to cloud-based models, and customers consequently expect a better, more secure and innovative environment. Security is a vital issue in this transformation, yet designers of interactive systems often misunderstand privacy, or even ignore it, thus causing concerns for users. A content analysis approach is used to collect qualitative data from 120 online bloggers, including TRUSTPILOT. The results and findings provide useful new insights into the nature and form of the security concerns of online users after using the WHSmith services offered online through their website. The findings have theoretical as well as practical implications for the successful adoption of the cloud computing business-to-business model and similar systems.
Keywords: innovation, virtualization, cloud computing, organizational flexibility
Procedia PDF Downloads 384
36781 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas
Authors: Anand Malik
Abstract:
The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another driver of extreme events, with manifold effects on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst together with a breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of fast-moving water created a catastrophe of the century, causing great loss of human and animal life, pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards therefore requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow with maximum run-out distances. The ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is the main geospatial data used for preparing the run-out assessment, while Landsat data are used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in delineating debris flow areas; the results are compared with existing landslide and debris flow maps. ArcGIS software is used to prepare the run-out susceptibility maps, which can be used in debris flow mitigation and future land use planning.
Keywords: debris flow, geospatial data, GIS based modeling, flow-R
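The spreading idea can be illustrated with a toy grid sketch. This is not Flow-R's algorithm (which, per the abstract, combines probabilistic spreading with energetic run-out limits); it is a minimal deterministic stand-in that propagates from a source cell to every strictly lower 8-neighbour, giving a crude reachability footprint on a DEM.

```python
import numpy as np
from collections import deque

def spread_downhill(dem, source):
    # Breadth-first spread over a DEM grid: a cell is reached if any
    # already-reached 8-neighbour is strictly higher than it.
    # Returns a boolean run-out mask of the same shape as `dem`.
    rows, cols = dem.shape
    reached = np.zeros(dem.shape, dtype=bool)
    reached[source] = True
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if ((dr or dc) and 0 <= rr < rows and 0 <= cc < cols
                        and not reached[rr, cc] and dem[rr, cc] < dem[r, c]):
                    reached[rr, cc] = True
                    queue.append((rr, cc))
    return reached
```

A real run-out model would additionally weight flow directions by slope and cap the spread by an energy or travel-angle criterion rather than flooding every descending path.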
Procedia PDF Downloads 273
36780 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community based services strive for the elimination of preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community based services. Methods: Over the course of 2 years we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated. We trained employees to track different levels of patient status within a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smart phone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information that we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Result and conclusions: The use of the COPE model resulted in: A reduction in incidents of harm. A reduction the use of restraints and other physical interventions. An increase in Direct Support Professional’s ability to detect and respond to health problems. Improvement in employee alertness by decreasing sleeping on duty. 
Improvement in caring and positive interaction between Direct Support Professionals and the person who is supported. Development of a method to globally measure and assess the effectiveness of harm-prevention plans. Future applications of the COPE model for the reduction of harm to people who receive home and community based services are discussed.
Keywords: harm, patients, resilience, safety, mental illness, disability
Procedia PDF Downloads 447
36779 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve and connection pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller was used and a laboratory test stand for bringing the chiller to its standard steady-state performance condition was built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. 
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
Keywords: optimization, packaged liquid chiller, performance, simulation
Procedia PDF Downloads 278
36778 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City
Authors: Jothilakshmy Nagammal
Abstract:
This research paper deals with different case studies where Form-Based Codes are adopted in general, and their different implementation methods in particular are discussed to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the Context Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code / smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating a Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan, using the transect as the organizing principle for the entire area of the city in general when formulating the Form-Based Codes for the selected special districts in the study area in particular, on a street basis. 
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process that is based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as an organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. With this approach and method, the creation of the new Context Specific Planning Model towards achieving sustainability is explained in detail in this research paper.
Keywords: context based planning model, form based code, transect, systemic approach
Procedia PDF Downloads 337
36777 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required for analysis of its energy-efficiency performance. In practice, a certain degree of geometric simplification should be done in the establishment of building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest or EnergyPlus. Actually, a detailed description is not necessary when a result with extremely high accuracy is not demanded. Therefore, this paper analyzed the relationship between the error of the simulation result from building energy models and the geometric simplification of the models. Finally, the following two parameters were selected as the indices to characterize the geometric features in building energy simulation: the southward projected area and the total side surface area of the building. Based on the parameterization method, the simplification from an arbitrary column building to a building of typical shape (a cuboid) can be made for energy modeling. The result of this study indicates that this simplification would only lead to an error of less than 7% for those buildings with a ratio of southward projection length to total perimeter of the bottom of 0.25~0.35, which covers most situations.
Keywords: building energy model, simulation, geometric simplification, design, regression
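The two geometric indices selected above can be computed directly for a prismatic (column-shaped) building. The sketch below is an illustrative reading of the abstract, assuming the south facade faces the -y axis so that the southward projection spans the x-extent of the footprint; the function name and interface are hypothetical, not taken from the paper.

```python
import math

def simplification_indices(footprint, height):
    """Return (southward projected area, total side surface area) for a
    prismatic building described by its footprint polygon and height.

    footprint: ordered list of (x, y) vertices in metres.
    Assumes the south facade faces -y, so the southward projection
    spans the building's extent along the x axis.
    """
    xs = [p[0] for p in footprint]
    south_length = max(xs) - min(xs)  # southward projection length
    perimeter = sum(math.dist(footprint[i], footprint[(i + 1) % len(footprint)])
                    for i in range(len(footprint)))
    return south_length * height, perimeter * height
```

For a 10 m x 10 m square footprint, the ratio of southward projection length to bottom perimeter is 10/40 = 0.25, i.e. at the lower end of the 0.25~0.35 band for which the reported simplification error stays below 7%.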
Procedia PDF Downloads 180
36776 Application of Latent Class Analysis and Self-Organizing Maps for the Prediction of Treatment Outcomes for Chronic Fatigue Syndrome
Authors: Ben Clapperton, Daniel Stahl, Kimberley Goldsmith, Trudie Chalder
Abstract:
Chronic fatigue syndrome (CFS) is a condition characterised by chronic disabling fatigue and other symptoms that currently cannot be explained by any underlying medical condition. Although clinical trials support the effectiveness of cognitive behaviour therapy (CBT), the success rate for individual patients is modest. Patients vary in their response, and little is known about which factors predict or moderate treatment outcomes. The aim of the project is to develop a prediction model from baseline characteristics of patients, such as demographic, clinical and psychological variables, which may predict the likely treatment outcome, provide guidance for clinical decision making and help clinicians to recommend the best treatment. The project is aimed at identifying subgroups of patients with similar baseline characteristics that are predictive of treatment effects, using modern cluster analyses and data-mining machine learning algorithms. The characteristics of these groups will then be used to inform the types of individuals who benefit from a specific treatment. In addition, the results will provide a better understanding of for whom the treatment works. The suitability of different clustering methods for identifying subgroups of CFS patients and their response to different treatments is compared.
Keywords: chronic fatigue syndrome, latent class analysis, prediction modelling, self-organizing maps
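As a rough illustration of how a self-organizing map assigns patients with similar baseline profiles to the same unit, here is a minimal 1-D SOM written from scratch; the hyperparameters, function names and toy data are assumptions for the sketch, not details from the study.

```python
import math
import random

def train_som(data, grid_size=3, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a 1-D self-organizing map on a list of feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # initialize each unit's weight vector randomly in [0, 1)
    w = [[rng.random() for _ in range(dim)] for _ in range(grid_size)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-9  # shrinking neighborhood
        for x in data:
            # best-matching unit: closest weight vector
            bmu = min(range(grid_size),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            # pull the BMU and its grid neighbours toward the sample
            for i in range(grid_size):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for k in range(dim):
                    w[i][k] += lr * h * (x[k] - w[i][k])
    return w

def assign(data, w):
    """Map each sample to the index of its best-matching unit."""
    return [min(range(len(w)),
                key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(len(x))))
            for x in data]
```

Patients whose feature vectors land on the same best-matching unit form a candidate subgroup whose treatment outcomes can then be compared.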
Procedia PDF Downloads 226
36775 Ideology versus Faith in the Collective Political Identity Formation: An Analysis of the Thoughts of Iqbal and Jinnah-The Founding Fathers of Pakistan
Authors: Muhammad Sajjad-ur-Rehman
Abstract:
Pakistan was meant to be a progressive modern Muslim nation state from its inception in 1947. Its birth was a great hope for the Muslims of the Subcontinent to transform their societies on Islamic lines: the promise which made them unite and vote for Pakistan during the independence movement. This was the vision put forward by Allama Iqbal and Muhammad Ali Jinnah, the two founding fathers of Pakistan. Drawing on an interpretive/analytical approach, this paper analyzes the thoughts and reflections of Iqbal and Jinnah to understand the issues of collective identity formation in Pakistan. It argues that two distinct identity models may be traced in the thoughts and reflections of these two leading figures of the Pakistan movement: the first may be called the ‘faith-based identity model’, while the other may be named the ‘interests-based identity model’. These can also be titled the ‘Islam-as-faith model’ and the ‘Islam-as-ideology model’. The former seeks the diffusion of power by cultural/faith-based means, and thus society remains independent in determining its change, while the latter opens and expands the power realm by maximizing the role of the state in determining social change. With the help of these models, it can better be explained what made Pakistani society fail in collective political identity construction, thus hindering the political potential of the society from being utilized for initiating state formation and societal growth. As a result, today we see a state that is often rebelled against and resisted in the name of ethnicity, religion and sectarianism on the one hand, and by ordinary folk whenever and wherever possible on the other.
Keywords: ideology, Iqbal, Jinnah, identity
Procedia PDF Downloads 6
36774 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of the air flow into the room is developed. The effect of the air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity. Air supply diffuser locations and numbers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software “Fluent 6.3” is used to solve the differential equations governing the conservation of mass, momentum (three components) and energy in the simulation of the air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters considered in this work, air dry bulb temperature, air velocity, relative humidity and turbulence parameters, are used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work are based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Percentage of People Dissatisfied) model; the PMV and PPD were estimated using Fanger’s model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
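The ADPI criterion behind the thermal comfort predictions can be stated compactly: a measurement point is counted as comfortable when its effective draft temperature, theta = (t_x - t_c) - 8(V_x - 0.15), lies between -1.7 and 1.1 degrees C and the local air speed is below 0.35 m/s. A minimal sketch follows; the function names are illustrative, not from the paper.

```python
def effective_draft_temperature(t_local, t_avg, v_local):
    """Effective draft temperature: theta = (t_x - t_c) - 8 * (V_x - 0.15).
    Temperatures in degrees C, velocity in m/s."""
    return (t_local - t_avg) - 8.0 * (v_local - 0.15)

def adpi(points, t_avg):
    """ADPI in percent. points: list of (local temperature C, local
    velocity m/s) measurements; t_avg: average room temperature in C."""
    ok = 0
    for t, v in points:
        theta = effective_draft_temperature(t, t_avg, v)
        if -1.7 < theta < 1.1 and v < 0.35:
            ok += 1
    return 100.0 * ok / len(points)
```

For instance, four probe points of which three satisfy both conditions give an ADPI of 75%.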
Procedia PDF Downloads 409
36773 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification
Authors: Zhaoxin Luo, Michael Zhu
Abstract:
In natural languages, there are always complex semantic hierarchies. Obtaining feature representations based on these complex semantic hierarchies becomes the key to the success of a model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that only uses a single-layer latent indicator cannot capture the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training it hierarchically, our model solves the computational problem of stacking RNN layers and makes it possible to stack arbitrarily many RNN layers. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short text classification problem but also achieves state-of-the-art results on the Chinese long text classification problem.
Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese
Procedia PDF Downloads 68
36772 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, such as the so-called “Homogeneous Anisotropic Hardening (HAH)” model for describing the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane-stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse-shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy for overcoming these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model, including an implementation of the backstress, is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated with real experiments. 
These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can model anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver
Procedia PDF Downloads 314
36771 Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy
Authors: Guoliang Fan, Aiping Li, Xuemei Liu, Liyun Xu
Abstract:
The performance of tightening equipment declines over the course of the working process in a manufacturing system. The main manifestations are the increasing randomness and degree of discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are divided in order to calibrate the degree of discretization. Then the complexity measurement model based on Kolmogorov entropy is built. The model describes the performance degradation tendency of tightening equipment quantitatively. Finally, a case study is presented to verify the efficiency and validity of the approach. The research results show that the presented complexity measurement can effectively evaluate the degradation tendency of tightening equipment. It can provide a theoretical basis for preventive maintenance and life prediction of equipment.
Keywords: complexity measurement, Kolmogorov entropy, manufacturing system, performance evaluation, tightening equipment
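Kolmogorov entropy is defined only in the infinite-data limit, so in practice it is usually estimated from a finite series of the performance index; approximate entropy is one common finite-data estimator, rising as the signal becomes more random. The sketch below is a generic illustration (the parameter choices m = 2 and tolerance r are assumptions, not the paper's settings).

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy: a finite-data estimator related to Kolmogorov
    entropy. Larger values indicate a more irregular (degraded) signal."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        # for each template, the fraction of templates within tolerance r
        fractions = [
            sum(1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r) / n
            for t1 in templates
        ]
        return sum(math.log(f) for f in fractions) / n
    return phi(m) - phi(m + 1)
```

A perfectly periodic performance trace yields a value near zero, while an irregular trace yields a clearly positive value, which is the degradation tendency the complexity measurement tracks.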
Procedia PDF Downloads 259
36770 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers
Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha
Abstract:
Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and wide varieties of monomeric repeat units. When not cross-linked, these elastomers are strongly viscous and only weakly elastic. When crosslinked, however, their properties can range from highly flexible to highly stiff, depending on the extent of crosslinking. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubbers based upon chemical structure and architecture is critical for a variety of applications. One of the critical parameters is crosslink density. In the current work, we have studied the highly cross-linked state of linear, lightly branched and star-shaped branched elastomers and determined the crosslink density by using different models. Changes in hardness, shifts in Tg, changes in modulus and swelling behavior were measured experimentally as a function of the extent of curing. These properties were analyzed using various models to determine the crosslink density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing is determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and based on the plateau modulus the crosslink density is estimated using Nielsen’s model. Usually, for lightly crosslinked systems, the crosslink density is estimated from the equilibrium swelling ratio in solvent by using the Flory–Rehner model. When it comes to highly crosslinked systems, the Flory–Rehner model is not valid because of the smaller chain lengths. So models based on the assumption of the polymer as a non-Gaussian chain, namely 1) the Helmis–Heinrich–Straube (HHS) model, 2) the Gloria M. Gusler and Yoram Cohen model, and 3) the Barbara D. Barr-Howell and Nikolaos A. Peppas model, are used for estimating the crosslink density. 
In this work, correction factors to the existing models are determined, and based upon them the structure-property relationship of highly crosslinked elastomers is studied.
Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer
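For the lightly crosslinked regime discussed above, the classical Flory–Rehner relation can be written down directly. The sketch below assumes the common affine form of the equation; the variable names and the numerical check values are illustrative, not the paper's data.

```python
import math

def flory_rehner_crosslink_density(v2, chi, V1):
    """Crosslink density (moles of network chains per cm^3) from the
    Flory-Rehner equation:
        nu = -[ln(1 - v2) + v2 + chi * v2^2] / [V1 * (v2^(1/3) - v2/2)]
    v2:  polymer volume fraction in the equilibrium-swollen gel
    chi: Flory-Huggins polymer-solvent interaction parameter
    V1:  molar volume of the solvent in cm^3/mol
    """
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator
```

A less swollen network (larger v2) returns a higher crosslink density, which is the qualitative behaviour the swelling experiments rely on; for highly crosslinked systems the non-Gaussian models named in the abstract replace this estimate.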
Procedia PDF Downloads 165
36769 Modeling and Statistical Analysis of a Soap Production Mix in Bejoy Manufacturing Industry, Anambra State, Nigeria
Authors: Okolie Chukwulozie Paul, Iwenofu Chinwe Onyedika, Sinebe Jude Ebieladoh, M. C. Nwosu
Abstract:
The research work is based on the statistical analysis of processing data. The essence is to analyze the data statistically and to generate a design model for the production mix of soap manufacturing products at the Bejoy manufacturing company, Nkpologwu, Aguata Local Government Area, Anambra State, Nigeria. The statistical analysis shows the correlation of the data. A T-test, partial correlation and bivariate correlation were used to understand what the data portray. The design model developed was used to model the production yield, and the correlation of the variables shows that R² is 98.7%. The results confirm that the data are fit for further analysis and modeling, as proved by the correlation and the R-squared value.
Keywords: general linear model, correlation, variables, Pearson, significance, T-test, soap, production mix, statistics
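The R² of 98.7% reported above comes from an ordinary least-squares fit. A minimal sketch of the computation with hypothetical numbers follows (the plant's actual production data are not reproduced here; function and variable names are illustrative).

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx            # slope
    a = mean_y - b * mean_x  # intercept
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

An R² close to 1, such as the 98.7% reported, means the fitted production-mix model explains almost all of the variation in yield.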
Procedia PDF Downloads 445