Search results for: problem posing
5959 Socioeconomic Status and Mortality in Older People with Angina: A Population-Based Cohort Study in China
Authors: Weiju Zhou, Alex Hopkins, Ruoling Chen
Abstract:
Background: The income gap between rich and poor in China has widened over the past 40 years, and the number of deaths among people with angina has been rising. It is unclear whether socioeconomic status (SES) is associated with increased mortality in older people with angina. Methods: Data from a cohort study of 2,380 participants aged ≥ 65 years, randomly recruited from urban communities in five provinces of China, were examined. The cohort members were interviewed to record socio-demographic and risk factors and document doctor-diagnosed angina at baseline, and were followed up for 3-10 years, including monitoring of vital status. Multivariate Cox regression models were employed to examine all-cause mortality in relation to low SES. Results: The cohort follow-up identified 373 deaths; 41 deaths occurred among the 208 angina patients. Compared to participants without angina (n=2,172), patients with angina had increased mortality (multivariate adjusted hazard ratio (HR) 1.41, 95% CI 1.01-1.97). Within angina patients, the risk of mortality increased with low satisfactory income (2.51, 1.08-5.85) and having financial problems (4.00, 1.07-15.00), but not significantly with levels of education and occupation. In non-angina participants, none of these four SES indicators were associated with mortality. There was a significant interaction effect between angina and low satisfactory income on mortality. Conclusions: In China, having a low income and financial problems increases mortality in older people with angina. Strategies to improve economic circumstances in older people could help reduce inequality in angina survival.
Keywords: angina, mortality, older people, socio-economic status
Procedia PDF Downloads 119
5958 Workforce Optimization: Fair Workload Balance and Near-Optimal Task Execution Order
Authors: Alvaro Javier Ortega
Abstract:
Many companies face the challenge of matching highly skilled professionals to high-end positions, a task handled by human resource deployment professionals. However, when the lists of professionals and tasks to be matched grow beyond a few dozen entries, the result of this process is far from optimal and takes a long time to produce. Therefore, an automated assignment algorithm for this workforce management problem is needed. The majority of companies are divided into several sectors or departments, where trained employees with different experience levels deal with a large number of tasks daily. Also, the execution order of all tasks is of great consequence, because some of these tasks can only be run once the result of another task is available. Thus, a wrong execution order leads to long waiting times between consecutive tasks. The desired goal is, therefore, to create accurate matches and a near-optimal execution order that maximizes the number of tasks performed and minimizes the idle time of the expensive skilled employees. The problem described above can be modelled as a mixed-integer non-linear programming (MINLP) problem, as will be shown in detail in this paper. A large number of MINLP algorithms have been proposed in the literature. Here, genetic algorithm solutions are considered, and a comparison between two different mutation approaches is presented. The simulated results, considering different complexity levels of assignment decisions, show the appropriateness of the proposed model.
Keywords: employees, genetic algorithm, industry management, workforce
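A rough, illustrative sketch of the kind of genetic-algorithm machinery described above (not the authors' implementation): it evolves a task execution order for one worker and compares two common mutation operators, swap and scramble. The task durations, precedence pairs and GA settings are invented for the example.

```python
# Illustrative sketch: a tiny genetic algorithm over task execution orders,
# comparing two mutation operators (swap vs. scramble). All data are invented.
import random

durations = [3, 2, 5, 1, 4, 2]                 # hypothetical task durations
precedence = [(0, 2), (1, 3)]                  # task 0 before task 2, task 1 before task 3

def fitness(order):
    """Lower is better: total flow time plus a large penalty per violated precedence pair."""
    pos = {t: i for i, t in enumerate(order)}
    penalty = sum(100 for a, b in precedence if pos[a] > pos[b])
    elapsed, flow = 0, 0
    for t in order:
        elapsed += durations[t]
        flow += elapsed
    return flow + penalty

def swap_mutation(order):
    i, j = random.sample(range(len(order)), 2)
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

def scramble_mutation(order):
    i, j = sorted(random.sample(range(len(order)), 2))
    child = order[:]
    segment = child[i:j + 1]
    random.shuffle(segment)
    child[i:j + 1] = segment
    return child

def evolve(mutate, generations=200, pop_size=30):
    population = [random.sample(range(len(durations)), len(durations)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

for name, op in [("swap", swap_mutation), ("scramble", scramble_mutation)]:
    best = evolve(op)
    print(name, best, fitness(best))
```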
Procedia PDF Downloads 168
5957 A Case Study for User Rating Prediction on Automobile Recommendation System Using Mapreduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF, for short) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems despite the sharp increase in the number of automobiles. What's more, computational speed is a major weakness of collaborative filtering technology. Therefore, using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, we present a recommendation approach for users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and we provide recommendations for automobile providers and help them predict users' comments on automobiles with newly introduced properties. Firstly, we address the sparseness of the matrix using a prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw data of automobiles, since different dimensions of automobile properties bring great error to the calculation of CF. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved considerably. UV decomposition, used in this paper, is a commonly used matrix factorization technique in CF algorithms that does not require calculating the interpolation weights of neighbours, which makes it more convenient for industry.
Keywords: collaborative filtering, recommendation, data normalization, mapreduce
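A minimal sketch of the UV-decomposition idea mentioned above: plain stochastic gradient descent on a small dense rating matrix. The ratings below are invented, and the paper's MapReduce pipeline and normalization steps are not reproduced.

```python
# Minimal UV-decomposition sketch: factor a user-item rating matrix R ≈ U @ V
# with stochastic gradient descent on the observed entries only.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)      # 0 means "no rating" (invented data)
n_users, n_items, k = R.shape[0], R.shape[1], 2

U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((k, n_items))
lr, reg = 0.01, 0.02

for epoch in range(2000):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] > 0:                    # update only on observed ratings
                err = R[u, i] - U[u] @ V[:, i]
                u_row = U[u].copy()            # use old values for both updates
                U[u]    += lr * (err * V[:, i] - reg * U[u])
                V[:, i] += lr * (err * u_row   - reg * V[:, i])

print(np.round(U @ V, 2))                      # predicted ratings, including missing cells
```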
Procedia PDF Downloads 217
5956 Going beyond Elementary Algebraic Identities: The Expectation of a Gifted Child, an Indian Scenario
Authors: S. R. Santhanam
Abstract:
A gifted child is one who gives evidence of creativity, good memory and rapid learning. In mathematics, a teacher often comes across some gifted children, and they exhibit the following characteristics: unusual alertness, enjoyment of solving problems, getting bored with repetition, being self-taught, going beyond what the teacher taught, asking probing questions, connecting unconnected concepts, vivid imagination, readiness for research work, and perseverance with a topic. There are two main areas of research carried out on them: 1) identifying gifted children, and 2) interacting with and channelling them. A lack of appropriate recognition will leave the gifted child demotivated. One of the main findings is that if proper attention and nourishment are not given, a gifted child becomes depressed, underachieves, fails to reach their full potential, and sometimes develops a negative attitude towards school and study. After identifying them, a mathematics teacher has to develop them into full-fledged achievers. The responsibility of the teacher is enormous; the teacher has to be resourceful and patient. But interacting with them, one finds many surprises. Elementary algebraic identities such as (a+b)(a-b) = a²-b² and the expansions of (a+b)² and (a-b)², among others, are taught to students of age group 13-15 in India. An average child will be satisfied with a single proof and an immediate application of these identities. But a gifted child expects more from the teacher and, at one stage, after a little training, will surpass the teacher as well. In this short paper, the author shares his experience of teaching algebraic identities to gifted children. The following problem was given to a set of 10 gifted children of the specified age group: if a natural number n can be expressed as the sum of two squares, can 2n also be expressed as the sum of two squares? An investigation was then made into which multiples of n satisfy the criterion. The attempts of the gifted children were consolidated, and a conclusion was drawn. A second problem was given to them: can two natural numbers be found such that the difference of their squares is 3? After a successful solution, more situations were analysed. As a third question, finding the sign of an algebraic expression in three variables was analysed. As an example: if a, b, c are real and unequal, what will be the sign of a²+4b²+9c²-4ab-12bc-6ca? Apart from writing an expression as a perfect square, what other methods can be employed to prove an algebraic expression positive, negative or non-negative was also analysed. Expressions like 4x²+2y²+13z²-2xy-4yz-6zx were given, and the children were asked to find the sign of the expression for all real values of x, y and z. In all investigations, only basic algebraic identities were used. As the next probe, a divisibility problem was initiated: when a, b, c are natural numbers such that a+b+c is at least 6, and a+b+c is divisible by 6, will 6 divide a³+b³+c³? The gifted children solved it in two different ways.
Keywords: algebraic identities, gifted children, Indian scenario, research
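For the first and last investigation questions, the key steps rest on elementary identities; one possible worked resolution (not necessarily the route the children took) is:

```latex
% If n = a^2 + b^2, then 2n is again a sum of two squares:
2n = 2(a^2 + b^2) = (a+b)^2 + (a-b)^2 .
% For the divisibility probe: x^3 - x = (x-1)\,x\,(x+1) is divisible by 6 for any natural x, so
a^3 + b^3 + c^3 = (a + b + c) + \sum_{x \in \{a,\,b,\,c\}} (x^3 - x),
% hence 6 \mid (a+b+c) implies 6 \mid (a^3 + b^3 + c^3).
```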
Procedia PDF Downloads 182
5955 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text
Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni
Abstract:
The problem of entity relation discovery in unstructured data, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These entities can come from a whole dictionary or from a specific collection of named items. In many cases, machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the co-occurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each co-occurrence highlights some degree of semantic correlation between the words, because it is more common to have related words close to each other than at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the co-occurrences within a sliding window running over the whole text. In this paper we generalise this technique, arriving at a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying this technique to a data set consisting of the text of the Bible, split into verses.
Keywords: cooccurrence graph, entity relation graph, unstructured text, weighted distance
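A small sketch of the weighted-distance sliding window idea as a generic illustration: the window size, the 1/distance weighting and the example text are assumptions for the example, not the paper's settings.

```python
# Sketch: weighted-distance co-occurrence counting with a sliding window.
# Each pair of entities inside the window is credited with a weight that decays
# with their distance (here 1/distance; the actual weighting is an assumption).
from collections import defaultdict

def weighted_cooccurrences(tokens, entities, window=5):
    weights = defaultdict(float)
    for i, a in enumerate(tokens):
        if a not in entities:
            continue
        for j in range(i + 1, min(i + window, len(tokens))):
            b = tokens[j]
            if b in entities and b != a:
                pair = tuple(sorted((a, b)))
                weights[pair] += 1.0 / (j - i)   # closer pairs count more
    return weights

text = "moses spoke to aaron and aaron answered moses before the people".split()
print(weighted_cooccurrences(text, entities={"moses", "aaron"}))
```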
Procedia PDF Downloads 154
5954 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms
Authors: Sagri Sharma
Abstract:
Analysis of diseases integrating multiple factors increases the complexity of the problem, and therefore the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, the use of traditional methodologies has not been very effective. Consequently, newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing predictions on previously unseen data. These algorithms are used in applications ranging from image analysis to protein structure and function prediction; they are trained using a known dataset to come up with a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. The well-known machine learning algorithm Support Vector Machine is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine
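A bare-bones sketch of the SVM classification step described above, using scikit-learn on a synthetic expression matrix; the real work uses HCC gene expression datasets, gene selection and statistical evaluation that are not reproduced here.

```python
# Sketch: SVM classification of samples from gene expression profiles.
# The expression matrix below is synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 500
X = rng.normal(size=(n_samples, n_genes))          # expression levels (synthetic)
y = rng.integers(0, 2, size=n_samples)             # 1 = tumour, 0 = normal (synthetic)
X[y == 1, :10] += 1.5                              # make a few "genes" informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```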
Procedia PDF Downloads 429
5953 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method found in the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular automata scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 229
5952 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence the water quality and mixing characteristics, mainly due to atmospheric conditions including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small turbulence effects that temperature variations induce in the characteristics of shallow flows. Wind shear stress over the water surface influences flow patterns, heat fluxes and the thermodynamics of water bodies as well. Hence, it is crucial to couple temperature gradients with a shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system has been introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated from the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shocks, contact and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes include the Lax-Friedrichs, Lax-Wendroff and MacCormack schemes and a higher order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. Temporal accuracy is achieved by employing the TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
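For reference, the Lax-Friedrichs update for a one-dimensional system of conservation laws $U_t + F(U)_x = 0$ (the form the Ripa system takes in 1D) reads:

```latex
U_i^{\,n+1} = \tfrac{1}{2}\left(U_{i+1}^{\,n} + U_{i-1}^{\,n}\right)
            - \frac{\Delta t}{2\,\Delta x}\left(F\!\left(U_{i+1}^{\,n}\right) - F\!\left(U_{i-1}^{\,n}\right)\right),
\qquad
\Delta t \le \mathrm{CFL}\,\frac{\Delta x}{\max_i |\lambda_i|},
```

where the $\lambda_i$ are the eigenvalues of the Jacobian $\partial F/\partial U$, consistent with the abstract's remark that the time step is estimated from the eigenvalues of the system.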
Procedia PDF Downloads 204
5951 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine
Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri
Abstract:
To deal with the problem of increasing mass transportation needs, PT. Kereta Commuter Jabodetabek (KCJ) has implemented the Commuter Vending Machine (C-VIM) as a solution. Against this background, the C-VIM is implemented as a substitute for the conventional ticket windows, with the purposes of making the transaction process more efficient and introducing self-service technology to commuter line users. However, this implementation causes problems and long queues when users are not accustomed to the machine. The objective of this research is to evaluate the user experience after using the commuter vending machine. The goal is to analyze the existing user experience problems and to achieve a better user experience design. The evaluation is done by giving task scenarios according to the features offered by the machine. The features are daily insured ticket sales, ticket refund, and multi-trip card top-up. Twenty people, separated into two groups of respondents (experienced and inexperienced users), were involved in this research, each group consisting of 5 males and 5 females, in order to test whether there is a significant difference between the two groups in the measurements. The user experience is measured by both quantitative and qualitative methods. The quantitative measurement includes user performance metrics such as task success, time on task, errors, efficiency, and learnability. The qualitative measurement includes the System Usability Scale questionnaire (SUS), the Questionnaire for User Interface Satisfaction (QUIS), and retrospective think-aloud (RTA). The usability performance metrics show that 4 out of 5 indicators are significantly different between the two groups. This shows that the inexperienced group has problems when using the C-VIM. The conventional ticket windows also show better usability performance metrics compared to the C-VIM. From the data processing, the experienced group gives a SUS score of 62, with an acceptability scale of 'marginal low', a grade scale of 'D', and an adjective rating of 'good', while the inexperienced group gives a SUS score of 51, with an acceptability scale of 'marginal low', a grade scale of 'F', and an adjective rating of 'ok'. This shows that both groups give a low score on the System Usability Scale. The QUIS score of the experienced group is 69.18 and that of the inexperienced group is 64.20; average QUIS scores below 70 indicate a problem with the user interface. RTA was done to obtain user experience issues when using the C-VIM through interview protocols. The issues obtained were then sorted using the Pareto concept and diagram. The solution proposed in this research is an interface redesign using an activity relationship chart. This method resulted in a better interface with an average SUS score of 72.25, with an acceptability scale of 'acceptable', a grade scale of 'B', and an adjective rating of 'excellent'. The time-on-task indicator of the performance metrics also shows significantly better times with the new interface design. The results of this study show that the C-VIM does not yet provide good performance and user experience.
Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation
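For reference, the standard SUS scoring rule behind the 0-100 scores quoted above can be computed as follows; the example responses are invented, not the study's data.

```python
# Standard System Usability Scale (SUS) scoring: odd items contribute (response - 1),
# even items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range.
def sus_score(responses):            # responses: 10 Likert answers, each 1..5
    odd  = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even = sum(5 - responses[i] for i in range(1, 10, 2))   # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

print(sus_score([4, 2, 4, 2, 3, 2, 4, 2, 3, 3]))   # -> 67.5 (invented responses)
```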
Procedia PDF Downloads 262
5950 Return to Work after a Mental Health Problem: Analysis of Two Different Management Models
Authors: Lucie Cote, Sonia McFadden
Abstract:
Mental health problems in the workplace are currently one of the main causes of absence. Research has highlighted the importance of a collaborative process involving the stakeholders in the return-to-work process and has established the best management practices to ensure a successful return to work. However, very few studies have specifically explored the combination of various management models and determined whether they can satisfy the needs of the stakeholders. The objective of this study is to analyze two models for managing the return to work, the 'medical-administrative' model and the 'support of the worker' model, in order to understand the actions and actors involved in these models. The study also aims to explore whether these models meet the needs of the actors involved in the management of the return to work. A qualitative case study was conducted in a Canadian federal organization. Abundant internal documentation and semi-structured interviews with six managers, six workers and four human resources professionals involved in managing the records of employees returning to work after a mental health problem resulted in a complete picture of the return-to-work management practices used in this organization. The triangulation of these data facilitated the examination of the benefits and limitations of each approach. The results suggest that the management actions for employee return to work in both the 'support of the worker' and 'medical-administrative' models are compatible and can meet the needs of the actors involved in the return to work. More research is needed to develop a structured model integrating the best practices of the two approaches to ensure the success of the return to work.
Keywords: return to work, mental health, management models, organizations
Procedia PDF Downloads 213
5949 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops
Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding
Abstract:
BACKGROUND: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem, for which it is hard to obtain an exact optimal solution. The FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. OBJECTIVE: This article tries to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. METHODS: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective, and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. RESULTS: In simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. CONCLUSIONS: Our two new layout schemes are shown to be optimal by a series of simulations and space experiments. In future research, similar approaches could be applied when taking layout-design algorithm calculation into consideration.
Keywords: layout optimization, dining efficiency, troops' cafeteria, anylogic simulation, field experiment
Procedia PDF Downloads 143
5948 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard min-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
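As a generic illustration of the min-max balancing idea (not the paper's heuristic), a simple greedy rule assigns each product to the workstation whose current load is lowest, subject to a capacity cap; all workloads and capacities below are invented.

```python
# Greedy min-max load balancing sketch: assign each product (with a given workload)
# to the currently least-loaded workstation that still has capacity.
def greedy_assign(workloads, n_stations, capacity):
    loads = [0.0] * n_stations
    assignment = {}
    for product, w in sorted(workloads.items(), key=lambda kv: -kv[1]):  # big items first
        candidates = [s for s in range(n_stations) if loads[s] + w <= capacity]
        if not candidates:
            raise ValueError(f"no feasible station for {product}")
        best = min(candidates, key=lambda s: loads[s])
        loads[best] += w
        assignment[product] = best
    return assignment, max(loads)          # max load is the min-max objective value

workloads = {"A": 8, "B": 6, "C": 5, "D": 4, "E": 3, "F": 2}   # invented
print(greedy_assign(workloads, n_stations=3, capacity=12))
```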
Procedia PDF Downloads 228
5947 Pineapple Waste Valorization through Biogas Production: Effect of Substrate Concentration and Microwave Pretreatment
Authors: Khamdan Cahyari, Pratikno Hidayat
Abstract:
Indonesia produced more than 1.8 million tons of pineapple fruit in 2013, much of which turned into waste due to industrial processing, deterioration and low quality. It was estimated that this waste accounted for more than 40 percent of harvested fruit. In addition, pineapple leaves are another biomass waste from pineapple farmland, which contributes an even higher percentage. Most of the waste is simply dumped into landfill areas without proper pretreatment, causing severe environmental problems. This research was meant to valorize pineapple waste by producing biogas, a renewable energy source, through a mesophilic (30℃) anaerobic digestion process. In particular, it aimed to investigate the effect of the substrate concentration of pineapple fruit waste (i.e., peel and core) as well as the effect of microwave pretreatment of pineapple leaf waste. The substrate concentration was set at 12, 24 and 36 g VS/liter of culture, whereas 800-Watt microwave pretreatment was conducted for 2 and 5 minutes. It was noticed that optimum biogas production was obtained at a concentration of 24 g VS/l, with a biogas yield of 0.649 liter/g VS (45%v CH4), whereas 2 minutes of microwave pretreatment performed better than 5 minutes due to the shorter exposure to microwave heat. These results suggest that valorization of pineapple waste can be carried out through biogas production under the aforementioned process conditions. Application of this method can both reduce the environmental problem of the waste and produce biogas as a renewable energy source to fulfill the local energy demand of pineapple farming areas.
Keywords: pineapple waste, substrate concentration, microwave pretreatment, biogas, anaerobic digestion
Procedia PDF Downloads 581
5946 An Integrated Architecture of E-Learning System to Digitize the Learning Method
Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem
Abstract:
The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. The learner logs into the e-learning platform and easily accesses the digital content; the content can be downloaded, and assessments can be taken for evaluation. Learners can also access these digital resources using a tablet, computer, or smartphone. An e-learning system can be defined as teaching and learning with the help of multimedia technologies and the internet through access to digital content. E-learning is replacing the traditional education system with information and communication technology-based learning. This paper designs and implements an integrated e-learning system architecture combined with a University Management System. Moodle (Modular Object-Oriented Dynamic Learning Environment) is one of the best e-learning systems, but its limitation is that it has no school or university management system. In this research, school students were not considered because they lack internet facilities; university students were considered instead because they have internet access and use these technologies. The University Management System has different types of activities, such as student registration, account management, teacher information, semester registration, staff information, etc. If these types of activities or modules are integrated with Moodle, then the limitation of Moodle can be overcome, and the enhanced e-learning system architecture will make effective use of technology. This architecture allows the learner to easily access the resources of the e-learning platform anytime and anywhere, which digitizes the learning method.
Keywords: database, e-learning, LMS, Moodle
Procedia PDF Downloads 188
5945 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling
Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva
Abstract:
Relevance: Energy saving has been elevated to public policy in almost all developed countries. One of the areas for energy efficiency is improving and tightening design standards; state standards make high demands on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the humidity of the construction materials does not exceed a certain level. Elevated levels of moisture in the walls can be attributed to a defective condition, as moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their real-world operation leads to increased heat loss and premature aging of structures. Method: To solve this problem, mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model of heat and mass transfer takes the interconnected layers into account through coupled equations [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental method of determining the freezing or thawing coefficient of the material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions that are closely related to the rational design of enclosing structures: where the condensation zone is located in the body of the multilayer enclosure; how and where to place insulation rationally; which constructive measures are necessary to provide for the removal of moisture from the structure; what the temperature and humidity conditions should be for the normal operation of the premises' enclosing structure; and what the longevity of the structure is in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model solves the following problems: assessing the thermo-physical condition of the designed structures under different operating conditions and selecting appropriate material layers; calculating the temperature field in structurally complex multilayer structures; determining the thermal characteristics of the materials constituting the surveyed construction from temperature and moisture measurements at characteristic points; significantly reducing laboratory testing time and eliminating the need for a climatic chamber and expensive instrumentation experiments and research; and simulating real-life situations that arise in multilayer enclosing structures associated with freezing, thawing, drying and cooling of any layer of the building material.
Keywords: energy saving, inverse problem, heat transfer, multilayer walling
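A generic coupled heat-moisture formulation of the kind the abstract refers to (an illustrative 1D form only; the authors' multilayer model, interface and boundary conditions, and phase-change terms are not reproduced) is:

```latex
\rho\,c(T,w)\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left(\lambda(T,w)\,\frac{\partial T}{\partial x}\right),
\qquad
\frac{\partial w}{\partial t}
  = \frac{\partial}{\partial x}\!\left(D(T,w)\,\frac{\partial w}{\partial x}\right),
```

where T is temperature, w is moisture content, λ is the temperature- and moisture-dependent thermal conductivity, and D is the moisture diffusivity; the inverse problem then consists of recovering λ and D from temperature and moisture measurements at characteristic points.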
Procedia PDF Downloads 399
5944 Physics-Informed Machine Learning for Displacement Estimation in Solid Mechanics Problem
Authors: Feng Yang
Abstract:
Machine learning (ML), especially deep learning (DL), has been extensively applied to many applications in recent years and has achieved great success in solving different problems, including scientific problems. However, conventional ML/DL methodologies are purely data-driven and have limitations, such as the need for an ample amount of labelled training data, lack of consistency with physical principles, and lack of generalizability to new problems/domains. Recently, there is a growing consensus that ML models need to take further advantage of prior knowledge to deal with these limitations. Physics-informed machine learning, aiming at the integration of physics/domain knowledge into ML, has been recognized as an emerging area of research, especially in the last 2 to 3 years. In this work, physics-informed ML, specifically a physics-informed neural network (NN), is employed and implemented to estimate the displacements in the x, y, z directions in a solid mechanics problem that is governed by equilibrium equations with boundary conditions. By incorporating the physics (i.e., the equilibrium equations) into the learning process of the NN, it is shown that the NN can be trained very efficiently with a small set of labelled training data. Experiments with different settings of the NN model and different amounts of labelled training data were conducted, and the results show that very high accuracy can be achieved in fulfilling the equilibrium equations as well as in predicting the displacements; e.g., with an overall displacement of 0.1, a root mean square error (RMSE) of 2.09 × 10⁻⁴ was achieved.
Keywords: deep learning, neural network, physics-informed machine learning, solid mechanics
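A minimal sketch of the physics-informed NN idea on a toy 1D elastic bar, not the authors' 3D implementation: the loss combines a small set of labelled boundary data with the residual of the equilibrium equation evaluated by automatic differentiation. The modulus E, body force f, end displacement u_max and network sizes are assumptions for the example.

```python
# Illustrative PINN sketch for a 1D bar governed by E * u''(x) + f(x) = 0 on [0, 1],
# with u(0) = 0 and u(1) = u_max. All problem data are invented for the example.
import torch

torch.manual_seed(0)
E, u_max = 1.0, 0.1                          # assumed modulus and end displacement
f = lambda x: torch.zeros_like(x)            # assumed zero body force

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(x):
    """Equilibrium residual E*u'' + f, computed with automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return E * d2u + f(x)

x_phys = torch.linspace(0, 1, 101).reshape(-1, 1)     # collocation points for the PDE loss
x_data = torch.tensor([[0.0], [1.0]])                  # small set of "labelled" data
u_data = torch.tensor([[0.0], [u_max]])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss_phys = residual(x_phys).pow(2).mean()         # physics (equilibrium) loss
    loss_data = (net(x_data) - u_data).pow(2).mean()   # data loss
    (loss_phys + loss_data).backward()
    opt.step()

# For this setup the exact solution is u(x) = u_max * x, so u(0.5) should approach 0.05.
print(net(torch.tensor([[0.5]])).item())
```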
Procedia PDF Downloads 150
5943 Climate Change Law and Transnational Corporations
Authors: Manuel Jose Oyson
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must "both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts." The IPCC observed "with high confidence" a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, emissions that "were the highest in human history", which, if left unchecked, will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have, however, focused on states and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with the acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional "state-centric" response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs "are not bound by national borders" and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or a legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international law or municipal law. Instead, recourse to "soft law" and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help address climate change. Various studies have noted the positive effects of voluntary approaches. TNCs have also, in recent decades, voluntarily committed to "soft law" international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of "small, offbeat companies", it has now become part of mainstream organizations. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and to helping address climate change as part of their CSR. One, as a serious "global commons problem", climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would still have valid expectations for them to use their capabilities, knowledge base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.
Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations
Procedia PDF Downloads 351
5942 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research. One of the key techniques used in it is bibliometric analysis, and one of its methods is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations. This approach is also used for literature reviews in the field of marketing. The development of technology has meant that researchers and practitioners use the capabilities of software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications supports the implementation of a literature review and is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them is a painstaking task, especially if the authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, offers advantages over the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Next, keyword mapping results produced in the VOSviewer science mapping software were analyzed and compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both the research problem formulation stage and the content/thematic analysis. The results show the advantage of variable mapping in the formulation of the research problem and in thematic/content analysis. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story with an indication of the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of software enabling the automation of the variable mapping process on large data sets may be a breakthrough in the field of conducting literature research.
Keywords: bibliometrics, literature review, science mapping, variable mapping
Procedia PDF Downloads 122
5941 Cooperation of Unmanned Vehicles for Accomplishing Missions
Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin
Abstract:
The use of unmanned systems for different purposes has become very popular over the past decade, and expectations of these systems have increased dramatically in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle in a mission, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using vehicles of the same type together as a swarm is helpful, especially for satisfying the time constraints of missions effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Besides, there are many kinds of problems that require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results. In this case, cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks its own position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without taking additional support or input from outside, benefiting only from the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of their different platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented against problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.
Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning
Procedia PDF Downloads 129
5940 The Problem of Reconciling the Principle of Confidentiality in Foreign Investment Arbitration with the Public Interest
Authors: Bárbara Magalhães Bravo, Cláudia Figueiras
Abstract:
Economic globalization, through the liberalization of markets and capital, has boosted the economic development of nations and the need to resolve disputes arising from foreign investment. Arbitration, with all its inherent advantages, such as swiftness, arbitrators' specialised skills and impartiality, is a pacifying tool for the interests at stake. With the public interest to be safeguarded, we face the problem of confidentiality in arbitration. The development of compelling mechanisms concerning transparency, guarantee and protection of the interests at stake reveals itself to be urgent. Through a bibliographic review, we will condense the state of the art by going through the several solutions concerned and pointing out the most suitable ones. Through jurisprudential analysis, we will point out a solution for the confidentiality/public interest conflict. Transparency, inextricable from the public interest, demands that the arbitration process be open to all citizens. Transparency rules have been considered at UNCITRAL in an attempt to reconcile the need for publicity with the public interest; however, they are still insufficient. The arbitration of foreign investment carries consequences for the citizens of the State. Mechanisms articulating the secrecy of arbitral procedures with the public interest should be adopted. The arbitration of foreign investment, being a tertium genus between international arbitration and administrative arbitration, would call for its own regulation in each and every State, where the confidentiality rules and their exceptions could be identified. One should enquire where the limits of the protection of citizens' individual rights and the public interest should give way to the principle of transparency.
Keywords: arbitration, foreign investment, transparency, confidentiality, International Centre for Settlement of Investment Disputes, UNCITRAL
Procedia PDF Downloads 216
5939 Root Cause Analysis of Excessive Vibration in a Feeder Pump of a Large Thermal Electric Power Plant: A Simulation Approach
Authors: Kavindan Balakrishnan
Abstract:
Root cause identification of the vibration phenomenon in a feedwater pumping station was the main objective of this research. First, the mode shapes of the pumping structure were investigated using numerical and analytical methods. Then, the flow pressure and streamline distribution in the pump sump were examined using CFD simulation, as these were hypothesized to be a possible cause of vibration in the pumping station. As the problem specification of this research states, the pumping station exhibits a vibration phenomenon with four parallel pumps operating at the same time, and heavy vibration was recorded even after several maintenance steps. It was also specified that a relatively large amplitude of vibration is excited by pumps 1 and 4 while the others remain normal. As a result, the focus of this research was on determining the cause of such a mode of vibration in the pump station with the assistance of finite element analysis tools and analytical methods. As a result of this research, major outcomes were observed in the structural behavior that are favorable to the vibration pattern phenomenon in the pumping structure. The numerical and analytical models of the pump structure have similar characteristics in their mode shapes, particularly in their 2nd mode shape, which is considerably related to the exact cause stated in the research problem. Since this study reveals, through flow visualization, several points in the pump sump model that can be a favorable cause of vibration in the system, there is more room for further investigation of the flow conditions relating to pump vibrations.
Keywords: vibration, simulation, analysis, Ansys, Matlab, mode shapes, pressure distribution, structure
Procedia PDF Downloads 126
5938 Moderating Effect of Owner's Influence on the Relationship between the Probability of Client Failure and Going Concern Opinion Issuance
Authors: Mohammad Noor Hisham Osman, Ahmed Razman Abdul Latiff, Zaidi Mat Daud, Zulkarnain Muhamad Sori
Abstract:
The problem that Malaysian auditors do not issue going concern opinions (GC opinions) to seriously financially distressed companies is still a pressing issue. Policy makers, particularly the Financial Statement Review Committee (FSRC) of the Malaysian Institute of Accountants, raised this issue as early as 2009. Similar problems have occurred in the US, the UK, and many developing countries. It is important for auditors to issue GC opinions properly because such opinions are a signal about the viability of a company that is much needed by stakeholders. There are at least two unanswered questions, or research gaps, in the literature on the determinants of GC opinions. Firstly, is a client's probability of failure associated with GC opinion issuance? Secondly, to what extent do influential owners (management, family, and institutions) moderate the association between client probability of failure and GC opinion issuance? The objective of this study is, therefore, twofold: (1) to examine the extent of the relationship between the probability of client failure and the issuance of a GC opinion, and (2) to examine the extent to which the levels of management, family, and institutional ownership moderate the association between client probability of failure and the issuance of a GC opinion. This study is quantitative in nature, and the sources of data are secondary (mainly companies' annual reports). A total of four hypotheses have been developed and tested on data accumulated from the annual reports of seriously financially distressed Malaysian public listed companies. Data from 2006 to 2012, comprising a sample of 644 observations, have been analyzed using panel logistic regression. It is found that certainty (rather than probability) of client failure affects the issuance of a GC opinion. In addition, it is found that only the level of family ownership positively moderates the relationship between client probability of failure and GC opinion issuance. This study is a contribution to the auditing literature, as its findings can enhance our understanding of audit quality, particularly of the variables that are associated with the issuance of GC opinions. The findings of this study shed light on the role of family owners in the GC opinion issuance process, and this opens ways for researchers to suggest measures that can be used to tackle the problem of auditors not wanting to issue GC opinions to financially distressed clients. The measures to be suggested can be useful to policy makers in formulating future promulgations.
Keywords: audit quality, auditing, auditor characteristics, going concern opinion, Malaysia
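A minimal sketch of a logistic model with an ownership interaction of the kind described above; the column names and toy data frame are hypothetical, and the study itself estimates panel logistic regressions on 644 firm-year observations with additional controls.

```python
# Sketch: logistic regression of GC opinion issuance on probability of failure,
# family ownership, and their interaction. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "prob_failure": rng.uniform(0, 1, n),          # client probability of failure
    "family_own":   rng.uniform(0, 0.6, n),        # family ownership share
})
logit_index = -2 + 1.5 * df.prob_failure + 2.5 * df.prob_failure * df.family_own
df["gc_opinion"] = rng.binomial(1, 1 / (1 + np.exp(-logit_index)))

# "prob_failure * family_own" expands to both main effects plus the interaction term.
model = smf.logit("gc_opinion ~ prob_failure * family_own", data=df).fit(disp=False)
print(model.summary())
```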
Procedia PDF Downloads 261
5937 Development of Industry Oriented Undergraduate Research Program
Authors: Sung Ryong Kim, Hyung Sup Han, Jae-Yup Kim
Abstract:
Many engineering students feel uncomfortable solving industry-related problems. There are many ways to strengthen engineering students' ability to solve the problems they will be assigned when they get a job. Korea National University of Transportation has developed an industry-oriented undergraduate research program (URP). A URP is designed to provide engineering students with the experience of solving a company's research problem. Each URP project is carried out over 6 months. Each URP team consists of 1 company mentor, 1 professor, and 3-4 engineering students. Teams of students from different majors are strongly encouraged, in order to integrate the different perspectives of a multidisciplinary background. The corporate research projects proposed by companies are chosen by the major-related student teams. The company mentor gives the detailed technical background of the project to the students and also provides basic data, raw materials and so forth. The company allows students to use the company's research equipment. The assigned professor adjusts the project scope and level to the students' ability after discussing it with the company mentor. Monthly meetings are used to check progress, to exchange ideas, and to help the students. The program has proven to be an effective engineering education program, not only providing the experience of company research but also motivating the students in their coursework. It provides a premier interdisciplinary platform for undergraduate students to tackle the practical challenges encountered in their major-related companies, and it is especially helpful for students who want to get a job at a company that proposed a project.
Keywords: company mentor, industry oriented, interdisciplinary platform, undergraduate research program
Procedia PDF Downloads 246
5936 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs
Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang
Abstract:
With more and more Drosophila driver and neuron images available, finding the similarity relationships among them is an important task for functional inference. A general problem is how to find a Drosophila driver image that can cover a set of Drosophila driver/neuron images. In order to solve this problem, the intersection/union region for a set of images should be computed first; then a comparison step is used to calculate the similarities between this region and other images. In this paper, three encoding schemes, namely Integer, Boolean, and Decimal, are proposed to encode each image as a one-dimensional structure. Then, the intersection/union region of these images can be computed using compare operations, Boolean operators and a lookup table method. Finally, the comparison step is done in the same way as the union region computation, and the similarity score can be calculated using the definition of the Tanimoto coefficient. The above methods for the region computation are also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the performance of the Boolean scheme is the best; in the region computation phase, the performance of the Decimal scheme is the best when the number of images is large. The speedup ratio can reach 12 with 16 CPUs. This work was supported by the Ministry of Science and Technology under the grant MOST 106-2221-E-182-070.
Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP
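A toy sketch of the Boolean-encoding idea and the Tanimoto similarity on binarized image volumes; this NumPy stand-in does not reproduce the paper's OpenMP/lookup-table implementation, and the array sizes and thresholds are invented.

```python
# Toy sketch: encode binarized brain-image volumes as Boolean arrays, compute the
# intersection/union region of a set of images, and score similarity with the
# Tanimoto coefficient.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((4, 1000)) > 0.7        # 4 flattened, binarized volumes (invented)

intersection_region = np.logical_and.reduce(images)   # voxels present in every image
union_region        = np.logical_or.reduce(images)    # voxels present in any image

def tanimoto(a, b):
    """Tanimoto coefficient: |a AND b| / |a OR b|."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

query = rng.random(1000) > 0.7              # another binarized volume (invented)
print("similarity to union region:", round(tanimoto(query, union_region), 3))
```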
Procedia PDF Downloads 239
5935 Teaching, Learning and Evaluation Enhancement of Information Communication Technology Education in Schools through Pedagogical and E-Learning Techniques in the Sri Lankan Context
Authors: M. G. N. A. S. Fernando
Abstract:
This study uses a researchable framework to improve the quality of ICT education and the Teaching, Learning and Assessment/Evaluation (TLA/TLE) process. It utilizes existing resources while improving the methodologies, along with the pedagogical techniques and e-learning approaches, used in the secondary schools of Sri Lanka. The study was carried out in two phases. Phase I focused on investigating the factors that affect the quality of ICT education. Based on the key factors identified in Phase I, Phase II focused on the design of an Experimental Application Model with 6 activity levels. Each level in the activity model covers one or more levels in the revised Bloom's taxonomy. To further enhance the activity levels, other pedagogical techniques (activity-based learning, e-learning techniques, problem-solving activities, peer discussions, etc.) were incorporated into each level in the activity model as appropriate. The application model was validated by a panel of teachers, including a domain expert, and was tested in the school environment as well. The validity of its performance was demonstrated by testing 6 hypotheses and using other methodologies. The analysis shows that student performance on problem-solving activities increased by 19.5% due to the different treatment levels used. Compared to the existing process, it was also shown that the embedded techniques (a mixture of traditional and modern pedagogical methods and their applications) are more effective for the skills development of teachers and students.
Keywords: activity models, Bloom's taxonomy, ICT education, pedagogies
Procedia PDF Downloads 1655934 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems that can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem size was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited to hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of the two required by standard CG. Both the standard and pipelined CG methods need, for their matrix-vector products, vector entries generated by the current GPU and by other GPUs, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimizing the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, together with asynchronous computation and communication and load balancing between the CPU and GPU, makes the solver for large linear systems scalable. The algorithm is implemented with a combination of technologies: MPI, OpenMP, and CUDA. We show that nearly optimal speedup can be reached on 8 CPUs/2 GPUs (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 on 16 NVIDIA Tesla GPUs, compared to one GPU. Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
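A minimal single-process NumPy sketch of the unpreconditioned pipelined CG recurrence is shown below, to make the "one synchronization point per iteration" structure concrete: the two dot products would be fused into a single global reduction, and the extra matrix-vector product can overlap that reduction. The MPI/OpenMP/CUDA distribution, communication overlap, and any preconditioning from the paper are omitted, and all names are illustrative assumptions.

```python
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, max_iter=500):
    """Unpreconditioned pipelined CG: one fused reduction phase per iteration."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                      # residual
    w = A @ r                          # w = A r
    z = s = p = np.zeros(n)
    gamma_prev = alpha_prev = 1.0
    for i in range(max_iter):
        gamma = r @ r                  # these two dot products form the single
        delta = w @ r                  # global reduction on a cluster
        n_vec = A @ w                  # matvec that can overlap the reduction
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_prev
            alpha = gamma / (delta - beta * gamma / alpha_prev)
        z = n_vec + beta * z           # z = A s
        s = w + beta * s               # s = A p
        p = r + beta * p               # search direction
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z
        gamma_prev, alpha_prev = gamma, alpha
        if np.sqrt(gamma) < tol:
            break
    return x

# Usage sketch on a small symmetric positive definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pipelined_cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```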
Procedia PDF Downloads 1655933 The Morality of the Sensitive in Adorno: Suffering and Recognition in the Mimesis Model
Authors: Talita Cavaignac
Abstract:
Adorno's critique of totality, especially in a split society marked by reification, also rests on the impossibility of generalizing normative principles. Given the unfeasibility of normative universalization, which conditions can justify the possibility of criticism and normativity in Adorno's thought? If reason itself remains entangled in the alienation inherited from the model of the domination of nature, how is a critical theory possible? In political terms, if the notion of totality is challenged by the critique of identity, how can Adorno maintain the ideal of liberation and reconciliation between private interests and the possibility of some sort of ethics without giving up a materialist theory of society and without betting on a necessary link between redemption and history? Faced with this complex of questions, the paper reflects on the sense in which the notion of ‘suffering’ could help address the epistemological problem of the foundations of criticism in Adorno's work. The idea is that, in contrast to a universalizable model of justice, Adorno mobilizes in the notion of ‘suffering’ a gateway to the critical reflection on society. He thus develops an approach to moral problems through the sensual-bodily perspective: fear, pain, and somatic factors. Nevertheless, owing to his attention to damaged experience and to the constitution of subjectivity (a sense in which the concept of mimesis continues to stand out), we understand suffering as an expression of an objective reification. Following other authors, the intention is to consider how the resources linked to the idea of ‘suffering’ in Adorno's writings engage with the problem of morality and with the contradictions between universal and particular (articulated in the Hegelian tradition). Keywords: ethics, morality, sensitive, Theodor Adorno
Procedia PDF Downloads 1385932 Computer Network Applications, Practical Implementations and Structural Control System Representations
Authors: El Miloudi Djelloul
Abstract:
Computer networks play an important role in the practical implementation of different systems. To implement a system on a network, it is first necessary to know all the configurations that form part of the system and to provide adequate information and solutions in real time. If such a system is to be implemented, for example, in a school or a similar institution, the first step is to analyze the types of models that need to be configured; another important step is to organize the work at the level of devices, as part of the overall system. Before configuration, it is often important to describe and document all the work in the respective process and then to organize it from a problem-solving perspective. Because the computer network, as critical infrastructure, is very specific, the paper presents effective solutions from a structural point of view on the one hand and, on the other, highlights the benefits of modeling and block-schema presentations as a better alternative for solving specific problems caused by continual disturbances of the system along the chain of devices, programs, and signals, or by packet collisions as traffic moves from one computer node to another. Keywords: local area networks, LANs, block schema presentations, computer network system, computer node, critical infrastructure, packet collisions, structural control system representations, computer network, implementations, modeling structural representations, companies, computers, context, control systems, internet, software
Procedia PDF Downloads 3665931 Analytical Studies on Subgrade Soil Using Jute Geotextile
Authors: A. Vinod Kumar, G. Sunny Deol, Rakesh Kumar, B. Chandra
Abstract:
Fiber reinforcement in road construction is gaining interest as a way of enhancing soil strength. In this paper, a natural geotextile material obtained from gunny bags was used because of its wide local availability. Constructing flexible pavement on weaker soils such as clays is a significant problem in both construction and design because of their expansive characteristics. Jute geotextile (JGT) was used in the foundation layer of flexible pavements on rural roads. The problem is addressed by increasing the subgrade strength and reducing the sub-base layer thickness, thereby improving the overall pavement strength characteristics, which ultimately reduces the cost of construction and leads to an economical design. California Bearing Ratio (CBR), unconfined compressive strength (UCS), and triaxial laboratory tests were conducted on two different soil samples, CI and MI. The weaker soil was reinforced with JGT, JGT plus bitumen, and JGT plus polythene sheet, placed at varying heights during the laboratory tests. Subgrade strength was evaluated by conducting soaked CBR tests in the laboratory for the clayey and silty soils. The laboratory results reveal that the soaked CBR value of the reinforced clayey soil (CI) was 10.35%, and that of the silty soil (MI) was 15.6%. This study aims to develop a new technique for reinforcing weaker soil with JGT under varying parameters for low-volume flexible pavements. It was observed that the performance of JGT is inferior when used with bitumen and polyethylene sheets. Keywords: CBR, jute geotextile, low volume road, weaker soil
Procedia PDF Downloads 4435930 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems
Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang
Abstract:
To address the low spectral efficiency of hybrid precoding (HP) in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iterating the digital encoding matrix. First, the objective function is expanded to obtain a relation equation, and a pseudo-inverse iterative function for the analog encoder is derived using the pseudo-inverse method; this avoids the large increase in computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Second, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, the equivalent channel is factorized by Singular Value Decomposition (SVD) to obtain the digital coding matrix, and the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms and also improves stability. Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel
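A minimal NumPy sketch of the equivalent-channel idea is given below: the analog precoder is folded into the channel, and an SVD of the resulting equivalent channel yields the digital precoder. This is a generic narrowband illustration, not the authors' exact iteration; the analog-precoder construction (a simple phase-only projection here, standing in for the paper's pseudo-inverse iteration), the placeholder channel, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, Nrf, Ns = 32, 8, 4, 2      # TX/RX antennas, RF chains, data streams

# Placeholder narrowband channel (a sparse geometric mmWave model could be used instead).
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Analog precoder: phase-only (constant-modulus) approximation of the dominant
# right singular vectors of the channel.
_, _, Vh = np.linalg.svd(H)
F_rf = np.exp(1j * np.angle(Vh.conj().T[:, :Nrf])) / np.sqrt(Nt)

# Equivalent channel = physical channel combined with the analog precoder.
H_eq = H @ F_rf

# Digital precoder from the SVD of the equivalent channel.
_, _, Vh_eq = np.linalg.svd(H_eq)
F_bb = Vh_eq.conj().T[:, :Ns]

# Normalize so the total transmit power matches the fully digital case.
F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, 'fro')

# Spectral efficiency of the hybrid precoder at a given SNR (equal power per stream).
snr = 10.0
F = F_rf @ F_bb
se = np.log2(np.linalg.det(
    np.eye(Nr) + (snr / Ns) * H @ F @ F.conj().T @ H.conj().T).real)
print("hybrid precoding spectral efficiency (bits/s/Hz):", se)
```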
Procedia PDF Downloads 97