Search results for: fuzzy partitioning problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7740

5820 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries, and one route to energy efficiency is the improvement and tightening of design standards. State standards accordingly place high demands on the thermal protection of buildings. The arrangement of the layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in the walls must be regarded as a defective condition, since moisture significantly reduces the physical, mechanical, and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction, and of prediction of the behavior of structures during real-world operation, leads to increased heat loss and premature aging of structures. Method: To solve this problem, mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model takes into account the coupled heat and mass transfer equations of the layers [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture content of the material. In this case, experimental determination of the coefficients of a freezing or thawing material becomes much more difficult. Therefore, this paper proposes an approximate method for calculating the thermal conductivity and moisture permeability characteristics of a freezing or thawing material. Questions:
The methods developed for solving the inverse problem of the mathematical model answer questions closely related to the rational design of building envelopes: Where is the condensation zone in the body of the multilayer envelope? How and where should insulation rationally be placed? What constructive measures are necessary to provide for the removal of moisture from the structure? What temperature and humidity conditions must the enclosing structure maintain for the normal operation of the premises? What is the longevity of the structure in terms of the frost resistance of its component materials? Tasks: The proposed mathematical model makes it possible: to assess the thermophysical condition of designed structures under different operating conditions and to select appropriate material layers; to calculate the temperature field in structurally complex multilayer structures; to determine, from temperature and moisture measurements at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; to significantly reduce laboratory testing time, eliminating the need for a climatic chamber and for expensive instrumented experiments; and to simulate real-life situations arising in multilayer enclosing structures that involve freezing, thawing, drying, and cooling of any layer of building material.

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 387
5819 Physics-Informed Machine Learning for Displacement Estimation in Solid Mechanics Problem

Authors: Feng Yang

Abstract:

Machine learning (ML), especially deep learning (DL), has been extensively applied in recent years and has achieved great success in solving many different problems, including scientific ones. However, conventional ML/DL methodologies are purely data-driven and have limitations such as the need for an ample amount of labelled training data, a lack of consistency with physical principles, and a lack of generalizability to new problems/domains. Recently, a consensus has been growing that ML models need to take further advantage of prior knowledge to deal with these limitations. Physics-informed machine learning, which aims to integrate physics/domain knowledge into ML, has been recognized as an emerging area of research, especially in the last two to three years. In this work, physics-informed ML, specifically a physics-informed neural network (NN), is employed and implemented to estimate the displacements in the x, y, and z directions in a solid mechanics problem governed by equilibrium equations with boundary conditions. By incorporating the physics (i.e., the equilibrium equations) into the learning process of the NN, it is shown that the NN can be trained very efficiently with a small set of labelled training data. Experiments with different settings of the NN model and different amounts of labelled training data were conducted, and the results show that very high accuracy can be achieved both in satisfying the equilibrium equations and in predicting the displacements; e.g., with an overall displacement setting of 0.1, a root mean square error (RMSE) of 2.09 × 10⁻⁴ was achieved.
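The core idea, that a physics residual term can substitute for most labelled data, can be sketched in a few lines. The toy problem below is hypothetical and much simpler than the paper's 3-D elasticity setup: a one-parameter trial displacement u(x) = a·x·(1 − x) is fit to the 1-D equilibrium equation u''(x) + 2 = 0 with u(0) = u(1) = 0, whose exact solution is a = 1.

```python
import numpy as np

# Toy physics-informed fit (hypothetical; not the paper's 3-D elasticity
# setup). A one-parameter trial displacement u(x) = a * x * (1 - x) is
# fit to the 1-D equilibrium equation u''(x) + 2 = 0 with u(0) = u(1) = 0.

def physics_loss(a):
    # Squared residual of the equilibrium equation; for the trial
    # function, u'' = -2a everywhere, so the residual is -2a + 2.
    return (-2.0 * a + 2.0) ** 2

def data_loss(a, xs, u_obs):
    # Ordinary supervised loss on observed displacements.
    u_pred = a * xs * (1.0 - xs)
    return np.mean((u_pred - u_obs) ** 2)

def train(xs, u_obs, lr=0.1, steps=200, w_phys=1.0):
    # Gradient descent on the combined loss; the gradient is taken
    # numerically here (a real PINN uses automatic differentiation).
    a, eps = 0.0, 1e-6
    total = lambda av: data_loss(av, xs, u_obs) + w_phys * physics_loss(av)
    for _ in range(steps):
        a -= lr * (total(a + eps) - total(a - eps)) / (2 * eps)
    return a

xs = np.linspace(0.0, 1.0, 3)        # only three labelled points
u_obs = xs * (1.0 - xs)              # observations from the exact solution
a_hat = train(xs, u_obs)
```

Because the physics term constrains the parameter everywhere in the domain, three labelled points suffice here, mirroring the paper's observation that the physics-informed loss sharply reduces the amount of labelled training data needed.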

Keywords: deep learning, neural network, physics-informed machine learning, solid mechanics

Procedia PDF Downloads 141
5818 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture across patients, demographics, and imaging devices and settings.
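The contrasting feature-difference idea can be illustrated with simple intensity statistics (stand-ins for the paper's full intensity/texture feature set): describing a lesion by the difference between features computed inside the lesion and in the surrounding normal tissue cancels patient- and scanner-specific intensity offsets.

```python
import numpy as np

# Illustrative sketch of the contrasting feature-difference approach.
# The three statistics below are simple stand-ins, not the paper's
# exact feature set.

def region_features(pixels):
    # A few simple intensity statistics for one region.
    return np.array([pixels.mean(), pixels.std(), np.percentile(pixels, 90)])

def difference_descriptor(image, lesion_mask):
    lesion = image[lesion_mask]
    surround = image[~lesion_mask]
    return region_features(lesion) - region_features(surround)

# Two scans of the same synthetic "patient" whose scanners differ by a
# global intensity offset:
rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, size=(32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
image[mask] += 30.0                  # brighter lesion region

shifted = image + 50.0               # same anatomy, different scanner

d1 = difference_descriptor(image, mask)
d2 = difference_descriptor(shifted, mask)
```

The two descriptors are identical even though the absolute intensities differ, which is what lets a classifier trained on such descriptors generalize across patients and imaging settings.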

Keywords: CAD system, difference of feature, fuzzy c means, lesion detection, liver segmentation

Procedia PDF Downloads 313
5817 Climate Change Law and Transnational Corporations

Authors: Manuel Jose Oyson

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must “both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts.” The IPCC observed “with high confidence” a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which “were the highest in human history”; if left unchecked, this rise will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states, however, and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with contemporary international law’s acknowledgment of an international role for other international persons, including TNCs, and departs from the traditional “state-centric” response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs “are not bound by national borders” and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers, and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international or municipal law.
Instead, recourse to “soft law” and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help address climate change. Various studies have noted the positive effects of voluntary approaches, and TNCs have in recent decades voluntarily committed to “soft law” international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of “small, offbeat companies”, it has now become part of the mainstream. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and to helping address climate change as part of their CSR. One, as a serious “global commons problem”, climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large share of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Even assuming, arguendo, that TNCs had not strongly contributed to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge base, and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.

Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations

Procedia PDF Downloads 340
5816 Variable Mapping: From Bibliometrics to Implications

Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek

Abstract:

Literature review is indispensable in research, and one of its key techniques is bibliometric analysis, in which one method is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations, and it is also used in reviews of the marketing literature. With the development of technology, researchers and practitioners use the capabilities of software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications supports the implementation of literature reviews and is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing such reviews remains a painstaking task, especially when authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify the extent to which a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. For the analysis, a set of five articles on customer ideation was chosen. The keyword mapping results produced by the VOSviewer science mapping software were then compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed usability at both the research problem formulation stage and the content/thematic analysis stage. The results show the advantage of variable mapping in research problem formulation and thematic/content analysis.
First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story, with an indication of the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its review. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, software enabling the automation of the variable mapping process on large data sets could be a breakthrough in the conduct of literature research.

Keywords: bibliometrics, literature review, science mapping, variable mapping

Procedia PDF Downloads 113
5815 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations of these systems have increased incredibly in parallel. Meeting the demands of a mission is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using vehicles of the same type together as a swarm is especially helpful for satisfying the time constraints of missions effectively; in other words, it allows the workload to be shared among a number of homogeneous platforms. Beyond this, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and such cooperation brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle that lacks position information is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, to assist the ground vehicle, an autonomous aerial drone is used to solve the problem cooperatively. The drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without any external support or input, relying only on the capabilities of the two autonomous vehicles.
To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative operation of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 121
5814 Soft Computing Approach for Diagnosis of Lassa Fever

Authors: Roseline Oghogho Osaseri, Osaseri E. I.

Abstract:

Lassa fever is an epidemic hemorrhagic fever caused by the Lassa virus, an extremely virulent arenavirus. This highly fatal disorder kills 10% to 50% of its victims, but those who survive its early stages usually recover and acquire immunity to secondary attacks. One of the major challenges in giving proper treatment is the lack of fast and accurate diagnosis, owing to the multiplicity of symptoms associated with the disease, which can resemble other clinical conditions and make early diagnosis difficult. This paper proposes an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the prediction of Lassa fever. In the design of the diagnostic system, four main attributes were considered as input parameters, with one output parameter. The input parameters are Temperature on Admission (TA), White Blood Count (WBC), Proteinuria (P), and Abdominal Pain (AP). Sixty-one percent of the dataset was used in training the system, and the remainder in testing. Experimental results from this study gave reliable and accurate predictions of Lassa fever when compared with clinically confirmed cases. The proposed Lassa fever diagnostic system is intended to aid surgeons and medical healthcare practitioners in health care facilities without ready access to Polymerase Chain Reaction (PCR) diagnosis in predicting possible Lassa fever infection.
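ANFIS tunes the membership functions and rule consequents of a Sugeno-type fuzzy system from training data. The untrained inference step it builds on can be sketched as follows; the single input (temperature on admission), the membership ranges, and the risk consequents below are invented for illustration and are not the paper's four-input trained system.

```python
def trimf(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(temp):
    # Two zero-order Sugeno rules (illustrative values only):
    #   IF temp is "normal" THEN risk = 0.1
    #   IF temp is "high"   THEN risk = 0.9
    w_normal = trimf(temp, 35.0, 37.0, 39.0)
    w_high = trimf(temp, 37.0, 40.0, 43.0)
    total = w_normal + w_high
    if total == 0.0:
        return 0.0
    # Defuzzification: weighted average of the rule consequents.
    return (w_normal * 0.1 + w_high * 0.9) / total

risk_low = infer(37.0)    # fully "normal" membership -> 0.1
risk_high = infer(40.0)   # fully "high" membership   -> 0.9
```

In an ANFIS, the membership parameters (35, 37, 39, ...) and the consequents (0.1, 0.9) are the quantities learned from the training portion of the dataset rather than fixed by hand.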

Keywords: anfis, lassa fever, medical diagnosis, soft computing

Procedia PDF Downloads 254
5813 The Problem of Reconciling the Principle of Confidentiality in Foreign Investment Arbitration with the Public Interest

Authors: Bárbara Magalhães Bravo, Cláudia Figueiras

Abstract:

Economic globalization, through the liberalization of markets and capital, has boosted the economic development of nations and the need to resolve the disputes arising from foreign investment. Arbitration, with its inherent advantages such as swiftness, arbitrators’ specialised skills, and impartiality, is a pacifying tool for the interests at stake. With the public interest safeguarded, we face the problem of confidentiality in arbitration. The development of compelling mechanisms for transparency and for the guarantee and protection of the interests at stake is urgent. Through a bibliographic review, we survey the state of the art, examining the several solutions proposed and pointing out the most suitable one. Through jurisprudential analysis, we identify a solution to the confidentiality/public interest conflict. Transparency, inextricable from the public interest, demands that the arbitration process be open to all citizens. Transparency rules have been considered at UNCITRAL in an attempt to reconcile the need for publicity with the public interest, but they remain insufficient. The arbitration of foreign investment carries consequences for the citizens of the state. Mechanisms articulating the secrecy of arbitral procedures with the public interest should be adopted. The arbitration of foreign investment, being a tertium genus between international arbitration and administrative arbitration, would require its own regulation in each state, in which the confidentiality rules and their exceptions could be identified. One should inquire where the limit of the protection of citizens’ individual rights lies, and where the public interest should give way to the principle of transparency.

Keywords: arbitration, foreign investment, transparency, confidentiality, International Centre for Settlement of Investment Disputes, UNCITRAL

Procedia PDF Downloads 203
5812 Root Cause Analysis of Excessive Vibration in a Feeder Pump of a Large Thermal Electric Power Plant: A Simulation Approach

Authors: Kavindan Balakrishnan

Abstract:

Identifying the root cause of the vibration phenomenon in a feedwater pumping station was the main objective of this research. First, the mode shapes of the pumping structure were investigated using numerical and analytical methods. Then the flow pressure and streamline distribution in the pump sump, hypothesized to be a cause of vibration in the pumping station, were examined using CFD simulation. As the problem specification states, four parallel pumps operate at the same time, and heavy vibration was recorded even after several maintenance steps, with relatively large vibration amplitudes excited by pumps 1 and 4 while the others remained normal. The focus of this research was therefore on determining the cause of such a mode of vibration in the pump station with the assistance of finite element analysis tools and analytical methods. The research identified structural behaviors favorable to the vibration pattern observed in the pumping structure. The numerical and analytical models of the pump structure have similar characteristics in their mode shapes, particularly in the second mode shape, which relates closely to the stated problem. Since this study also reveals several flow features in the pump sump model that may be favorable causes of vibration in the system, there is room for further investigation of the flow conditions related to pump vibrations.
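The modal analysis step, finding the natural frequencies and mode shapes numerically, amounts to the generalized eigenvalue problem K·φ = ω²·M·φ. The two-degree-of-freedom mass and stiffness values below are invented for illustration and are not the pumping station's finite element model.

```python
import numpy as np

# Lumped two-mass model: natural frequencies and mode shapes from the
# generalized eigenvalue problem K @ phi = omega^2 * M @ phi.
M = np.diag([2.0, 1.0])                      # lumped masses
K = np.array([[3.0, -1.0],
              [-1.0, 1.0]])                  # spring stiffness matrix

# Reduce to a symmetric standard eigenproblem via M^(-1/2) K M^(-1/2),
# which is valid here because M is diagonal and positive.
Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
evals, evecs = np.linalg.eigh(Minv_sqrt @ K @ Minv_sqrt)

omegas = np.sqrt(evals)                      # natural frequencies (rad/s)
modes = Minv_sqrt @ evecs                    # mode shapes, one per column
```

Comparing computed mode shapes of this kind against the measured vibration pattern is what links a particular mode (here, the second mode shape in the study) to the recorded vibration.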

Keywords: vibration, simulation, analysis, Ansys, Matlab, mode shapes, pressure distribution, structure

Procedia PDF Downloads 117
5811 Moderating Effect of Owner's Influence on the Relationship between the Probability of Client Failure and Going Concern Opinion Issuance

Authors: Mohammad Noor Hisham Osman, Ahmed Razman Abdul Latiff, Zaidi Mat Daud, Zulkarnain Muhamad Sori

Abstract:

The problem that Malaysian auditors do not issue going concern opinions (GC opinions) to seriously financially distressed companies is still a pressing issue. Policy makers, particularly the Financial Statement Review Committee (FSRC) of the Malaysian Institute of Accountants, raised this issue as early as 2009. Similar problems have occurred in the US, the UK, and many developing countries. It is important for auditors to issue GC opinions properly, because such an opinion is a signal about the viability of a company that is much needed by stakeholders. There are at least two unanswered questions, or research gaps, in the literature on the determinants of GC opinions. First, is a client’s probability of failure associated with GC opinion issuance? Second, to what extent do influential owners (management, family, and institutions) moderate the association between client probability of failure and GC opinion issuance? The objective of this study is therefore twofold: (1) to examine the extent of the relationship between the probability of client failure and the issuance of GC opinions, and (2) to examine the extent to which the levels of management, family, and institutional ownership moderate this association. This study is quantitative in nature, and the data sources are secondary (mainly companies’ annual reports). A total of four hypotheses were developed and tested on data accumulated from the annual reports of seriously financially distressed Malaysian public listed companies. Data from 2006 to 2012, on a sample of 644 observations, were analyzed using panel logistic regression. It is found that certainty (rather than probability) of client failure affects the issuance of GC opinions. In addition, only the level of family ownership positively moderates the relationship between client probability of failure and GC opinion issuance.
This study contributes to the auditing literature, as its findings can enhance our understanding of audit quality, particularly of the variables associated with the issuance of GC opinions. The findings shed light on the role of family owners in the GC opinion issuance process, which opens the way for researchers to suggest measures to tackle the problem of auditors being unwilling to issue GC opinions to financially distressed clients. The suggested measures can be useful to policy makers in formulating future promulgations.

Keywords: audit quality, auditing, auditor characteristics, going concern opinion, Malaysia

Procedia PDF Downloads 253
5810 Development of Industry Oriented Undergraduate Research Program

Authors: Sung Ryong Kim, Hyung Sup Han, Jae-Yup Kim

Abstract:

Many engineering students feel uncomfortable solving industry-related problems. There are many ways to strengthen engineering students’ ability to solve the problems they will be assigned once they get a job. Korea National University of Transportation has developed an industry-oriented undergraduate research program (URP). The URP is designed to provide engineering students with the experience of solving a company’s research problem. Each URP project is carried out over six months by a team consisting of one company mentor, one professor, and three to four engineering students. Teams of different majors are strongly encouraged, in order to integrate the perspectives of a multidisciplinary background. The corporate research projects proposed by companies are chosen by student teams in the related majors. The company mentor gives the students the detailed technical background of the project and provides basic data, raw materials, and so forth; the company also allows students to use its research equipment. The assigned professor adjusts the project scope and level to the students’ abilities after discussion with the company mentor. Monthly meetings are used to check progress, exchange ideas, and help the students. The program has proven to be an effective form of engineering education, not only providing the experience of company research but also motivating the students in their coursework. It offers a premier interdisciplinary platform for undergraduate students to tackle the practical challenges encountered in companies related to their majors, and it is especially helpful for students who want to get a job at the company that proposed their project.

Keywords: company mentor, industry oriented, interdisciplinary platform, undergraduate research program

Procedia PDF Downloads 239
5809 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs

Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang

Abstract:

With more and more Drosophila Driver and Neuron images available, finding the similarity relationships among them for functional inference is an important task. A general problem is how to find a Drosophila Driver image that can cover a set of Drosophila Driver/Neuron images. To solve this problem, the intersection/union region for a set of images is computed first; a comparison step then calculates the similarities between this region and other images. In this paper, three encoding schemes, namely Integer, Boolean, and Decimal, are proposed to encode each image as a one-dimensional structure. The intersection/union region of these images can then be computed using comparison operations, Boolean operators, and a lookup-table method. Finally, the comparison step is performed as a union region computation, and the similarity score is calculated as the Tanimoto coefficient. The region computation methods are also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the Boolean scheme performs best; in the region computation phase, the Decimal scheme performs best when the number of images is large. A speedup ratio of 12 can be achieved on 16 CPU cores. This work was supported by the Ministry of Science and Technology under grant MOST 106-2221-E-182-070.
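The Boolean scheme can be illustrated with Python integers as bit masks (a sketch of the idea, not the paper's OpenMP implementation): each voxel of a binarized image becomes one bit, so the intersection/union of a set of images reduces to bitwise AND/OR, and the similarity score is the Tanimoto coefficient.

```python
def intersection(codes):
    # Bitwise AND across all encoded images.
    out = codes[0]
    for c in codes[1:]:
        out &= c
    return out

def union(codes):
    # Bitwise OR across all encoded images.
    out = codes[0]
    for c in codes[1:]:
        out |= c
    return out

def tanimoto(a, b):
    # |A AND B| / |A OR B| over the set bits.
    inter = bin(a & b).count("1")
    uni = bin(a | b).count("1")
    return inter / uni if uni else 0.0

# Three tiny 8-voxel "images", one bit per voxel:
imgs = [0b11100110, 0b01100111, 0b01110110]
common = intersection(imgs)   # voxels lit in every image
cover = union(imgs)           # voxels lit in at least one image
score = tanimoto(imgs[0], imgs[1])
```

Because AND/OR over machine words process many voxels per instruction, and independent words can go to different threads, this encoding is what makes the OpenMP parallelization across CPU cores effective.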

Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP

Procedia PDF Downloads 229
5808 Teaching, Learning and Evaluation Enhancement of Information Communication Technology Education in Schools through Pedagogical and E-Learning Techniques in the Sri Lankan Context

Authors: M. G. N. A. S. Fernando

Abstract:

This study uses a research framework to improve the quality of ICT education and the teaching, learning, and assessment/evaluation (TLA/TLE) process. It utilizes existing resources while improving the methodologies, along with the pedagogical techniques and e-learning approaches used in the secondary schools of Sri Lanka. The study was carried out in two phases. Phase I focused on investigating the factors that affect the quality of ICT education. Based on the key factors identified in Phase I, Phase II focused on the design of an experimental application model with six activity levels. Each level in the activity model covers one or more levels of the revised Bloom’s taxonomy. To further enhance the activity levels, other pedagogical techniques (activity-based learning, e-learning techniques, problem-solving activities, peer discussions, etc.) were incorporated into each level as appropriate. The application model was validated by a panel of teachers, including a domain expert, and was also tested in the school environment. The validity of the performance gains was established by testing six hypotheses, among other methods. The analysis shows that student performance on problem-solving activities increased by 19.5% due to the different treatment levels used. It was also shown that, compared with the existing process, the embedded techniques (a mixture of traditional and modern pedagogical methods and their applications) are more effective for the skills development of teachers and students.

Keywords: activity models, Bloom’s taxonomy, ICT education, pedagogies

Procedia PDF Downloads 155
5807 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems that can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method at best maintained the same runtime as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing, since it requires only one synchronization point per iteration instead of the two required by standard CG. Both the standard and pipelined CG methods need the vector entries generated by the current GPU and by the other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach for minimizing the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method, with its single synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU, allows scalability in solving large linear systems. The algorithm is implemented with a combination of technologies: MPI, OpenMP, and CUDA. We show that a nearly optimal speedup can be reached on 8 CPUs/2 GPUs (relative to a one-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs compared with one GPU.
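The pipelined recurrence can be sketched serially in NumPy (illustrative; the paper's implementation distributes this over MPI/OpenMP/CUDA). The two dot products of an iteration are grouped so that, on a cluster, they would form a single global reduction overlapped with the matrix-vector product q = A·w, while the recurrences for s = A·p and z = A·s remove the second synchronization point of classical CG.

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-10, maxiter=200):
    """Pipelined conjugate gradient (Ghysels/Vanroose-style, without a
    preconditioner). gamma and delta together form the single global
    reduction per iteration; q = A @ w can be overlapped with it."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = np.zeros_like(b)
    s = np.zeros_like(b)
    p = np.zeros_like(b)
    alpha, gamma_old, beta = 1.0, 1.0, 0.0
    for i in range(maxiter):
        gamma = r @ r                 # these two dot products are the
        delta = w @ r                 # iteration's only reduction
        if np.sqrt(gamma) < tol:
            break
        q = A @ w                     # overlappable with the reduction
        if i > 0:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        else:
            alpha = gamma / delta
        z = q + beta * z              # z = A @ s, by recurrence
        s = w + beta * s              # s = A @ p, by recurrence
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z             # w = A @ r, by recurrence
        gamma_old = gamma
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test system
b = np.array([1.0, 2.0])
x = pipelined_cg(A, b)
```

In exact arithmetic this produces the same iterates as classical CG; the payoff on a distributed system is that each iteration needs one reduction instead of two, and that reduction can hide behind the extra matrix-vector product.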

Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm

Procedia PDF Downloads 150
5806 The Morality of the Sensitive in Adorno: Suffering and Recognition in the Mimesis Model

Authors: Talita Cavaignac

Abstract:

Adorno's critique of totality, especially in a split society marked by reification, also rests on the impossibility of generalizing normative principles. Given the unfeasibility of normative universalizations, what conditions can justify the possibility of criticism and normativity in Adorno's thought? If reason itself is still entangled in the alienation that follows from the model of the domination of nature, how is a critical theory possible? In political terms, if the notion of totality is challenged by the critique of identity, how can Adorno maintain the ideal of liberation and of reconciliation between private interests and the possibility of some sort of ethics, without giving up a materialist theory of society and without betting on a necessary link between redemption and history? Faced with this complex of questions, we reflect on the sense in which the notion of 'suffering' could help address the epistemological problem of the foundations of criticism in Adorno's work. The idea is that, in contrast to a universalizable model of justice, Adorno mobilizes in the notion of 'suffering' a gateway to the critical reflection of society. He thus develops an approach to moral problems through the perspective of the sensuous body: fear, pain, and somatic factors. Nevertheless, owing to his attention to damaged experience and to the constitution of subjectivity (a sense in which the concept of mimesis continues to stand out), we understand suffering as the expression of an objective reification. Following other authors, the intention is to think through how the resources linked to the idea of 'suffering' in Adorno's writings engage the problem of morality and the contradictions between universal and particular (articulated in Hegel's tradition).

Keywords: ethics, morality, sensitive, Theodor Adorno

Procedia PDF Downloads 128
5805 Computer Network Applications, Practical Implementations and Structural Control System Representations

Authors: El Miloudi Djelloul

Abstract:

Computer networks play an important role in the practical implementation of many different systems. To implement a system on a network, it is above all necessary to know all the configurations that form part of the system and to provide adequate information and solutions in real time. If such a system is to be implemented, for example, in a school or a similar institution, the first step is to analyse the types of model that need to be configured; another important step is to organise the work at the level of the devices that form part of the overall system. Before configuration, an important point is the description and documentation of all the work in the respective process, organised from a problem-solving perspective. Because computer networks are critical infrastructure with very specific requirements, this paper presents, on the one hand, effective solutions from a structural point of view and, on the other, the advantages of modelling and block-schema presentations as a better alternative for solving specific problems arising from the continual disturbances of the system caused by devices, programs, signals and packet collisions moving from one computer node to other nodes.

Keywords: local area networks (LANs), block schema presentations, computer network system, computer node, critical infrastructure, packet collisions, structural control system representations, modeling, implementations

Procedia PDF Downloads 349
5804 Analytical Studies on Subgrade Soil Using Jute Geotextile

Authors: A. Vinod Kumar, G. Sunny Deol, Rakesh Kumar, B. Chandra

Abstract:

Application of fiber reinforcement in road construction is gaining interest as a means of enhancing soil strength. In this study, a natural geotextile material obtained from gunny bags was used because of its wide local availability. Constructing flexible pavement on weaker soils such as clays is a significant problem in both construction and design, due to their expansive characteristics. Jute geotextile (JGT) was used in the foundation layer of flexible pavements on rural roads. The problem is addressed by increasing the subgrade strength, which allows the sub-base layer thickness to be reduced while improving the overall pavement strength characteristics, ultimately reducing the cost of construction and leading to an economical design. California bearing ratio (CBR), unconfined compressive strength (UCS) and triaxial laboratory tests were conducted on two different soil samples, CI and MI. The weaker soil was reinforced with JGT, JGT+bitumen and JGT+polythene sheet, placed at varying heights during the laboratory tests. Subgrade strength was evaluated by conducting soaked CBR tests in the laboratory on the clayey and silty soils. The laboratory results reveal that the reinforced soaked CBR value observed was 10.35% for the clayey soil (CI) and 15.6% for the silty soil (MI). This study intends to develop a new technique of reinforcing weaker soils with JGT, varying its parameters, for low-volume flexible pavements. It was observed that the performance of JGT is inferior when used with bitumen or polyethylene sheets.

Keywords: CBR, jute geotextile, low volume road, weaker soil

Procedia PDF Downloads 428
5803 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iteration of the digital precoding matrix. First, the objective function is expanded to obtain the relation equation, and a pseudo-inverse iterative function for the analog precoder is derived using the pseudo-inverse method; this resolves the large increase in computation caused by the rank deficiency of the digital precoding matrix and reduces the overall complexity of hybrid precoding. Second, the analog precoding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, the equivalent channel is subjected to singular value decomposition (SVD) to obtain the digital precoding matrix, and the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog precoding matrix. Simulation results show that the proposed algorithm improves the system spectral efficiency by 10 to 20% compared with other algorithms, and its stability is also improved.
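
The equivalent-channel step can be sketched as follows: fold the analog precoder into the channel, then obtain the digital (baseband) precoder from an SVD of that equivalent channel. The matrix sizes, the random phase-only analog precoder and the function name are hypothetical, and the paper's pseudo-inverse iteration is omitted.

```python
import numpy as np

def digital_precoder_from_equivalent_channel(H, F_rf, n_s):
    """Illustrative equivalent-channel step: H is the (n_rx, n_tx) channel,
    F_rf the (n_tx, n_rf) constant-modulus analog precoder, n_s the number
    of data streams."""
    H_eq = H @ F_rf                      # equivalent channel seen at baseband
    _, _, Vh = np.linalg.svd(H_eq)
    F_bb = Vh.conj().T[:, :n_s]          # leading right singular vectors
    # normalise so the hybrid precoder meets the total power constraint
    F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, "fro")
    return F_bb

# hypothetical sizes for a small example
n_rx, n_tx, n_rf, n_s = 4, 16, 4, 2
rng = np.random.default_rng(1)
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
# phase-only (constant-modulus) analog precoder entries
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_tx, n_rf))) / np.sqrt(n_tx)
F_bb = digital_precoder_from_equivalent_channel(H, F_rf, n_s)
```

The normalisation enforces ||F_rf F_bb||_F^2 = n_s, the usual hybrid-precoding power constraint.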

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 85
5802 Academic Staff Development: A Lever to Address the Challenges of the 21st Century University Classroom

Authors: Severino Machingambi

Abstract:

Most academics entering higher education as lecturers in South Africa do not have qualifications in education or teaching. This creates serious problems, since they are not sufficiently equipped with the pedagogical approaches and theories that should inform their facilitation of learning. This, arguably, is one of the reasons why higher education institutions are experiencing high student failure rates. To mitigate this problem, it is critical that higher education institutions devise internal academic staff development programmes to equip academics with pedagogical skills and competencies and so enhance the quality of student learning. This paper reports on how the Teaching and Learning Development Centre of a university of technology used design-based research methodology to conceptualise and implement an academic staff development programme for new academics. This approach revolves around designing, testing and refining an educational intervention, and design-based research is an important methodology for understanding how, when, and why educational innovations work in practice. The need for a professional development course arose because most academics at the university did not have teaching qualifications; many were employed straight from industry with little understanding of pedagogical approaches. This paper examines three key aspects of the programme: the preliminary phase, the teaching experiment and the retrospective analysis. The preliminary phase is the stage in which problem identification takes place. The problem that this research sought to address relates to the unsatisfactory academic performance of the majority of the students in the institution. It was therefore hypothesized that the problem could be addressed by professionalising new academics through an academic staff development programme.
The teaching experiment phase afforded the researchers and the participants the opportunity to test and refine the proposed intervention and the design principles on which it was based. It revolved around testing the new academics' professional development programme and created a platform for the researchers and the academics on the programme to experiment with various activities and instructional strategies, such as case studies, observations, discussions and portfolio building. The teaching experiment was followed by the retrospective analysis stage, in which the research team looked back and tried to give a trustworthy account of the teaching and learning process that had taken place. A questionnaire and focus group discussions were used to collect data from participants, which helped to evaluate the programme and its implementation. One finding of this study was that academics joining a university need an induction programme that inducts them into the discourse of teaching and learning. The study also revealed that existing academics can be placed on formal study programmes through which they acquire educational qualifications, equipping them with useful classroom discourses. The study therefore concludes that new and existing academics in universities should be supported through induction programmes and placement on formal studies in teaching and learning so that they are capacitated as facilitators of learning.

Keywords: academic staff, pedagogy, programme, staff development

Procedia PDF Downloads 123
5801 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas fueled cars with EVs reduces carbon emission. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand caused by EVs in the energy systems. To make EVs even a greener solution for the future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This will help serve the energy demand of EVs through the green energy of solar panels. To ensure energy availability, solar panels are combined with batteries. The energy surplus at any point is stored in batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions, due to battery prices. To make the system viable, we should size the battery economically and operate the system optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating system. In this work, we provide a mathematical model for this problem and we compute the return on investment (ROI) of such a system, which is designed to be ecologically and financially optimal. We also quantify the minimum required investment in terms of battery and solar panels along with the operating strategy to ensure that a charging station has enough energy to serve its EV demand at any time.
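
The battery operation described above can be sketched as a deliberately simplified greedy dispatch rule: store any solar surplus, cover deficits first from the battery and only then from the grid. This is only an illustration of the state dynamics, not the optimal operating strategy computed in the paper; the hourly energy series, units and function name are invented, and losses are ignored.

```python
def simulate_station(solar, demand, capacity, soc0=0.0):
    """Greedy dispatch for a solar + battery EV charging station.

    solar, demand : per-hour energy series (e.g. kWh)
    capacity      : battery capacity (same units)
    Returns total grid energy drawn and the final state of charge.
    """
    soc = soc0
    grid_energy = 0.0
    for pv, ev in zip(solar, demand):
        surplus = pv - ev
        if surplus >= 0:
            soc = min(capacity, soc + surplus)   # store the surplus
        else:
            draw = min(soc, -surplus)            # discharge the battery
            soc -= draw
            grid_energy += -surplus - draw       # remainder from the grid
    return grid_energy, soc
```

Sweeping `capacity` in such a simulation over stochastic solar and arrival scenarios is one simple way to trade battery investment against grid energy drawn.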

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 210
5800 A Radiographic Superimposition in Orthognathic Surgery of Class III Skeletal Malocclusion

Authors: Albert Suryaprawira

Abstract:

Patients requiring correction of a severe Class III skeletal discrepancy have historically been among the most challenging cases for orthodontists, and correcting both the aesthetic and the functional problem is crucially important. This is a case report of an adult male aged 18 years who complained of difficulty in chewing and speaking. The patient had a prominent profile with mandibular excess. A pre-treatment cephalometric radiograph was taken to analyse the skeletal problem, measure the required amount of bone movement and predict the soft-tissue response. A panoramic radiograph was also taken to assess bone quality, bone abnormalities, third-molar impaction, etc. Before the surgery, a pre-surgical cephalometric radiograph was taken to re-evaluate the plan and settle the final amount of bone to be cut. After the surgery, a post-surgical cephalometric radiograph was taken to compare the result with the plan. Superimposition of these radiographs, including the cranial base, maxilla and mandible, was performed to analyse the outcome. Superimposition is important for describing the amount of hard- and soft-tissue movement and for predicting the possibility of relapse after surgery. The patient needs to understand the surgical plan, the expected outcome and relapse prevention. The surgery consisted of mandibular setback by bilateral sagittal split osteotomies. Although the discrepancy was severe, an aesthetically pleasing and stable result was achieved using this combination of treatment and radiographic superimposition.

Keywords: cephalometric, mandibular set back, orthognathic, superimposition

Procedia PDF Downloads 252
5799 Modelling Mode Choice Behaviour Using Cloud Theory

Authors: Leah Wright, Trevor Townsend

Abstract:

Mode choice models are crucial instruments in the analysis of travel behaviour. These models describe the relationship between an individual's choice of transportation mode for a given O-D pair and the individual's socioeconomic characteristics, such as household size, income level, age and/or gender, together with the features of the transportation system. The most popular functional forms of these models are based on utility-based choice theory, which addresses the uncertainty in the decision-making process with an error term. However, with the development of artificial intelligence, many researchers have taken different approaches to travel demand modelling; in recent times, neural networks, fuzzy logic and rough set theory have been used to develop improved mode choice formulations. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research aims to investigate the use of cloud theory in mode choice models, and this paper presents the conceptual framework of a mode choice model based on cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art. The cloud theory model is expected to address the issues and concerns associated with the nested logit and to improve the design of mode choice models and their use in travel demand modelling.
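
The way cloud theory couples randomness and fuzziness can be illustrated with the standard forward normal cloud generator, defined by an expectation Ex, an entropy En and a hyper-entropy He; this is a generic sketch of the cloud model, not the mode choice formulation proposed in the paper, and the parameter values below are arbitrary.

```python
import numpy as np

def normal_cloud(ex, en, he, n_drops=1000, seed=0):
    """Forward normal cloud generator: draws n_drops 'cloud drops' and
    their membership degrees. A randomised entropy (controlled by he)
    makes the membership of each drop itself uncertain."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, n_drops)              # randomised entropy
    x = rng.normal(ex, np.abs(en_prime))                # cloud drops
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))   # membership degree
    return x, mu
```

In a mode choice setting, each drop could represent one realisation of a traveller's perceived utility, with mu expressing fuzzy membership in the "choose this mode" concept.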

Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty

Procedia PDF Downloads 373
5798 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam

Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard

Abstract:

Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones, and it is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province in Vietnam. The Cai River also has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, changes in river discharge, widening of the river mouth and a proposed saltwater intrusion prevention dam can all influence saltwater intrusion, but their effects had not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effect of the aforementioned factors. The analytical model was based on the model proposed by Savenije and was calibrated using limited in situ data; the numerical model was a 3D hydrodynamic model built with the Delft3D4 software. Both models agreed with the in situ data, mostly for tidally averaged values, and both indicated a roughly similar dependence on discharge, agreeing that this parameter has the most severe influence on the modelled saltwater intrusion. Especially for discharges below 10 m³/s, saltwater was predicted to reach further than 10 km upstream. In both models, sea-level rise and river-mouth widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The sea-level rise predicted for 2070 was simulated to increase the saltwater intrusion length by 0.5 km. Furthermore, the effect of the saltwater intrusion prevention dam appeared significant in the model used, but only for the highest position of the gate.
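
The strong sensitivity to discharge can be illustrated with a much simpler model than either of those used in the study: a steady-state 1-D advection-dispersion balance, in which salinity decays as S(x) = S_sea * exp(-Q x / (A D)). This is a deliberate simplification, not Savenije's predictive model, and all parameter values below are hypothetical.

```python
import math

def intrusion_length(q, area, dispersion, s_sea, s_limit):
    """Distance at which steady-state salinity drops to s_limit under a
    1-D advection-dispersion balance.

    q          : river discharge (m^3/s)
    area       : cross-sectional area (m^2)
    dispersion : longitudinal dispersion coefficient (m^2/s)
    s_sea      : salinity at the mouth (kg/m^3)
    s_limit    : salinity threshold defining the intrusion front (kg/m^3)
    """
    return area * dispersion / q * math.log(s_sea / s_limit)

# illustrative: the intrusion length scales inversely with discharge
L_wet = intrusion_length(q=50.0, area=800.0, dispersion=150.0,
                         s_sea=30.0, s_limit=1.0)
L_dry = intrusion_length(q=10.0, area=800.0, dispersion=150.0,
                         s_sea=30.0, s_limit=1.0)
```

Even this toy balance reproduces the qualitative finding that low dry-season discharges let the salt front travel much further upstream.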

Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers

Procedia PDF Downloads 102
5797 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology, and many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent topic. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is required by psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programmes for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review in [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-and-pencil testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the obtained data [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not currently offer ready-made, straightforward solutions and requires additional research. In our presentation we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design used to create and validate an original Russian comprehensive computer-based test measuring the intellectual development of school-age children will be presented.
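
The keywords mention computerized adaptive testing; its core step, not specific to this study, can be sketched as selecting the unused item with maximum Fisher information at the current ability estimate under a two-parameter logistic (2PL) model. The item parameters below are invented.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    at ability theta: I = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta, bank):
    """Pick the most informative item from the bank at the current
    ability estimate; the adaptive-testing selection step."""
    return max(bank, key=lambda item: item_information(theta, item["a"], item["b"]))

# hypothetical three-item bank: easy, medium, hard
bank = [{"a": 1.0, "b": -2.0}, {"a": 1.2, "b": 0.0}, {"a": 0.8, "b": 2.5}]
chosen = next_item(0.0, bank)   # an average-ability child gets the medium item
```

After each response, theta is re-estimated and the selection repeats, which is what lets an adaptive test match difficulty to the child and shorten testing time.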

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 70
5796 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and the actual duration of construction projects remains a persistent problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects: 39 bridge projects constructed in Greece, with similar types of available data, were examined. Correlation analysis between each project's attributes and its actual cost and actual duration was performed, and the most appropriate predictive project variables were defined. Additionally, the most efficient subgroup of variables was selected with the WEKA application, through its attribute selection function. The selected variables were then used as input neurons for neural network models, which were constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
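
The modelling setup can be sketched with a minimal one-hidden-layer regression network trained by gradient descent, echoing the FANN-style models above: two input neurons for the two selected attributes and one output for the actual cost. The data here are synthetic stand-ins, not the paper's 39 bridge projects, and the architecture and hyperparameters are illustrative choices.

```python
import numpy as np

def train_tiny_mlp(X, y, hidden=4, lr=0.1, epochs=5000, seed=0):
    """Full-batch gradient descent on a tanh hidden layer with a linear
    output; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = 0.5 * rng.standard_normal((X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = 0.5 * rng.standard_normal(hidden)
    b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        err = h @ W2 + b2 - y                      # prediction error
        gh = np.outer(err, W2) * (1.0 - h ** 2)    # backprop through tanh
        W2 -= lr * (h.T @ err) / n
        b2 -= lr * err.mean()
        W1 -= lr * (X.T @ gh) / n
        b1 -= lr * gh.mean(axis=0)
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# synthetic stand-in: normalised [budgeted cost, deck concrete quantity]
# pairs and a noisy actual-cost target
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 2))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(200)
model = train_tiny_mlp(X, y)
```

In practice one would hold out projects for validation and normalise the attributes, as the attribute-selection step in the paper implies.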

Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN Tool, WEKA

Procedia PDF Downloads 124
5795 An Approach for Vocal Register Recognition Based on Spectral Analysis of Singing

Authors: Aleksandra Zysk, Pawel Badura

Abstract:

Recognizing and controlling vocal registers while singing is a difficult task for beginner vocalists. Among other skills, it requires identifying which part of the natural resonators is being used as a sound propagates through the body. An application has therefore been designed that provides sound recording, automatic vocal register recognition (VRR) and a graphical user interface with real-time visualization of the signal and the recognition results. Six spectral features are determined for each time frame and passed to a support vector machine classifier, which yields a binary decision assigning the segment to the head or chest register. The classification training and testing data were recorded by ten professional female singers (sopranos, aged 19-29) performing sounds in both the chest and head registers. The classification accuracy exceeded 93% in each of several validation schemes. Apart from a hard two-class decision, the support vector classifier also returns the distance between a particular feature vector and the discrimination hyperplane in feature space. This information reflects the level of certainty of the vocal register classification in a fuzzy way. The designed recognition and training application is thus able to assess and visualize the continuous trend in singing in a user-friendly graphical mode, providing an easy way to control vocal emission.
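
The fuzzy interpretation of the hyperplane distance can be sketched as follows: the signed distance of a six-dimensional spectral feature vector from a linear decision boundary is squashed to a [0, 1] "certainty" of the head register. The weights and the feature values below are stand-ins, not the trained SVM from the study.

```python
import numpy as np

def register_certainty(features, w, b):
    """Signed distance from the hyperplane w.x + b = 0, mapped through a
    logistic function; 0.5 means the frame lies on the boundary."""
    distance = (features @ w + b) / np.linalg.norm(w)
    certainty = 1.0 / (1.0 + np.exp(-distance))
    return distance, certainty

w = np.array([0.8, -0.5, 0.3, 0.1, -0.2, 0.4])    # hypothetical hyperplane
b = -0.1
frame = np.array([0.9, 0.2, 0.5, 0.0, 0.1, 0.7])  # one frame's six features
dist, cert = register_certainty(frame, w, b)
```

Plotting `cert` over consecutive frames gives exactly the kind of continuous head/chest trend the application visualizes.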

Keywords: classification, singing, spectral analysis, vocal emission, vocal register

Procedia PDF Downloads 294
5794 The Impact of Mother Tongue Interference on Students' Performance in English Language in Bauchi State

Authors: Mairo Musa Galadima

Abstract:

This paper examines the impact of mother tongue interference on students' performance in the English language in Bauchi State. The students of the Bauchi district share the same problem as Hausa native speakers of the Kano dialect, which is the standard form: some phonemes that are present in English are absent in Hausa, so Hausa speakers of the Bauchi district replace these sounds with similar ones present in Hausa. Students in the Bauchi district fail English language because they transfer features of their mother tongue (MT) into English. The data were obtained through unobtrusive observation of the English speech of about fifty Hausa native speakers of the Bauchi district (whose dialect is similar to the Kano dialect) from Abubakar Tatari Ali Polytechnic, Bauchi; only those with a good background of secondary education were used, because uneducated Nigerian English of whatever geographical location is likely to be as unintelligible as Cockney or uneducated African-American English. For instance, /ɜː/ is absent in Hausa, so the speakers find it difficult to distinguish between such pairs of words as /bɜːd/ and /bʌst/ or /fɑːst/ and /fɜːst/; hence /ɑː/ is generally used wherever /ɜː/ occurs, regardless of the spelling, which is why words like 'work', 'first' and 'person' all have /ɑː/. In Hausa, most speakers use /p/ in place of, or in alternation with, /f/, e.g. 'few' is pronounced as 'pew' or 'pen' as 'fen'; similarly /b/ is used for /v/, /s/ for /z/ and /z/ for /ʒ/, and the word vision /vɪʒn/ is pronounced as /vidzn/. There is therefore confusion in the spelling and pronunciation of words. One solution to the problem is constant practice with qualified, consistent staff and the use of standard textbooks in the learning process.

Keywords: English, failure, mother tongue, interference, students

Procedia PDF Downloads 204
5793 An Assessment of Bathymetric Changes in the Lower Usuma Reservoir, Abuja, Nigeria

Authors: Rayleigh Dada Abu, Halilu Ahmad Shaba

Abstract:

Siltation is a serious problem that affects public water supply infrastructure such as dams and reservoirs. It threatens their performance and sustainability: it reduces a dam's capacity for flood control and potable water supply, changes the water stage, and reduces water quality and recreational benefits. The focus of this study is the Lower Usuma reservoir. At completion, the reservoir had a gross storage capacity of 100 × 10⁶ m³ (100 million cubic metres), a maximum operational level of 587.440 m a.s.l., a maximum depth of 49 m, a catchment area of 241 km² at the dam site and a designed production capacity of 10,000 cubic metres per hour. The reservoir is 1,300 m long and feeds the treatment plant mainly by gravity. It became operational in 1986, and no survey had since been conducted to determine its current storage capacity and rate of siltation. A hydrographic survey of the reservoir by an integrated acoustic echo-sounding technique was therefore conducted in November 2012 to determine the level and rate of siltation. The results show that the reservoir has lost 12.0 metres of depth to siltation in 26 years of operation, indicating a 24.5% loss of installed storage capacity. The present bathymetric survey provides baseline information for future work on siltation depth and annual rates of storage capacity loss for the Lower Usuma reservoir.

Keywords: sedimentation, lower Usuma reservoir, acoustic echo sounder, bathymetric survey

Procedia PDF Downloads 505
5792 Feedback Matrix Approach for Relativistic Runaway Electron Avalanches Dynamics in Complex Electric Field Structures

Authors: Egor Stadnichuk

Abstract:

Relativistic runaway electron avalanches (RREA) are a widely accepted source of thunderstorm gamma radiation. In regions of very high electric field strength, RREA can multiply via relativistic feedback, which is caused both by positron production and by the reversal of bremsstrahlung gamma rays emitted by runaway electrons. In complex multilayer thunderstorm electric field structures, an additional reactor feedback mechanism appears, due to gamma-ray exchange between separate strong-field regions with different electric field directions. Studying this reactor mechanism in conjunction with relativistic feedback via Monte Carlo simulations, or by direct solution of the kinetic Boltzmann equation, requires a significant amount of computational time. In this work, a theoretical approach to studying feedback mechanisms in RREA physics is developed, based on the construction of a matrix of feedback operators. With the feedback matrix, the problem of avalanche dynamics in complex electric field structures reduces to finding eigenvectors and eigenvalues. A method for calculating the matrix elements is proposed. The proposed concept was used to study the dynamics of RREAs in multilayer thunderclouds.
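
The eigenvalue reduction can be illustrated with a toy two-region feedback matrix: entry F[i, j] is the expected number of new runaway electrons that an avalanche in region j seeds in region i via gamma-ray exchange. The numerical values below are made up; the point is only the criticality criterion on the spectral radius.

```python
import numpy as np

# Toy feedback matrix for two strong-field regions with opposite field
# directions; off-diagonal entries represent the reactor-style gamma-ray
# exchange between the regions (all values hypothetical).
F = np.array([[0.2, 0.9],
              [0.8, 0.3]])

eigvals, eigvecs = np.linalg.eig(F)
k = np.max(np.abs(eigvals))     # multiplication factor per feedback cycle
self_sustaining = k >= 1.0      # criticality criterion for the RREA "reactor"
```

As in reactor physics, a spectral radius at or above one means the avalanche system regenerates itself from cycle to cycle, while the corresponding eigenvector gives the asymptotic distribution of avalanches over the regions.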

Keywords: terrestrial Gamma-ray flashes, thunderstorm ground enhancement, relativistic runaway electron avalanches, gamma-rays, high-energy atmospheric physics, TGF, TGE, thunderstorm, relativistic feedback, reactor feedback, reactor model

Procedia PDF Downloads 158
5791 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain

Authors: Bastian Vollrath, Hartwig Hubel

Abstract:

In the case of over-elastic cyclic loading, strain may accumulate through a ratcheting mechanism until a state of shakedown is possibly achieved. Load-history-dependent numerical investigation by step-by-step analysis is rather costly in terms of engineering time and numerical effort, and in the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods such as the Simplified Theory of Plastic Zones (STPZ) have been developed to solve the problem with a few linear elastic analyses: post-shakedown quantities such as strain ranges and cyclically accumulated strains are calculated approximately, disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses in which the elastic material parameters are modified and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ has already been shown to work well for cyclic loading between two states of loading; usually, a few linear elastic analyses are sufficient to obtain a good approximation of the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable turns from a plane problem into a hyperspace problem, for which time-consuming approximation methods would be needed. Therefore, a solution restricted to structures with four stress components was developed that estimates the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading, so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown.
The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE implementation of the method in ANSYS and considering multilinear hardening, is given which highlights the advantages of the method compared to incremental, step-by-step analysis.

Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ

Procedia PDF Downloads 154