Search results for: weakness planes wellbore
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 435

195 Temperature and Substrate Orientation Effects on the Thermal Stability of a Graphene Sheet Attached to the Si Surface

Authors: Wen-Jay Lee, Kuo-Ning Chiang

Abstract:

Graphene bound to a silicon substrate exhibits a Schottky barrier, which can be exploited in solar cells and light sources. Because graphene is only one atom thick, the atomistic structure of the graphene-silicon interface plays an important role in determining the properties of the graphene. In this work, the effect of temperature on the morphology of graphene sheets attached to different crystal planes of silicon substrates is investigated by molecular dynamics (MD) using LAMMPS (developed by Sandia National Laboratories). The results show that the covering graphene sheet deforms the surface Si atoms of the substrate. To reach a stable state during binding, the surface Si atoms adjust their positions to fit the honeycomb structure of graphene after it attaches to the Si surface. The height contour of graphene differs between silicon surface planes, leading to local residual stress at the interface. Owing to the high density of dangling bonds on the Si(111)7x7 surface, graphene matches Si(111)7x7 less well than Si(100)2x1 and Si(111)2x1. On Si(111)7x7, only some of the silicon adatoms rearrange after attachment when the temperature is below 200 K. As the temperature increases, the deformation of the surface structure becomes significant, as does the residual stress. When the temperature reaches 815 K, the graphene sheet begins to break down and mix with the silicon atoms. For Si(100)2x1 and Si(111)2x1, the silicon surface retains its structural arrangement up to higher temperatures. With increasing temperature, the residual stress gradually decreases until a critical temperature; above the critical temperature, the residual stress gradually increases and structural deformation appears on the surface of the Si substrates.

Keywords: molecular dynamics, graphene, silicon, Schottky barriers, interface

Procedia PDF Downloads 297
194 Biomechanical Evaluation of Chronic Stroke Patients with a 3D-Printed Hand Device

Authors: Chen-Sheng Chen, Tsung-Yi Huang, Pi-Chang Sun

Abstract:

Chronic stroke patients often complain of hand dysfunction due to flexor hypertonia and extensor weakness, which makes it difficult to open the affected hand for functional grasp. Hand rehabilitation after stroke is essential for restoring functional independence. Constraint-induced movement therapy has been shown to be a successful treatment for patients who have acquired a certain level of wrist and finger extension. The goal of this study was to investigate the feasibility of a task-oriented approach incorporating a 3D-printed dynamic hand device by evaluating hand functional performance. This study manufactured a hand device for chronic stroke patients using a 3D printer. The experimental group engaged in the task-oriented approach with the dynamic hand device, while the control group received the task-oriented approach only. Outcome measurements included palmar pinch force (PPF), lateral pinch force (LPF), grip force (GF), and the Box and Blocks Test (BBT). The results revealed an improvement in PPF in the experimental group but not in the control group, while improvements in LPF, GF, and BBT were found in both groups. This study demonstrates that the 3D-printed dynamic hand device is an effective therapeutic assistive device that improves pinch force, grasp force, and dexterity and facilitates motivation during home programs in individuals with chronic stroke.

Keywords: 3D printing, biomechanics, hand orthosis, stroke

Procedia PDF Downloads 239
193 Numerical Investigation of Hydrodynamic and Parietal Heat Transfer to a Bingham Fluid Agitated in a Vessel by a Helical Ribbon Impeller

Authors: Mounir Baccar, Amel Gammoudi, Abdelhak Ayadi

Abstract:

The efficient mixing of highly viscous fluids is required in many industries, such as food, polymer, and paint production. Homogenization is a challenging operation for this type of fluid, since mixing is performed at low Reynolds number to limit the power required by the impellers. In particular, close-clearance impellers, mainly helical ribbons, are chosen for highly viscous fluids agitated in the laminar regime and commonly heated through the vessel wall. They are characterized by high shear strains close to the vessel wall, which disturb the thermal boundary layer, and they ensure homogenization of the bulk volume by axial and radial vortices. The hydrodynamic and thermal behavior of Newtonian fluids in vessels agitated by helical ribbon impellers has been studied extensively; however, the agitation of yield-stress fluids by helical ribbon impellers has rarely been investigated numerically. This paper studies the effect of a double helical ribbon (DHR) stirrer on both the hydrodynamic and the thermal behavior of yield-stress fluids treated in a cylindrical vessel by means of numerical simulation. For this purpose, the continuity, momentum, and thermal equations were solved with a 3D finite volume technique. The effect of the Oldroyd (Od) and Reynolds (Re) numbers on the power (Po) and Nusselt (Nu) numbers is studied for this stirrer type. The velocity and thermal fields, the dissipation function, and the apparent viscosity are also presented in different (r-z) and (r-θ) planes.
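Numerically, the apparent viscosity of a Bingham fluid is usually handled by regularizing the constitutive law to avoid the singularity at zero shear rate. The sketch below uses the common Papanastasiou form purely as an illustration; the abstract does not state which treatment the authors use, and all parameter values are assumptions.

```python
import numpy as np

def apparent_viscosity(gamma_dot, mu_p, tau_y, m=1000.0):
    """Papanastasiou-regularized apparent viscosity of a Bingham fluid.

    mu_p  : plastic viscosity (Pa.s)
    tau_y : yield stress (Pa)
    m     : regularization time (s); larger m -> closer to ideal Bingham
    """
    # Clamp the shear rate so the ratio below stays finite; as
    # gamma_dot -> 0 the expression tends to mu_p + m * tau_y.
    g = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)
    return mu_p + tau_y * (1.0 - np.exp(-m * g)) / g
```

At high shear rates this recovers the Bingham limit mu_p + tau_y/gamma_dot, while unyielded (low-shear) zones simply appear as regions of very high apparent viscosity.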

Keywords: Bingham fluid, hydrodynamic and thermal behavior, helical ribbon, mixing, numerical modelling

Procedia PDF Downloads 272
192 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images

Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek

Abstract:

Automated 3D breast ultrasound (ABUS) is a novel screening modality in medical imaging; it shares common characteristics with other ultrasound modalities and additionally provides three orthogonal planes (axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and the ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in applications such as the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. Rib detection is, in fact, one of the main stages of chest wall segmentation. The approach consists of four steps. First, images are normalized to minimize the intensity variability for a given set of regions within the same image or across a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation was performed on a total of 22 cases. Over all cases, the mean and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on the rib borders are 15.1188 mm and 14.7184 mm, respectively.
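The four-step pipeline can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the anisotropic diffusion filter is replaced by a Gaussian filter for brevity, and the final breast-mask/probability-map filtering is reduced to a simple eigenvalue threshold; the function names, `sigma`, and `thresh` are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def normalize(vol):
    """Step 1: min-max intensity normalization to [0, 1]."""
    vol = vol.astype(float)
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo + 1e-12)

def hessian_eigenvalues(vol, sigma=2.0):
    """Steps 2-3: smoothing (Gaussian stand-in for anisotropic
    diffusion), then the eigenvalues of the 3D Hessian at every
    voxel, sorted by absolute magnitude."""
    sm = ndi.gaussian_filter(vol, sigma)
    grads = np.gradient(sm)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(H)               # ascending eigenvalues
    order = np.argsort(np.abs(eig), axis=-1)  # re-sort by |lambda|
    return np.take_along_axis(eig, order, axis=-1)

def rib_candidates(vol, sigma=2.0, thresh=0.005):
    """Step 4 (simplified): voxels whose two largest-magnitude
    eigenvalues are strongly negative look like bright tubular
    ridges, the expected signature of a rib."""
    lam = hessian_eigenvalues(normalize(vol), sigma)
    return (lam[..., 1] < -thresh) & (lam[..., 2] < -thresh)
```

In the paper's full method the candidate mask would then be intersected with the breast mask and weighted by the rib probability map to suppress false positives.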

Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection

Procedia PDF Downloads 305
191 Presenting a Model of Empowering New Knowledge-Based Companies in the Iran Insurance Industry

Authors: Pedram Saadati, Zahra Nazari

Abstract:

In the last decade, the role and importance of knowledge-based technological businesses in the insurance industry have greatly increased; given the weakness of previous studies in Iran, the current research designs an InsurTech empowerment model. A hybrid framework was used to obtain the conceptual model of the research. The statistical population in the qualitative part consisted of experts, and in the quantitative part, of InsurTech practitioners. The data collection tools were, in the qualitative part, in-depth semi-structured interviews and a structured self-interaction matrix and, in the quantitative part, a researcher-made questionnaire. In the qualitative part, 55 indicators, 20 components, and 8 concepts (dimensions) were obtained by content analysis; the relationships among the concepts and the levels of the components were then investigated. In the quantitative part, the data were analyzed with descriptive-analytical methods using path analysis and confirmatory factor analysis. The proposed model consists of eight dimensions: supporter capability, supervision of the insurance innovation ecosystem, and managerial, financial, technological, marketing, opportunity-identification, and innovative InsurTech capabilities. The results of the statistical tests identifying the relationships among the concepts are examined in detail, and suggestions are presented in the conclusion.

Keywords: InsurTech, knowledge-based, empowerment model, factor analysis, insurance

Procedia PDF Downloads 20
190 Buckling of Plates on Foundation with Different Types of Side Support

Authors: Ali N. Suri, Ahmad A. Al-Makhlufi

Abstract:

In this paper, the buckling of plates of finite length on an elastic foundation and with different side supports is studied. The finite strip method is used as the analysis tool. This method uses finite strip elastic, foundation, and geometric matrices to build the assembly matrices for the whole structure; after the boundary conditions at the supports are introduced, the resulting reduced matrices are transformed into a standard eigenvalue-eigenvector problem. The solution of this problem yields the buckling load, the associated buckling modes, and the buckling wavelength. To carry out the buckling analysis starting from the elastic, foundation, and geometric stiffness matrices of each strip, a FORTRAN computer program was developed. Since the stiffness matrices are functions of the buckling wavelength, the program uses an iteration procedure to find the critical buckling stress for each value of the foundation modulus and for each boundary condition. The results show that using an elastic medium to support plates subject to axial load greatly increases the buckling load; the results agree closely with those obtained by other analytical methods and by experimental work. The results also show that the foundation compensates for the weakness of some types of side-support constraint, with the maximum benefit found for a plate with one side simply supported and the other free.
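The wavelength iteration described above can be illustrated with the closed-form analogue of a simply supported strip on a Winkler foundation, where the critical load is found by minimizing over the half-wave number m. This is a textbook sketch, not the paper's finite strip program; `EI`, `k`, and `L` are illustrative parameters.

```python
import math

def critical_buckling_load(EI, k, L, m_max=50):
    """Critical buckling load of a simply supported strip resting on a
    Winkler foundation of modulus k.  For half-wave number m:

        P(m) = EI * (m*pi/L)**2 + k * (L/(m*pi))**2

    The critical load is the minimum over m, mirroring the wavelength
    iteration used in the finite strip analysis.  Returns the pair
    (P_cr, critical half-wave number)."""
    return min(
        (EI * (m * math.pi / L) ** 2 + k * (L / (m * math.pi)) ** 2, m)
        for m in range(1, m_max + 1)
    )
```

With k = 0 this reduces to the classical Euler load with m = 1; a stiff foundation both raises the critical load and shifts the minimum to shorter wavelengths (higher m), the same trend the abstract reports for plates on an elastic medium.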

Keywords: buckling, finite strip, different side supports, plates on foundation

Procedia PDF Downloads 219
189 Substation Automation, Digitization, Cyber Risk and Chain Risk Management Reliability

Authors: Serzhan Ashirov, Dana Nour, Rafat Rob, Khaled Alotaibi

Abstract:

There has been fast growth in the introduction and use of communications, information, monitoring, and sensing technologies. The new technologies are making their way into industrial control systems as embedded products, software applications, and IT services, or are commissioned to enable the integration and automation of increasingly global supply chains. As a result, the lines that separated the physical, digital, and cyber worlds have blurred due to the wide deployment of these new, disruptive digital technologies. The variety and increased use of these technologies introduce many cybersecurity risks affecting the cyber-resilience of the supply chain, both in terms of the product or service delivered to a customer and in terms of the members of the supply chain operation. The US Department of Energy considers the supply chain in the IR4 space to be the weakest link in cybersecurity. IR4 brought the digitization of field devices, followed by digitalization that eventually moved through the digital transformation space with little care for the newly introduced cybersecurity risks. This paper examines the best methodologies for securing electrical substations from cybersecurity attacks arising from supply chain risks and from digitization efforts. SCADA systems are the most vulnerable part of the power system infrastructure, due to digitization and to the weaknesses and vulnerabilities in supply chain security. The paper discusses in detail how to create a secure supply chain methodology, secure substations, and mitigate the risks due to digitization.

Keywords: cybersecurity, supply chain methodology, secure substation, digitization

Procedia PDF Downloads 43
188 Severity Index Level in Effectively Managing Medium Voltage Underground Power Cable

Authors: Mohd Azraei Pangah Pa'at, Mohd Ruzlin Mohd Mokhtar, Norhidayu Rameli, Tashia Marie Anthony, Huzainie Shafi Abd Halim

Abstract:

Partial discharge (PD) diagnostic mapping is one of the main diagnostic testing techniques widely used in the field for on-site testing of medium-voltage underground power cables. The existence of PD activity is an early indication of insulation weakness; hence, early detection of PD activity provides an initial prediction of the condition of the cable. To manage the results of a PD mapping test effectively, it is important to have acceptance criteria that facilitate the prioritization of mitigation actions. Tenaga Nasional Berhad (TNB), through its Distribution Network (DN) division, has developed a PD severity model named the Severity Index (SI) for offline PD mapping tests, based on on-site test experience since 2007. However, the recommended actions of this severity index have never been revised since its establishment. At present, PD measurement data have increased extensively; hence, the severity level indication and the effectiveness of the recommended actions can be analyzed and verified again. Based on the new revision, the recommended actions will better reflect the actual defect condition and will therefore accurately prioritize the preventive action plan and minimize maintenance expenditure.

Keywords: partial discharge, severity index, diagnostic testing, medium voltage, power cable

Procedia PDF Downloads 146
187 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because one has to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool, developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: the time was reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 345
186 Decentralization and Participatory Approach in the Cultural Heritage Management in Local Thailand

Authors: Amorn Kritsanaphan

Abstract:

This paper illustrates the decentralization of cultural heritage management in local Thailand, a setting similar to other middle-income developing countries characterized by rapid tourism-driven industrialization, weak formal state institutions and procedures, and intensive use of cultural heritage resources. The author conducted field research in local Thailand, principally using qualitative primary data gathering, combined with records reviews and content analysis of documents. The author also attended local public meetings and social activities and interacted casually with local residents and governments. Cultural heritage management is supposed to improve through multi-stakeholder participation and decentralization. However, processes and outcomes are far from straightforward and depend on a variety of contingencies and contexts. The multi-stakeholder and participatory approach to decentralizing cultural heritage management in Thailand has pushed to the forefront, and sharpened, a number of existing problems. Under decentralization, however, the most significant contribution has been the creation of real political space in which various local stakeholders have become active and respond to and address their concerns in various ways vis-à-vis cultural heritage problems. Improving cultural heritage sustainability and the viability of local livelihoods through decentralization and a participatory approach is by no means certain. Nevertheless, the shift creates spaces rich with possibilities for meaningful and constructive engagement between and among local state and non-state actors that can lead to synergies and positive outcomes.

Keywords: decentralization, participatory approach, cultural heritage management, multi-stakeholder approach

Procedia PDF Downloads 123
185 Comparative Electrochemical Studies of Enzyme-Based and Enzyme-less Graphene Oxide-Based Nanocomposite as Glucose Biosensor

Authors: Chetna Tyagi, G. B. V. S. Lakshmi, Ambuj Tripathi, D. K. Avasthi

Abstract:

Graphene oxide provides a good host matrix for preparing nanocomposites due to the different functional groups attached to its edges and planes. Being biocompatible, it is used in therapeutic applications. Because enzyme-based biosensors require complicated enzyme purification procedures, high fabrication costs, and special storage conditions, enzyme-less biosensors are needed for use even in harsh environments (high temperature, varying pH, etc.). In this work, we prepared both enzyme-based and enzyme-less graphene oxide-based biosensors for glucose detection, using glucose oxidase as the enzyme and gold nanoparticles, respectively. The samples were characterized using X-ray diffraction, UV-visible spectroscopy, scanning electron microscopy, and transmission electron microscopy to confirm the successful synthesis of the working electrodes. Electrochemical measurements were performed for both working electrodes using a three-electrode electrochemical cell. Cyclic voltammetry curves showed homogeneous electron transfer on the electrodes in the scan range between -0.2 V and 0.6 V. The sensing measurements were performed using differential pulse voltammetry for glucose concentrations varying from 0.01 mM to 20 mM, and sensing toward glucose improved in the presence of gold nanoparticles. The gold nanoparticles in the graphene oxide nanocomposite played an important role in sensing glucose in the absence of the enzyme glucose oxidase, as evident from these measurements. Selectivity was tested by measuring the current response of the working electrode toward glucose in the presence of common interfering agents such as cholesterol, ascorbic acid, citric acid, and urea. The enzyme-less working electrode also showed storage stability for up to 15 weeks, making it a suitable glucose biosensor.

Keywords: electrochemical, enzyme-less, glucose, gold nanoparticles, graphene oxide, nanocomposite

Procedia PDF Downloads 114
184 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead

Authors: Behrooz Daneshmand

Abstract:

Software-defined networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that separates the control plane from the data plane, which are usually coupled. Network functions traditionally performed on specific hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach, built around a central controller, facilitates the development of modern applications and services. These design principles pave the way for a more flexible, faster, and more dynamic network under software control compared with a conventional network. We believe SDN offers new research opportunities for security and can significantly affect network security research in many different ways. The SDN architecture enables systems to effectively monitor traffic and analyze threats, facilitating security policy modification and security service insertion. The separation of the data and control planes, however, opens security challenges, such as man-in-the-middle (MITM) attacks, denial-of-service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have several vulnerabilities, which may be exploited by attackers to perform malicious activities and thereby affect the network and its services. Software-defined network attacks are, unfortunately, a reality these days. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, leading to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies.

Keywords: software-defined networking, security, SDN, 5G/IMT-2020

Procedia PDF Downloads 66
183 A Case Study for User Rating Prediction on an Automobile Recommendation System Using MapReduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems are widely used in industry, and plenty of work has been done in this field to help users identify items of interest. The collaborative filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering. Therefore, using the MapReduce framework to optimize the CF algorithm is a natural solution to this performance problem. In this paper, we present a recommender for users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and we provide recommendations for automobile providers to help them predict users' comments on automobiles with new properties. First, we address the sparseness of the matrix through a prior construction of the score matrix. Second, we address data normalization by removing dimensional effects from the raw automobile data, since the different dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, substantially improving the computational speed. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF that does not require calculating the interpolation weights of neighbors, which is more convenient in industrial settings.
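The normalization and UV-decomposition steps might be sketched, in serial form without the MapReduce layer, as follows. All names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def zscore_columns(X):
    """Remove dimensional effects: scale each property column to zero
    mean and unit variance so that no dimension dominates the CF
    calculation."""
    X = np.asarray(X, float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def uv_decompose(R, rank=2, lr=0.01, epochs=2000, seed=0):
    """Plain UV decomposition by gradient-style updates on the observed
    entries of the user-item rating matrix R (NaN marks a missing
    rating).  Returns factors U (users x rank) and V (rank x items)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((rank, m))
    mask = ~np.isnan(R)
    R0 = np.where(mask, R, 0.0)
    for _ in range(epochs):
        E = mask * (R0 - U @ V)   # error on observed entries only
        U += lr * E @ V.T         # move factors down the error gradient
        V += lr * U.T @ E
    return U, V
```

Missing ratings are then predicted as the corresponding entries of U @ V; in the paper's setting the per-entry error and update terms are what would be distributed across map and reduce tasks.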

Keywords: collaborative filtering, recommendation, data normalization, MapReduce

Procedia PDF Downloads 188
182 LAMOS - Layered Amorphous Metal Oxide Gas Sensors: New Interfaces for Gas Sensing Applications

Authors: Valentina Paolucci, Jessica De Santis, Vittorio Ricci, Giacomo Giorgi, Carlo Cantalini

Abstract:

Despite their potential in gas sensing applications, the major drawback of 2D exfoliated metal dichalcogenides (MDs) is that they undergo spontaneous oxidation in air, showing poor chemical stability under dry/wet conditions even at room temperature, which limits their practical exploitation. The aim of this work is to validate a synthesis strategy that allows microstructural and electrical stabilization of the oxides that inevitably form on the surface of 2D dichalcogenides. Taking advantage of the spontaneous oxidation of MDs in air, we report on liquid-phase exfoliated 2D SnSe2 flakes annealed in static air at a temperature below the crystallization temperature of the native a-SnO2 oxide. This process yields a new class of 2D layered amorphous metal oxide sensors (LAMOS), specifically few-layered amorphous a-SnO2, showing excellent gas sensing properties. Sensing tests were carried out at a low operating temperature (100 °C) by exposing a-SnO2 to both oxidizing and reducing gases (NO2, H2S, and H2) and to relative humidities ranging from 40% to 80% RH. The formation of stable nanosheets of amorphous a-SnO2 guarantees excellent reproducibility and stability of the response over one year. These results open interesting new research perspectives, considering the opportunity to synthesize homogeneous amorphous textures with no grain boundaries, no grains, and no crystalline planes with different orientations, following gas sensing mechanisms that likely differ from those of traditional crystalline metal oxide sensors. Moreover, the controlled annealing process could likely be extended to a large variety of transition metal dichalcogenides (TMDs) and metal chalcogenides (MCs), in which sulfur, selenium, or tellurium atoms can be easily displaced by oxygen atoms (ΔG < 0), enabling the synthesis of a new family of amorphous interfaces.

Keywords: layered 2D materials, exfoliation, LAMOS, amorphous metal oxide sensors

Procedia PDF Downloads 95
181 Applying Critical Realism to Qualitative Social Work Research: A Critical Realist Approach for Social Work Thematic Analysis Method

Authors: Lynne Soon-Chean Park

Abstract:

Critical realism (CR) has emerged as an alternative to both the positivist and constructivist perspectives that have long dominated social work research. By unpacking the epistemic weaknesses of these two dogmatic perspectives, CR provides a useful philosophical approach that incorporates both an ontological objectivist and a subjectivist stance. The CR perspective suggests an alternative approach for social work researchers who have long sought to engage with the complex interplay between perceived reality at the empirical level and the objective reality that lies behind empirical events as a causal mechanism. However, despite the usefulness of CR in informing social work research, little practical guidance is available on how CR can inform methodological considerations in social work research studies. This presentation provides a detailed description of CR-informed thematic analysis, drawing examples from social work doctoral research on Korean migrants' experiences and understanding of trust associated with their settlement in New Zealand. Because of its theoretical flexibility and accessibility as a qualitative analysis method, thematic analysis can be applied both to search for demi-regularities in the collected data and to identify the causal mechanisms that lie behind the empirical data. In so doing, this presentation seeks to provide a concrete and detailed exemplar for social work researchers wishing to employ CR in their qualitative thematic analysis process.

Keywords: critical realism, data analysis, epistemology, research methodology, social work research, thematic analysis

Procedia PDF Downloads 190
180 Design & Development of a Static-Thrust Test Bench for Aviation/UAV-Based Piston Engines

Authors: Syed Muhammad Basit Ali, Usama Saleem, Irtiza Ali

Abstract:

Internal combustion engines were pioneers in the aviation industry: piston engines have powered aircraft propulsion from propeller-driven biplanes to turboprop, commercial, and cargo airliners. To provide an adequate amount of thrust, a piston engine rotates the propeller at a specific rpm, allowing enough mass airflow. Thrust is the only forward-acting force of an aircraft and is what allows heavier-than-air bodies to fly; its calculation depends on the mathematical model used, the variables included in it, and correct measurement. Test benches have been a benchmark in the aerospace industry for analyzing results before flight, and they hold paramount significance in reliability and safety engineering. The thrust produced by a piston engine also depends on environmental conditions, the diameter of the propeller, and the density of the air. The project is centered on piston engines used in the aviation industry for light aircraft and UAVs. A static thrust test bench involves various units, each performing a designed purpose to monitor and display. Static thrust tests are performed on the ground, where safety concerns hold paramount importance. The execution of this study involved research, design, manufacturing, and results based on reverse engineering, starting from virtual design, analytical analysis, and simulations. The final evaluation correlated results gathered from various methods, such as a conventional mass-spring scale and a digital load cell. On average, we measured 17.5 kg of thrust (25+ engine run-ups, around 40 hours of engine running), with only 10% deviation from the analytically calculated thrust, providing 90% accuracy.

Keywords: aviation, aeronautics, static thrust, test bench, aircraft maintenance

Procedia PDF Downloads 352
179 Electroencephalography (EEG) Analysis of Alcoholic and Control Subjects Using Multiscale Permutation Entropy

Authors: Lal Hussain, Wajid Aziz, Sajjad Ahmed Nadeem, Saeed Arif Shah, Abdul Majid

Abstract:

Brain electrical activity, as reflected in the electroencephalogram (EEG), has been analyzed and diagnosed using various techniques. Among them, measures of complexity, nonlinearity, disorder, and unpredictability play a vital role, owing to the nonlinear interconnection between the functional and anatomical subsystems that emerge in the brain in the healthy state and during disease. Alcohol abuse has many social and economic consequences, such as memory weakness and impairments in decision-making and concentration. Alcoholism not only damages the brain but is also associated with emotional, behavioral, and cognitive impairments, damaging the white and gray brain matter. A recently developed signal analysis method, multiscale permutation entropy (MPE), is applied to estimate the complexity of long-range temporally correlated EEG time series of alcoholic and control subjects acquired from the University of California machine learning repository, and the results are compared with multiscale sample entropy (MSE). Using MPE, a coarse-grained series is first generated, and the PE is computed for each coarse-grained time series for the electrodes O1, O2, C3, C4, F2, F3, F4, F7, F8, Fp1, Fp2, P3, P4, T7, and T8. For each electrode, MPE gives higher significance values than MSE, as well as correspondingly larger mean rank differences. Likewise, the ROC and the area under the ROC curve also give higher separation for each electrode using MPE in comparison to MSE.
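The MPE procedure described above, coarse-graining followed by Bandt-Pompe permutation entropy at each scale, can be sketched as follows. This is a minimal illustrative implementation; the embedding order and scale range are assumptions, not the study's settings.

```python
import numpy as np
from math import factorial

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    x = np.asarray(x, float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, order=3, normalize=True):
    """Bandt-Pompe permutation entropy with embedding dimension `order`:
    count the ordinal pattern of every length-`order` window, then take
    the Shannon entropy of the pattern distribution."""
    x = np.asarray(x, float)
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    h = -np.sum(p * np.log(p))
    # Normalize by log(order!) so values lie in [0, 1].
    return h / np.log(factorial(order)) if normalize else h

def multiscale_permutation_entropy(x, order=3, max_scale=5):
    """MPE: permutation entropy of each coarse-grained series."""
    return [permutation_entropy(coarse_grain(x, s), order)
            for s in range(1, max_scale + 1)]
```

A monotonic signal yields a single ordinal pattern and hence zero entropy, while white noise approaches the maximum of 1; in the study this profile is computed per electrode and compared between the alcoholic and control groups.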

Keywords: electroencephalogram (EEG), multiscale permutation entropy (MPE), multiscale sample entropy (MSE), permutation entropy (PE), mann whitney test (MMT), receiver operator curve (ROC), complexity measure

Procedia PDF Downloads 463
178 Neotectonic Characteristics of the Western Part of Konya, Central Anatolia, Turkey

Authors: Rahmi Aksoy

Abstract:

The western part of Konya consists of an area of block-faulted basins and ranges. The present-day topography is characterized by alternating elongate mountains and depressions trending east-west. A number of depressions occur in the region. One of the large depressions is the E-W trending Kızılören-Küçükmuhsine (KK) basin, bounded on both sides by normal faults and located west of the city of Konya. The basin is about 5-12 km wide and 40 km long. The ranges north and south of the basin are composed of undifferentiated low-grade metamorphic rocks of Silurian-Cretaceous age and smaller bodies of ophiolites of probable Cretaceous age. The basin fill consists of upper Miocene-lower Pliocene fluvial, lacustrine, and alluvial sediments and volcanic rocks. The younger and undeformed Plio-Quaternary basin fill unconformably overlies the older basin fill and is composed predominantly of conglomerate, mudstone, silt, clay, and recent basin-floor deposits. The paleostress data from the striated fault planes in the basin indicate NW-SE extension associated with NE-SW compression. The eastern end of the KK basin is cut and terraced by the active Konya fault zone. The Konya fault zone is a NE-trending, east-dipping normal fault forming the western boundary of the Konya depression. The Konya depression consists mainly of a Plio-Quaternary alluvial complex and recent basin-floor sediments. The structural data gathered from the Konya fault zone support normal faulting with a small amount of dextral strike-slip in a tensional tectonic regime shaped under the WNW-ESE extensional stress regime.

Keywords: central Anatolia, fault kinematics, Kızılören-Küçükmuhsine basin, Konya fault zone, neotectonics

Procedia PDF Downloads 336
177 A Technique for Planning the Application of Buttress Plate in the Medial Tibial Plateau Using the Preoperative CT Scan

Authors: P. Panwalkar, K. Veravalli, R. Gwynn, M. Tofighi, R. Clement, A. Mofidi

Abstract:

When operating on tibial plateau fractures, especially of the medial tibial plateau, it has regularly been asked, "where do I put my thumb to reduce the fracture?" This refers to the ideal placement of the buttress device to hold the fracture until union. The aim of this study was to see if one can identify this sweet spot using a CT scan. Methods: Forty-five tibial plateau fractures with medial plateau involvement were identified and included in the study. The preoperative CT scans were analysed, and the pattern of medial plateau involvement was classified based on the modified radiological classification by Yukata et al. of stress fractures of the medial tibial plateau. The involved part of the plateau was compared with the position of the buttress plate, which was classified as medial, posteromedial, or both. The presence and position of the buttress were compared with the ability to achieve and hold the reduction of the fracture until union. Results: Thirteen fractures were type 1, 19 fractures were type 2, and 13 fractures were type 3. Sixteen fractures were buttressed correctly according to the potential deformity, twenty-six fractures were not buttressed, and three fractures were partly buttressed correctly. No fracture was over-buttressed. When the fracture was buttressed correctly, the rate of malunion was 0%. When the fracture was partly buttressed, 33% united anatomically and 66% united in the plane of the buttress. When no buttress was used, 14 fractures malunited, one malunited in one of the two planes of deformity, and eleven healed anatomically (of which 9 were non-displaced). Buttressing resulted in a statistically significantly lower malunion rate (χ² = 7.8, p = 0.0052). Conclusion: The classification based on involvement of the medial condyle can identify the placement of the buttress plate in the tibial plateau. Correct placement of the buttress plate results in predictably satisfactory union.
There may be a correlation between the injury shape of the tibial plateau and the fracture type.
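As an aside, the reported statistic can be sanity-checked without statistical software: for a chi-square statistic with one degree of freedom, the p-value equals erfc(sqrt(x/2)). The snippet below is an illustrative check of the reported numbers, not the authors' analysis, and the 2x2 helper is a generic Pearson statistic without continuity correction.

```python
import math

def chi2_1dof_p(x):
    """Survival function of the chi-square distribution with 1 degree
    of freedom: p = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# The abstract reports chi2 = 7.8; with one degree of freedom this
# corresponds to p of roughly 0.0052, matching the reported p-value.
p_reported = chi2_1dof_p(7.8)
```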

Keywords: knee, tibial plateau, trauma, CT scan, surgery

Procedia PDF Downloads 124
176 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate the numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with convergence of order 2 in time. To achieve these objectives, we use the second-order central finite difference and B-spline approximations of degree 2 and 3 to approximate the diffusion term and the spatial discretization, respectively. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is also demonstrated by numerical simulations for Burgers' equations. Throughout these simulations, it is shown that the numerical results are in good agreement with the analytic solution, and the present scheme offers better accuracy in comparison with other existing numerical schemes.
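The backward semi-Lagrangian idea can be sketched in its simplest setting: constant-coefficient 1D advection on a periodic grid with linear interpolation at the departure points. This is a deliberately simplified illustration; the paper's scheme uses BDF2 in time, quadratic/cubic B-spline interpolation, and an error-correction start for the characteristic curves.

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One backward semi-Lagrangian step for u_t + a u_x = 0 on a
    periodic grid: trace each node back along the characteristic to
    x - a*dt and interpolate u there (linear interpolation here)."""
    n = len(u)
    x = np.arange(n) * dx
    xd = (x - a * dt) % (n * dx)        # departure points, periodic wrap
    j = np.floor(xd / dx).astype(int)   # left neighbouring grid index
    w = xd / dx - j                     # linear interpolation weight
    return (1 - w) * u[j % n] + w * u[(j + 1) % n]
```

The key property, no CFL-type stability restriction on `dt`, follows because values are interpolated at the foot of the characteristic instead of differenced across cells.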

Keywords: Semi-Lagrangian method, iteration free method, nonlinear advection-diffusion equation, second-order backward difference formula

Procedia PDF Downloads 302
175 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study

Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria

Abstract:

The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites in genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of the human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate the pi-pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology of the interaction of a G-quadruplex with a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the daunomycin (DM) drug. The complex TG4T-DM is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit enhanced sampling of the conformational space at a modest computational cost and yield free energy surfaces in terms of the collective variables (CV). The free energy surfaces of TG4T-DM exhibit additional local minima, indicating the presence of binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, where the 5' thymines on TG4T-DM have been deleted). The results should help in designing new G-quadruplex drugs and in understanding the differences between the recognition topology sites of duplex and quadruplex DNA structures in their interactions with ligands.

Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics

Procedia PDF Downloads 432
174 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for the INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical data source as well as to improve the stability of the positioning. The main process to compensate for the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, centralized and decentralized filters, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated based on simulations supposing that the aircraft flies with a precise geophysical database and sensors above nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results from the combination of gravity gradient with terrain data. It was also found that the centralized filter generally showed more stable results. This is because the weighting for the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
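The EKF correction at the heart of such a scheme can be sketched generically as below. The state, Jacobian, and noise values in the toy example are illustrative placeholders, not the paper's navigation model, where the predicted measurement would be a gravity-gradient or terrain value looked up in the database at the INS-predicted position.

```python
import numpy as np

def ekf_update(x, P, z, h_x, H, R):
    """One EKF measurement update: prior state x with covariance P,
    measurement z, predicted measurement h_x = h(x), measurement
    Jacobian H, and measurement-noise covariance R."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - h_x)             # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Toy example: a 2-state INS error [position error, velocity error],
# observing the position error directly with unit measurement noise.
x0 = np.zeros(2)
P0 = np.eye(2)
x1, P1 = ekf_update(x0, P0, z=np.array([1.0]), h_x=np.array([0.0]),
                    H=np.array([[1.0, 0.0]]), R=np.array([[1.0]]))
```

In a centralized filter, both gravity-gradient and terrain measurements would be stacked into one `z`, `H`, and `R`; a decentralized scheme would run separate updates and fuse the results with weights, which is where the weighting difficulty noted above arises.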

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 319
173 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir

Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai

Abstract:

Polymer flooding is a well-known technique used for controlling the mobility ratio in heterogeneous reservoirs, leading to improvement of sweep efficiency as well as of the wellbore profile. However, the low injectivity of a viscous polymer solution attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study is performed using a reservoir simulation program to modify a conventional single polymer slug into sequential polymer flooding, with emphasis on increasing injectivity and also on reducing the polymer amount. Selection of operating conditions for the single polymer slug, including pre-injected water, polymer concentration, and polymer slug size, is first performed for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with reduction of polymer concentration in two different modes: constant polymer mass and reduction of polymer mass. The effect of the Residual Resistance Factor (RRF) is also evaluated. From the simulation results, it is observed that the first polymer slug, which has the highest concentration, mainly functions as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is also sacrificed to adsorption. Reduction of polymer concentration in the following slugs prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, having lower viscosity, can be injected more easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with reduction of polymer mass shows great benefit by reducing the total production time and the amount of polymer consumed by up to 10% without any downside effect.
The only advantage of using constant polymer mass is a slight increment of the recovery factor (up to 1.4%), while the total production time is almost the same. Increasing the residual resistance factor of the polymer solution yields a benefit in mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending the total production time. Modifying a single polymer slug into a sequence of reduced polymer concentrations yields major benefits in reducing production time as well as polymer mass. With a certain design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with high heterogeneity, since it requires nothing complex for real implementation, just a proper design of polymer slug sizes and concentrations.
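The mobility-control argument can be made concrete with the standard mobility-ratio definition, including the residual-resistance effect on water mobility behind the polymer. The viscosities, end-point relative permeabilities, and RRF below are assumed illustrative values, not figures from the simulation study.

```python
def mobility_ratio(krw, mu_w, kro, mu_o, rrf=1.0):
    """Mobility ratio M = (krw / (rrf * mu_w)) / (kro / mu_o).
    M <= 1 is favourable for sweep; polymer raises mu_w (and leaves a
    residual resistance factor rrf > 1 behind), lowering M."""
    return (krw / (rrf * mu_w)) / (kro / mu_o)

# Assumed example: 10 cp oil, end-point relative permeabilities 0.3/0.8.
m_water = mobility_ratio(0.3, 1.0, 0.8, 10.0)          # plain waterflood
m_polymer = mobility_ratio(0.3, 20.0, 0.8, 10.0)       # 20 cp polymer slug
m_after = mobility_ratio(0.3, 1.0, 0.8, 10.0, rrf=2.0) # water behind polymer
```

With these numbers the plain waterflood is unfavourable (M well above 1) while the polymer slug brings M below 1, which is the mechanism the sequential slugs exploit at progressively lower viscosities.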

Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor

Procedia PDF Downloads 449
172 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to the development of tight oil reserves. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance the oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs and higher oil recovery and economic return of future wells in unconventional oil reserves.
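Of the analysis methods listed, principal component analysis is the easiest to sketch directly: centre the well-by-feature matrix and take its SVD. The feature values below are random stand-ins, since the actual completion and production database is not reproduced here.

```python
import numpy as np

def pca(data, n_components=2):
    """Project `data` (wells x features) onto its top principal
    components via SVD of the centred matrix; also return the
    per-component variances (eigenvalues of the covariance matrix)."""
    centred = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    scores = centred @ vt[:n_components].T   # low-dimensional coordinates
    variances = s ** 2 / (len(data) - 1)     # sorted in decreasing order
    return scores, variances

# Toy stand-in: 100 wells with 6 completion/production features each.
rng = np.random.default_rng(0)
wells = rng.normal(size=(100, 6))
scores, variances = pca(wells, n_components=2)
```

The `scores` array is what would feed a 2D visualisation or a subsequent K-means clustering of the wells.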

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 262
171 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-for-use components or products made of FRP composites are seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece is detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven-reinforced graphite fiber/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. The latter was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, with columns indicating varying feed rates and rows indicating varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
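The ANOVA comparison underlying such a screening can be illustrated with the one-way F statistic computed directly. This is a generic sketch, not the authors' Minitab workflow, and the sample damage percentages are invented for illustration.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares over lists of damage values, one list
    per machining setting."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F indicates that the machining settings explain far more variation in hole damage than the scatter within each setting, which is the signal that led to the low-feed, low-speed recommendation.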

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 370
170 Ankle Arthroscopy: Indications, Patterns of Admissions, Surgical Outcomes, and Associated Complications Among Saudi Patients at King Abdul-Aziz Medical City in Riyadh

Authors: Mohammad Abdullah Almalki

Abstract:

Background: Despite the frequent use of ankle arthroscopy, there is limited medical literature regarding its indications, patterns of admission, surgical outcomes, and associated complications in Saudi Arabia. Hence, this study highlights the surgical outcomes of this surgical approach, to assist orthopedic surgeons in deciding which surgical procedure needs to be done as well as to help them with their diagnostic workups. Methods: At the Orthopedic Division of King Abdul-Aziz Medical City in Riyadh, through a cross-sectional design and convenience sampling, the present study recruited 20 subjects who fulfilled the inclusion and exclusion criteria between 2016 and 2018. Data collection was carried out with a questionnaire designed and revised by an expert panel of health professionals. Results: Twenty patients were reviewed (11M and 9F) with an average age of 40.1 ± 12.2. Only 30% of the patients (5M, 1F) had no comorbidity, while 70% of patients (7M, 8F) had at least one comorbidity. The most common indications were osteochondritis dissecans (n = 7, 35%), ankle fracture without dislocation (n = 4, 20%), and tibiotalar impingement (n = 3, 15%). Pain was recorded in all cases (100%). The top four symptoms after pain were instability (30%, n = 6), muscle weakness (15%, n = 3), swelling (15%, n = 3), and stiffness (5%, n = 1). Two-thirds of cases returned to their full healthy status, and toe-touch weight-bearing was seen in two patients (10%). Conclusion: Ankle arthroscopy improved the rehabilitation rates in our tertiary care center. In addition, the surgical outcomes are favorable in our hospital, with a very short length of stay, unexpended surgery, and few physiotherapy sessions.

Keywords: ankle, arthroscopy, indications, patterns

Procedia PDF Downloads 58
169 The Connection between Heroism and Violence in War Narratives from the Aspect of Rituals

Authors: Rita Fofai

Abstract:

The aim of the study is to support peacebuilding by analyzing the symbolic level of fights in war. Despite the suffering it entails, war heroism still represents a noble value in war narratives (especially in literature and films, whether high or popular culture) that can make warfare attractive to every age group. The questions of the study revolve around events in which heroism is not a necessary and unselfish act for a greater good, but in which the primary aim is to express strength in order to build self-mythology. Since war is a scene where the mythological level can meet reality, and even modern narratives use the elements of rituals and sacral references in secular contexts, understanding the connection between rites and modern battles grounds this study, and the analysis follows the logic of violent rites. From this aspect, war is not merely a fight for different countries and ideas, but also a fight of mankind with superhuman and natural or supernatural phenomena. In this context, the enemy symbolizes the threat of a world that is unpredictable for mankind, and the fight becomes a ritual combat; therefore, the winner's symbolic reward is to redefine himself or herself not only in the human environment but in the context of the whole world. The analysis reveals that this kind of violence does not represent real heroism and rarely results in recruitment; on the contrary, it conserves fear and the feeling of weakness, which is the root cause of this kind of act. The result of this study is a way to reshape attitudes toward so-called heroic war violence, which is often part of war narratives even today. Since stepping out of the war tradition is mainly a cultural question, redefining the connection between society and the narratives that affect mentality and emotions, and giving a clear guide to distinguishing heroism from useless violence, is very important in peacebuilding.

Keywords: war, ritual, heroism, violence, narratives, culture

Procedia PDF Downloads 107
168 Association between Healthy Eating Index-2015 Scores and the Probability of Sarcopenia in Community-Dwelling Iranian Elderly

Authors: Zahra Esmaeily, Zahra Tajari, Shahrzad Daei, Mahshid Rezaei, Atefeh Eyvazkhani, Marjan Mansouri Dara, Ahmad Reza Dorosty Motlagh, Andriko Palmowski

Abstract:

Objective: Sarcopenia (SPA) is associated with frailty and disability in the elderly. Adherence to current dietary guidelines in addition to physical activity could play a role in the prevention of muscle wasting and weakness. The Healthy Eating Index-2015 (HEI) is a tool to assess diet quality as recommended in the U.S. Dietary Guidelines for Americans. This study aimed to investigate whether there is a relationship between HEI scores and the probability of SPA (PS) among the Tehran elderly. Method: A previously validated semi-quantitative food frequency questionnaire was used to assess HEI and the dietary intake of randomly selected elderly people living in Tehran, Iran. Handgrip strength (HGS) was measured to evaluate the PS. Statistical evaluation included descriptive analysis and standard test procedures. Result: 201 subjects were included. Those probably suffering from SPA (as determined by HGS) had significantly lower HEI scores (p = 0.02). After adjusting for confounders, HEI scores and HGS were still significantly associated (adjusted R2 = 0.56, slope β = 0.03, P = 0.09). Elderly people with a low probability of SPA consumed more monounsaturated and polyunsaturated fatty acids (P = 0.06) and ingested less added sugars and saturated fats (P = 0.01 and P = 0.02, respectively). Conclusion: In this cross-sectional study, HEI scores are associated with the probability of SPA. Adhering to current dietary guidelines might contribute to ameliorating muscle strength and mass in aging individuals.

Keywords: aging, HEI-2015, Iranian, sarcopenic

Procedia PDF Downloads 174
167 Argumentation Frameworks and Theories of Judging

Authors: Sonia Anand Knowlton

Abstract:

With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. Of course, the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary relation of attack. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us what set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, contributing simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges to themselves assign values to different arguments and then lexically order these values to determine the given framework's preferred extension. Drawing on John Rawls' Theory of Justice, the article examines whether values are lexically orderable and commensurable to this extent.
The analysis then suggests a potential extension of the EAF system with an approach that formalizes different "planes of attack" for competing arguments that promote lexically ordered values. The article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
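The baseline machinery the article builds on is Dung's abstract argumentation framework. A minimal sketch of computing its grounded extension, the skeptical counterpart of the preferred extensions discussed, without any of the EAF value-ordering machinery, is as follows; the argument names are placeholders.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework:
    iterate the characteristic function F(S) = {a : every attacker of a
    is itself attacked by some member of S} from the empty set until a
    fixed point is reached."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended
```

For the chain a attacks b, b attacks c, the grounded extension is {a, c}: a is unattacked, and a defends c by attacking b. Mutual attacks with no outside support, by contrast, leave the grounded extension empty, which is exactly the kind of indeterminacy the value-ordering machinery of EAFs is meant to resolve.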

Keywords: computer science, mathematics, law, legal theory, judging

Procedia PDF Downloads 39
166 Effectiveness of Gamified Virtual Physiotherapy Patients with Shoulder Problems

Authors: A. Barratt, M. H. Granat, S. Buttress, B. Roy

Abstract:

Introduction: Physiotherapy is an essential part of the treatment of patients with shoulder problems. The focus of treatment is usually centred on addressing specific physiotherapy goals, ultimately resulting in improvement in pain and function. This study investigates whether computerised physiotherapy using gamification principles is as effective as standard physiotherapy. Methods: Physiotherapy exergames were created using a combination of commercially available hardware, the Microsoft Kinect, and bespoke software. The exergames used were validated by mapping them to physiotherapy goals, which included strength, range of movement, control, speed, and activation of the kinetic chain. A multicentre, randomised, prospective controlled trial investigated the use of exergames on patients with Shoulder Impingement Syndrome who had undergone Arthroscopic Subacromial Decompression surgery. The intervention group was provided with the automated sensor-based technology, allowing them to perform exergames and track their rehabilitation progress. The control group was treated with standard physiotherapy protocols. Outcomes from different domains were used to compare the groups. An important metric was the assessment of the shoulder range of movement pre- and post-operatively. The range-of-movement data included abduction, forward flexion, and external rotation, which were measured by the software pre-operatively and at 6 weeks and 12 weeks post-operatively. Results: Both groups showed significant improvement from pre-operative to 12 weeks in elevation in the forward flexion and abduction planes. Results for abduction showed an improvement for the intervention group (p < 0.015) as well as the control group (p < 0.003). Forward flexion improvement was (p < 0.0201) for the intervention group and (p < 0.004) for the control group.
There was, however, no significant difference between the groups at 12 weeks for abduction (p = 0.118), forward flexion (p = 0.190), or external rotation (p = 0.347). Conclusion: Exergames may be used as an alternative to standard physiotherapy regimes; however, further analysis is required focusing on patient engagement.

Keywords: shoulder, physiotherapy, exergames, gamification

Procedia PDF Downloads 156