Search results for: geological feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1994

644 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot

Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan

Abstract:

Home zone parking pilot (HPP) is a fast-growing segment in low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself within a range of up to 100 meters inside a recurring home/office parking lot, which calls for a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost fish-eye camera based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-unfriendly and computationally expensive, making deployment on embedded ADAS systems difficult. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method can reduce computational costs while maintaining high accuracy. Once combined with the vehicle's wheel-pulse information, the system can construct maps and locate the vehicle in real time. This article will discuss in detail (1) the fish-eye based Around View Monitoring (AVM) with transparent chassis images as the inputs, (2) an Object Detection (OD) based feature point extraction algorithm to generate the point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we will demonstrate the experimental results with an embedded ADAS system installed on a real car in an underground parking lot.
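
To make the mapping step concrete, the following minimal sketch (not the authors' implementation) shows how parking-slot corner points detected in the vehicle frame of an AVM image could be accumulated into a world-frame point cloud using a dead-reckoned pose from wheel-pulse odometry; the corner coordinates and pose values are hypothetical.

```python
# Illustrative sketch: accumulating parking-slot corner detections into a
# world-frame point cloud using dead-reckoned vehicle poses (values assumed).
import numpy as np

def vehicle_to_world(points_vehicle, pose):
    """Transform Nx2 points from the vehicle frame to the world frame.
    pose = (x, y, yaw) from wheel-pulse dead reckoning (hypothetical values)."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return points_vehicle @ R.T + np.array([x, y])

# Corners of one detected parking slot in the AVM (vehicle) frame, in metres.
slot_corners = np.array([[3.0, 1.2], [3.0, 3.7], [5.4, 3.7], [5.4, 1.2]])
pose = (12.5, -4.0, np.deg2rad(30.0))   # assumed odometry pose

map_points = vehicle_to_world(slot_corners, pose)
print(map_points)  # world-frame feature points appended to the parking-lot map
```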

Keywords: ADAS, home zone parking pilot, object detection, visual SLAM

Procedia PDF Downloads 61
643 Impact of Agriculture on the Groundwater Quality: Case of the Alluvial Plain of Nil River (North-Eastern Algeria)

Authors: S. Benessam, T. H. Debieche, A. Drouiche, F. Zahi, S. Mahdid

Abstract:

The intensive use of chemical fertilizers and pesticides in agriculture often produces contamination of the groundwater by organic pollutants. Irrigation and/or rainwater transport the pollutants towards groundwater or surface water. Among these pollutants is nitrogen, often observed in agricultural zones in the nitrate form. This study was conducted in order to understand the form and chemical mobility of nitrogen in groundwater. Bimonthly monitoring of the physicochemical parameters and water chemistry of the alluvial plain of the Nil river (north-eastern Algeria) was carried out during the period from November 2013 to January 2015, together with an in-situ investigation of the various chemical products used by the farmers. The results show elevated concentrations of nitrates in the wells (depth < 20 m) of the plain, where concentrations reach 50 mg/L (the potable water standard). On the other hand, in boreholes (depth > 20 m), two behaviours are observed. The first, in the upstream part, where the aquifer is unconfined and the medium is oxidizing, shows weak nitrate concentrations, indicating absorption by the soil during infiltration of water towards the groundwater. The second, in the central and downstream parts, where the groundwater is locally confined and the medium is reducing, shows an absence of nitrates and the appearance of nitrites and ammonium, indicating the reduction of nitrates. Projection of the analyses onto Eh-pH diagrams of nitrogen enabled us to determine the intervals of variation of the nitrogen forms. This study also highlighted the effect of the rains, the pumping, and the nature of the geological formations on the form and mobility of nitrogen in the plain.

Keywords: groundwater, nitrogen, mobility, speciation

Procedia PDF Downloads 241
642 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach

Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana

Abstract:

This research takes forward the concepts of Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and provide a mitigation mechanism, giving rise to a resilient and automated healthcare sector network system with the added feature of extended privacy preservation. Due to the daily emergence of new malware attacks on hospital Integrated Clinical Environments (ICEs), the healthcare industry faces constant uncertainty about its operational continuity. This state of blindness, driven by the array of indispensable opportunities that new medical device inventions and their connected coordination offer daily, is a factor that should be a focus but is not yet entirely understood by most healthcare operators and patients. This solution involves four clients in the form of hospital networks to build up the federated learning experimental architecture, with different geographical participation, to reach the most reasonable accuracy rate with privacy preservation. While logistic regression with cross-entropy performs the detection, SDN comes in handy in the second half of the research to stack up the initial development phases of the system with malware mitigation based on policy implementation. The overall evaluation sums up with a system that proves its accuracy with the added privacy. It is no longer necessary to continue with traditional centralized systems that offer almost everything but not privacy.
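
The detection half of the described system rests on federated training of a logistic-regression classifier. The sketch below illustrates the federated-averaging idea with four simulated clients; the hospital datasets, feature dimensionality, learning rate, and number of rounds are assumptions for illustration, and the SDN mitigation layer is not modeled.

```python
# Minimal FedAvg sketch with logistic regression (illustrative only; the hospital
# datasets, feature dimensions, and SDN policy layer from the paper are not modeled).
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_features, rounds, lr = 4, 10, 20, 0.1

# Hypothetical local datasets: (X, y) per hospital network.
clients = [(rng.normal(size=(200, n_features)), rng.integers(0, 2, 200))
           for _ in range(n_clients)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, epochs=5):
    for _ in range(epochs):                            # local gradient steps on private data
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)     # cross-entropy gradient
        w = w - lr * grad
    return w

w_global = np.zeros(n_features)
for _ in range(rounds):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)          # server averages client models
```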

Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation

Procedia PDF Downloads 171
641 Virtual Team Management in Companies and Organizations

Authors: Asghar Zamani, Mostafa Falahmorad

Abstract:

Virtualization is established to combine and use the unique capabilities of employees to increase productivity and agility to provide services regardless of location. Adapting to fast and continuous change and getting maximum access to human resources are reasons why virtualization is happening. The distance problem is solved by information. Flexibility is the most important feature of virtualization, and information will be the main focus of virtualized companies. In this research, we used the Covid-19 opportunity window to assess the productivity of the companies that had been going through more virtualized management before Covid-19, in comparison with those that only started planning to develop infrastructure for virtual management after the crisis of the pandemic occurred. The research process includes assessment of financial (profitability and customer satisfaction) and behavioral (organizational culture and reluctance to change) metrics. In addition to financial and CRM KPIs, a questionnaire was devised to assess how managers' and employees' attitudes have been changing towards the migration to virtualization. The sample companies and questions were selected with input from experts in the IT industry of Iran. In this article, the conclusion is that companies open to virtualization, based on accurate strategic planning or willing to pay to train their employees for virtualization before the pandemic, are more agile in adapting to change and moving forward in a recession. The prospective companies in this research not only could compensate for the short-term loss from the first shock of Covid-19, but could also foresee the new needs of their customers sooner than other competitors, resulting in the need to employ new staff to execute the emerging demands. The findings were aligned with the literature review. The results can be a wake-up call for business owners, especially in developing countries, to be more resilient toward modern management styles instead of continuing with traditional ones.

Keywords: virtual management, virtual organization, competitive advantage, KPI, profit

Procedia PDF Downloads 76
640 Thermal Maturity and Hydrocarbon Generation Histories of the Silurian Tannezuft Shale Formation, Ghadames Basin, Northwestern Libya

Authors: Emir Borovac, Sedat İnan

Abstract:

The Silurian Tannezuft Formation within the Ghadames Basin of Northwestern Libya, like other Silurian shales in North Africa and the Middle East, represents a significant prospect for unconventional hydrocarbon exploration. Unlike the more popular and extensively studied Sirt Basin, the Ghadames Basin remains underexplored, presenting untapped potential that warrants further investigation. This study focuses on the thermal maturity and hydrocarbon generation histories of the Tannezuft shales, utilizing calibrated basin modeling approaches. The Tannezuft shales are organic-rich and primarily contain Type II kerogen, especially in the basal layer, which contains up to 10 wt. % TOC, leading to its designation as ‘hot shale’. The research integrates geological, geochemical, and basin modeling data to elucidate the unconventional hydrocarbon potential of this formation, which is crucial given the global demand for energy and the need for new resources. By employing PetroMod software from Schlumberger, calibrated modeling results simulate hydrocarbon generation and migration within the Tannezuft shales. The findings suggest dual-phase hydrocarbon generation from the Lower Silurian Tannezuft source rock, related to deep burial prior to Hercynian orogeny and subsequent Alpine orogeny events. The Ghadames Basin's tectonic history, including major Hercynian and Alpine orogenies, has significantly influenced the generation, migration, and preservation of hydrocarbons, making the Ghadames Basin a promising area for further exploration.

Keywords: tanezzuft formation, ghadames basin, silurian hot shale, unconventional hydrocarbon

Procedia PDF Downloads 8
639 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and its complexity of prediction. This study aims to present a machine learning model to forecast the significant wave height of the oceanographic wave-measuring buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed by a multilayer perceptron neural network-genetic algorithm (GA-MLP), considering ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLPNN algorithm was run with a population size of thirty individuals for eight generations to optimize the 5-step-ahead prediction, obtaining a performance evaluation of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the MLPNN-GA model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model, outperforming it on all performance criteria and validating the potential of this algorithm.
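
The following is a compact sketch of the GA-wrapped MLP idea, assuming a synthetic lagged series in place of the Mooloolaba buoy data and reducing the gene set to two hyperparameters (hidden neurons and learning rate); the full scheme described above also evolves the activation function, layer count, and window width.

```python
# Illustrative GA-over-MLP hyperparameter search (a simplified stand-in for the
# GA-MLP scheme described; the buoy data and the full gene set are assumed).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Hypothetical lagged significant-wave-height data: predict H(t) from 6 past values.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)
X = np.column_stack([series[i:-(6 - i)] for i in range(6)])
y = series[6:]
X_tr, X_va, y_tr, y_va = X[:1500], X[1500:], y[:1500], y[1500:]

def fitness(genes):
    neurons, lr = genes
    model = MLPRegressor(hidden_layer_sizes=(int(neurons),), activation="relu",
                         learning_rate_init=lr, max_iter=200, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))   # lower is better

pop = [(rng.integers(5, 50), 10 ** rng.uniform(-4, -1)) for _ in range(10)]
for generation in range(8):                                # eight generations, as above
    pop = sorted(pop, key=fitness)[:5]                     # keep the fittest half
    children = [(max(2, int(n + rng.integers(-5, 6))), lr * 10 ** rng.uniform(-0.3, 0.3))
                for n, lr in pop]                          # mutate survivors
    pop = pop + children

best = min(pop, key=fitness)
print("best (hidden neurons, learning rate):", best)
```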

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 95
638 Modeling Sediment Transport under Extreme Storm Situations along the Persian Gulf North Coast

Authors: Majid Samiee Zenoozian

Abstract:

The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of about 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces — the stable Arabian Foreland and the unstable Iranian Fold Belt — which are reflected in the contrasting shore and bathymetric morphologies of Arabia and Iran. The sediments were sampled from 72 offshore stations during an oceanographic cruise in the winter of 2018. Throughout the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or probable storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently conveyed seaward due to gravitational forcing. The measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model, which provides data on estuarine circulation and salinity transport under normal temperature conditions. 3-D sediment transport from the model simulations indicates dynamic sediment resuspension and transport near zones of highly productive oyster beds.
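
For readers unfamiliar with how such models parameterize resuspension and settling, the snippet below shows Partheniades-type erosion and Krone-type deposition fluxes of the kind commonly used in cohesive-sediment modules such as EFDC's; the critical shear stresses, erosion constant, and settling velocity are illustrative values, not those calibrated in this study.

```python
# Illustrative cohesive-sediment fluxes: Partheniades-type erosion and Krone-type
# deposition. All parameter values are assumed for demonstration only.
def erosion_flux(tau_b, tau_ce=0.2, M=5e-5):
    """Erosion flux (kg m^-2 s^-1) when bed shear stress exceeds the critical value."""
    return M * (tau_b / tau_ce - 1.0) if tau_b > tau_ce else 0.0

def deposition_flux(tau_b, conc, w_s=5e-4, tau_cd=0.1):
    """Deposition flux (kg m^-2 s^-1) when bed shear stress is below the critical value."""
    return w_s * conc * (1.0 - tau_b / tau_cd) if tau_b < tau_cd else 0.0

# Calm vs storm conditions for a suspended-sediment concentration of 0.5 kg/m^3.
for tau_b in (0.05, 0.5):   # bed shear stress in Pa (hypothetical)
    print(tau_b, erosion_flux(tau_b), deposition_flux(tau_b, conc=0.5))
```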

Keywords: sediment transport, storm, coast, fluid dynamics

Procedia PDF Downloads 102
637 Evaluation of Shale Gas Resource Potential of Cambay Basin, Gujarat, India

Authors: Vaishali Sharma, Anirbid Sircar

Abstract:

Energy is one of the most eminent and fundamental strategic commodities, the scarcity of which may pose a great impact on the functioning of the entire economy. According to the present study, the estimated reserves of gas in India as on 31.03.2015 stood at 1427.15 BCM. Gas demand is expected to grow significantly at a CAGR of 7%, from 226.7 MMSCMD in 2012-13 to 713.5 MMSCMD in 2029-30. To bridge the gap between the demand and supply of energy, interest in the exploration and exploitation of unconventional resources like shale gas, coal bed methane, gas hydrates, and tight gas has grown immensely. Nowadays, shale gas prospects are emerging rapidly as a promising energy source globally. The United States of America (USA) has 240 TCF of proved reserves of shale gas and presently contributes more than 17% of total gas production. Compared to the USA, shale gas production in India is at a nascent stage. A resource potential of around 2000 TCF is estimated, and according to preliminary data analysis, basins like Gondwana, Cambay, Krishna-Godavari, Cauvery, Assam-Arakan, Rajasthan, Vindhyan, and Bengal are the most promising shale gas basins. In the present study, a careful evaluation of Cambay Shale (Indian shale) properties like geological age, lithology, depth, organically rich thickness, TOC, thermal maturity, porosity, permeability, clay content, quartz content, kerogen type, hydrocarbon window, etc. has been done, followed by a detailed comparison of Indian shale with USA shale. This study investigates the qualitative and quantitative nature of potential shale basins, which will be helpful from an exploration and exploitation point of view.
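
The quoted demand projection is a straightforward compound-growth calculation; a quick check (taking the horizon as 2029-30, i.e., 17 years of growth) reproduces the cited figure to within rounding:

```python
# Quick check of the compound-growth projection quoted above (17 years at 7% CAGR).
base_demand = 226.7          # MMSCMD in 2012-13
cagr, years = 0.07, 17       # growth to 2029-30
projected = base_demand * (1 + cagr) ** years
print(round(projected, 1))   # ~716 MMSCMD, consistent with the cited 713.5 MMSCMD
```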

Keywords: shale, shale gas, energy source, lithology

Procedia PDF Downloads 280
636 A Critical Discourse Analysis of the Construction of Artists' Reputation by Online Art Magazines

Authors: Thomas Soro, Tim Stott, Brendan O'Rourke

Abstract:

The construction of artistic reputation has been examined within sociology, philosophy, and economics but, barring a few noteworthy exceptions, its discursive aspect has been largely ignored. This is particularly surprising given that contemporary artworks primarily rely on discourse to construct their ontological status. This paper contributes a discourse analytical perspective to the broad body of literature on artistic reputation by providing an understanding of how it is discursively constructed within the institutional context of online contemporary art magazines. This paper uses corpora compiled from the websites of e-flux and ARTnews, two leading online contemporary art magazines, to examine how these organisations discursively construct the reputation of artists. By constructing word-sketches of the term 'Artist', the paper identified the most significant modifiers attributed to artists and the most significant verbs which have 'artist' as an object or subject. The most significant results were analysed through concordances and demonstrated a somewhat surprising lack of evaluative representation. To examine this feature more closely, the paper then analysed three announcement texts from e-flux's site and three review texts from ARTnews' site, comparing the use of modifiers and verbs in the representation of artists, artworks, and institutions. The results of this analysis support the corpus findings, suggesting that artists are rarely represented in evaluative terms. Based on the relatively high frequency of evaluation in the representation of artworks and institutions, these results suggest that there may be discursive norms at work in the field of online contemporary art magazines which regulate the use of verbs and modifiers in the evaluation of artists.
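
As an illustration of the word-sketch-style queries described (using spaCy here as a stand-in tool; the actual corpora and query software of the study are not reproduced), modifiers and governing verbs of 'artist' can be counted as follows:

```python
# Illustrative word-sketch-style extraction with spaCy as a stand-in tool.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
texts = [
    "The celebrated artist presents a new installation at the gallery.",
    "The museum commissioned the emerging artist to produce three works.",
]

modifiers, verbs = Counter(), Counter()
for doc in nlp.pipe(texts):
    for tok in doc:
        if tok.lemma_ == "artist":
            modifiers.update(c.lemma_ for c in tok.children if c.dep_ == "amod")
            if tok.dep_ in ("nsubj", "dobj", "nsubjpass"):
                verbs.update([tok.head.lemma_])     # verb governing 'artist'

print(modifiers.most_common(), verbs.most_common())
```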

Keywords: contemporary art, corpus linguistics, critical discourse analysis, symbolic capital

Procedia PDF Downloads 150
635 Geochemistry and Petrogenesis of Anorogenic Acid Plutonic Rocks of Khanak and Devsar of Southwestern Haryana

Authors: Naresh Kumar, Radhika Sharma, A. K. Singh

Abstract:

Acid plutonic rocks from the Khanak and Devsar areas of southwestern Haryana were investigated to understand their geochemical and petrogenetic characteristics and tectonic environments. Three dominant rock types (grey, grayish green, and pink granites) are the principal geochemical feature of the Khanak and Devsar areas, reflecting the dependence of their composition on the varied geological environment during the anorogenic magmatism. These rocks are enriched in SiO₂, Na₂O+K₂O, Fe/Mg, Rb, Zr, Y, Th, U, and REE (Rare Earth Elements) and depleted in MgO, CaO, Sr, P, Ti, Ni, Cr, V, and Eu, and exhibit a clear affinity to within-plate granites that were emplaced in an extensional tectonic environment. Chondrite-normalized REE patterns show enriched LREE (Light Rare Earth Elements), moderate to strong negative Eu anomalies, and flat heavy REE; the grey and grayish green granites differ from the pink granite, which is enriched in Rb, Ga, Nb, Th, U, Y, and HREE (Heavy Rare Earth Elements). The composition of the parental magma of both areas corresponds to a mafic source contaminated with crustal materials. Petrogenetic modelling suggests that the acid plutonic rocks might have been generated from a basaltic source by partial melting (15-25%), leaving a residue with 35% plagioclase, 25% alkali feldspar, 25% quartz, 7% orthopyroxene, 5% biotite, and 3% hornblende. Granites from the two areas might have formed from different sources with different degrees of melting for the grey, grayish green, and pink granites.
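
The partial-melting argument can be made concrete with the standard batch-melting relation C_L = C_0 / (D + F(1 - D)); the source concentration and bulk partition coefficient below are illustrative assumptions, not the authors' modelling inputs.

```python
# Illustrative batch partial-melting calculation of the kind used in petrogenetic
# modelling: C_liquid / C_source = 1 / (D + F (1 - D)). Inputs are assumed values,
# not the authors' actual mineral/melt partition coefficients.
def batch_melt(c_source, D, F):
    """Trace-element concentration in the melt for bulk partition coefficient D
    and melt fraction F."""
    return c_source / (D + F * (1.0 - D))

c_rb_source = 40.0            # ppm Rb in the assumed basaltic source
D_rb = 0.05                   # assumed bulk partition coefficient for Rb
for F in (0.15, 0.20, 0.25):  # the 15-25% melting range discussed above
    print(F, round(batch_melt(c_rb_source, D_rb, F), 1), "ppm Rb in melt")
```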

Keywords: A-type granite, anorogenic, Malani igneous suite, Khanak and Devsar

Procedia PDF Downloads 166
634 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

Authors: Yuan-Jye Tseng, Ching-Yen Chen

Abstract:

In the design cycle, a main design task is to determine the external shape of the product. The external shape of a product is one of the key factors that can affect customers' preferences, linking to the motivation to buy the product, especially in the case of a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences needs to be studied to enhance the customer's purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape and the customer preferences of a product. In the first stage, the names of the geometric features are collected and evaluated from the data of the specified internet web pages using the developed text miner. The key geometric features can be determined if the number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are explored using the text miner to collect the internet data from the web pages. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features to divide the external shapes into several groups. Several design suggestion cases can be proposed, for example, a large model, a mid-size model, and a mini model, for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each of the key geometric features of the design suggestion cases. The design suggestion case with the top ranking of the customer preference index can be selected as the final design of the product. In this paper, an example product of a notebook computer is illustrated. It shows that the external shape of a product can be used to drive customer preferences. The presented design for customer preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
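
A minimal sketch of the second-stage clustering is given below, assuming hypothetical mined values for three geometric features; the actual text-mined data and feature set are not reproduced.

```python
# Illustrative clustering of geometric-feature values into design-suggestion groups
# (e.g. mini / mid-size / large). The feature values are hypothetical, not mined data.
import numpy as np
from sklearn.cluster import KMeans

# Columns: length, width, thickness (mm) collected for candidate external shapes.
X = np.array([[146, 71, 7.4], [160, 75, 8.1], [131, 64, 7.6],
              [158, 76, 7.9], [135, 66, 7.2], [170, 78, 8.9]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # group index for each candidate shape
print(kmeans.cluster_centers_)  # representative geometry of each group
```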

Keywords: cluster analysis, customer preferences, design evaluation, design for customer preferences, product design

Procedia PDF Downloads 178
633 Forecasting Impacts on Vulnerable Shorelines: Vulnerability Assessment Along the Coastal Zone of Messologi Area - Western Greece

Authors: Evangelos Tsakalos, Maria Kazantzaki, Eleni Filippaki, Yannis Bassiakos

Abstract:

The coastal areas of the Mediterranean have been extensively affected by the transgressive event that followed the Last Glacial Maximum, with many studies conducted regarding the stratigraphic configuration of coastal sediments around the Mediterranean. The coastal zone of the Messologi area, western Greece, consists of low-relief beaches containing low cliffs and eroded dunes, a fact which, in combination with the rising sea level and tectonic subsidence of the area, has led to substantial coastal erosion. Coastal vulnerability assessment is a useful means of identifying areas of coastline that are vulnerable to impacts of climate change and coastal processes, highlighting potential problem areas. Commonly, coastal vulnerability assessment takes the form of an 'index' that quantifies the relative vulnerability along a coastline. Here we make use of the coastal vulnerability index (CVI) methodology of Thieler and Hammar-Klose, considering geological features, coastal slope, relative sea-level change, shoreline erosion/accretion rates, mean significant wave height, and mean tide range to assess the present-day vulnerability of the coastal zone of the Messologi area. In light of this, an impact assessment is performed under three different sea level rise scenarios, and adaptation measures to control climate change events are proposed. This study contributes toward coastal zone management practices in low-lying areas that have little data information, assisting decision-makers in adopting the best adaptation options to overcome sea level rise impact on vulnerable areas similar to the coastal zone of Messologi.
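
The CVI of Thieler and Hammar-Klose is computed as the square root of the product of the six ranked variables divided by six; the sketch below applies this formula to a hypothetical shoreline segment (rank values are assumed, each on a 1-5 vulnerability scale).

```python
# CVI = sqrt((a*b*c*d*e*f)/6), where each variable is ranked from 1 (very low)
# to 5 (very high vulnerability). The rank values below are hypothetical.
from math import prod, sqrt

def cvi(geomorphology, slope, sea_level_change, erosion_rate, tide_range, wave_height):
    ranks = (geomorphology, slope, sea_level_change, erosion_rate, tide_range, wave_height)
    return sqrt(prod(ranks) / 6.0)

# Example shoreline segment: low cliffs, gentle slope, high erosion rate.
print(round(cvi(4, 5, 3, 4, 2, 3), 1))
```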

Keywords: coastal vulnerability index, coastal erosion, sea level rise, GIS

Procedia PDF Downloads 169
632 A Comparative Study of Dengue Fever in Taiwan and Singapore Based on Open Data

Authors: Wei Wen Yang, Emily Chia Yu Su

Abstract:

Dengue fever is a mosquito-borne tropical infectious disease caused by the dengue virus. After infection, symptoms usually start within three to fourteen days. Dengue virus may cause a high fever and at least two of the following symptoms: severe headache, severe eye pain, joint pain, muscle or bone pain, vomiting, a characteristic skin rash, and mild bleeding manifestations. In addition, recovery will take at least two to seven days. Dengue fever has rapidly spread in tropical and subtropical areas in recent years. Several phenomena around the world, such as global warming, urbanization, and international travel, are the main reasons boosting the spread of dengue. In Taiwan, epidemics occur annually, especially during the summer and fall seasons. On the other side, the Singapore government has also announced the number of dengue cases spreading in Singapore. With serious dengue fever epidemics breaking out in Taiwan and Singapore, countries around the Asia-Pacific region are at high risk of becoming susceptible to outbreaks and local hubs for spreading the virus. To improve public safety and public health, we first use Microsoft Excel and SAS EG to do data preprocessing. Secondly, support vector machines and decision trees are used to build predictive models and analyze the infection cases in Taiwan and Singapore. By comparing the different factors related to the vector mosquito from model classification and regression, we can find similar spreading patterns where the disease occurred most frequently. The result can provide sufficient information to predict future dengue infection outbreaks and control the diffusion of dengue fever among countries.
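
A minimal sketch of the second step is shown below using synthetic records; the actual Taiwan/Singapore open data, the Excel/SAS EG preprocessing, and the real feature set are not reproduced.

```python
# Illustrative classification sketch (synthetic data; not the actual open datasets).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Hypothetical weekly records: [mean temperature, rainfall, breeding-site index]
X = rng.normal(loc=[28, 120, 3], scale=[3, 60, 1.5], size=(400, 3))
y = (0.08 * X[:, 0] + 0.01 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, 400) > 5.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (DecisionTreeClassifier(max_depth=4), SVC(kernel="rbf", C=1.0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```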

Keywords: dengue fever, Taiwan, Singapore, Aedes aegypti

Procedia PDF Downloads 224
631 Analysis of Practical Guidelines for Mobile Device Security in Indonesia Based on NIST SP 1800-4

Authors: Mardiyansyah Mardiyansyah, Hendrik Maulana, Eka Kurnia Sari, Imam Baehaki, Mohammad Agus Prihandono

Abstract:

Mobile devices have become a key feature in Indonesian society and the economy, including the government and private sectors. Enterprises and government agencies already have concerns about mobile device security. However, small and medium enterprises (SMEs) do not have that awareness yet, especially new startup companies. Indonesia has several laws, regulations, and standards for managing security in mobile devices. Currently, Indonesian information security policies have not been harmonized; each government organization and large enterprise has its own rules and policies. This leads to a conflict of interest among government agencies and will certainly cause ineffectiveness in the implementation of policies. Therefore, an analysis of various government policies, regulations, and standards related to information security, especially on mobile devices, is carried out. This analysis is conducted to map the existing regulatory policies and standards into practical guidelines regarding NIST's information security, to show the effectiveness of NIST SP 1800-4 towards existing policies. This work focused on mapping the NIST SP 1800-4 framework onto existing regulations, standards, and guidelines in Indonesia. The research approach is a literature study to identify existing regulations, standards, and guidelines; the regulations are then mapped into the NIST SP 1800-4 framework and analyzed as to whether the framework could be applied to organizations in Indonesia. Finally, findings and recommendations are concluded by documenting the security characteristics. Based on the research findings, some of the regulations, standards, and guidelines in Indonesia are relevant to the elements in the NIST SP 1800-4 framework. From the mapping analysis, the strengths and weaknesses of mobile device security in Indonesia can be reported. It can also be concluded that the application of NIST SP 1800-4 can improve the effectiveness of mobile device security policies in Indonesia.

Keywords: mobile security, mobile security framework, NIST SP 1800-4, regulations

Procedia PDF Downloads 142
630 Lower Limb Oedema in Beckwith-Wiedemann Syndrome

Authors: Mihai-Ionut Firescu, Mark A. P. Carson

Abstract:

We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered. These include cardiomyopathy and intraabdominal tumours. Congenital malformations of the IVC, through causing relative venous stasis, can lead to lower limb oedema either directly or indirectly by favouring lower limb venous thromboembolism; however, they are yet to be reported as an associated feature of BWS. Given its life-threatening potential, the prompt initiation of treatment for bilateral DVT is paramount. In BWS patients, however, this can prove more complicated. Due to overgrowth, the above-average birth weight can continue throughout childhood. In this case, the patient's weight reached 170 kg, impacting on the choice of anticoagulation, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increased venous thrombosis risk. Therefore, patients with IVCA and bilateral DVT warrant specialist consideration and may benefit from multidisciplinary team management, with hematology and vascular surgery input. Conclusion: Here, we showcased a rare cause of bilateral lower limb oedema, namely bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVC agenesis and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration, taking into account the patient's weight and the presence of a significant predisposing vascular abnormality.

Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism

Procedia PDF Downloads 224
629 Development of Materials Based on Phosphates of NaZr2(PO4)3 with Low Thermal Expansion

Authors: V. Yu. Volgutov, A. I. Orlova, S. A. Khainakov

Abstract:

NaZr2(PO4)3 (NZP) and its structural analogues are characterized by peculiar behavior on heating – they expand and contract differently along different crystallographic directions due to the specific arrangements of the crystal structure in these compounds. An important feature of such structures is the ability to incorporate a wide variety of metal cations having different sizes and oxidation states, in different combinations and concentrations. These cations are located in different crystallographically non-equivalent positions of the octahedral-tetrahedral crystal framework as well as in inter-framework cavities. Thus, due to iso- and hetero-valent isomorphism of the cations (and the anions) in NZP, it becomes possible to tune the compositions and to obtain compounds with properties 'by design'. For the design of compounds with low and ultra-low thermal expansion, including those with tailored thermal expansion properties, the following crystal-chemical principles seem promising: 1) insertion of cations of different sizes into the crystal M1 position, and 2) variation in the composition of the compounds, providing different occupation of the crystal M1 position. Following these principles, we have designed and synthesized the following NZP-type phosphate series: a) where the radii of the cations in the M1 crystal position were varied: Zr1/4Zr2(PO4)3 - Th1/4Zr2(PO4)3 (series I); R1/3Zr2(PO4)3 where R = Nd, Eu, Er (series II); b) where the occupation of the M1 crystal position was varied: Zr1/4Zr2(PO4)3-Er1/3Zr2(PO4)3 (series III) and Zr1/4Zr2(PO4)3-Sr1/2Zr2(PO4)3 (series IV). The thermal expansion parameters were determined over the range of 25-800 ºC. For each series, the minimum axial coefficients of thermal expansion αa = αb and αc, and their anisotropy Δα = |αa - αc| (in 10⁻⁶ K⁻¹), were found as follows: -1.51, 1.07, 2.58 for Th1/4Zr2(PO4)3 (series I); -0.72, 0.10, 0.81 for Nd1/3Zr2(PO4)3 (series II); -2.78, 1.35, 4.12 for Er1/6Zr1/8Zr2(PO4)3 (series III); 2.23, 1.32, 0.91 for Sr1/2Zr2(PO4)3 (series IV). The measured tendencies of the thermal expansion of the crystals were in good agreement with the predicted ones. For one of the studied phosphates, namely Th1/16Zr3/16Zr2(PO4)3, structural refinement was carried out at 25, 200, 600, and 800 °C, and the dependencies of the structural parameters on temperature were determined.
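
For orientation, the axial expansion coefficients quoted above can be obtained from lattice parameters refined at two temperatures as alpha = delta-a / (a * delta-T); the lattice values in the sketch below are assumed for illustration, not the refined values of this study.

```python
# Illustrative calculation of axial thermal-expansion coefficients and their
# anisotropy from lattice parameters at two temperatures (values below are assumed,
# not the refined parameters reported for Th1/16Zr3/16Zr2(PO4)3).
def axial_cte(p_low, p_high, t_low=25.0, t_high=800.0):
    """Mean axial coefficient of thermal expansion (K^-1)."""
    return (p_high - p_low) / (p_low * (t_high - t_low))

a_25, a_800 = 8.800, 8.785    # hypothetical a-axis lengths (Angstrom)
c_25, c_800 = 22.700, 22.724  # hypothetical c-axis lengths (Angstrom)

alpha_a = axial_cte(a_25, a_800)
alpha_c = axial_cte(c_25, c_800)
print(f"alpha_a = {alpha_a:.2e} K^-1, alpha_c = {alpha_c:.2e} K^-1, "
      f"anisotropy = {abs(alpha_a - alpha_c):.2e} K^-1")
```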

Keywords: high-temperature crystallography, NaZr2(PO4)3, (NZP) analogs, structural-chemical principles, tuning thermal expansion

Procedia PDF Downloads 228
628 An Assessment of Inland Transport Operator's Competitiveness in Phnom Penh, Cambodia

Authors: Savin Phoeun

Abstract:

During the long civil war, the economy, infrastructure, and social and political structures were destroyed, and everything had to start from zero. Transport and communication are key features of national economic growth; inland transport in particular, together with other modes, plays a complementary role supported both directly and indirectly by the government and international organizations, reaching the private sector and small and medium-sized enterprises. The objectives of this study are to examine the general characteristics, capacity, and competitive KPIs of Cambodian inland transport operators. A questionnaire and interviews were formed from capacity and competitiveness key performance indicators, and inland transport companies in Phnom Penh, the capital city of Cambodia, took part in the survey. Descriptive statistics were applied to analyse the data. The results of this study fall into three distinct areas: 1) management ability of transport operators – capital management, finance, and qualifications are at a similar level, so local competitors can compete with each other (moderate level); 2) operational ability: customer service provision is better, but operations appear high-cost because most operators are family-sized; 3) local Cambodian inland transport service providers are able to compete with each other because they operate at a similar level, while Thai competitors mostly operate at a higher level. The suggestion and recommendation from the results are that inland transport companies should access new technology, improve strategic management, and build partnerships (joint ventures/incorporation) to reach a bigger capital and company size in order to attract trust from customers and customize services to satisfy them. Inland service providers should shift their character from cost competition alone to cost saving and service enhancement.

Keywords: assessment, competitiveness, inland transport, operator

Procedia PDF Downloads 255
627 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the most essential parts of various types of domain applications of video data systems. Recently, it has gained extensive interest from experts and academics in different areas. While video event detection has been the subject of broad study efforts recently, considerably less existing methodology has considered multi-modal data and issues related to efficiency. During soccer matches, various doubtful situations arise that cannot be effectively judged by the referee committee. A framework that objectively checks image sequences would prevent incorrect interpretations caused by errors or by the high velocity of the events. Bayesian networks provide a structure for dealing with this uncertainty using a graphical structure together with probability calculus. We propose an efficient structure for analysing and summarizing soccer videos utilizing object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of otherwise intractable models, such as those for users' relationships in a social network. There are several advantages to this approach: firstly, the t-cherry junction tree gives the best approximation within the class of junction trees; secondly, the construction of a t-cherry junction tree can be parallelized to a great extent; and lastly, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work, shown over a comprehensive data set comprising several soccer videos captured at different places.

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 308
626 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, giving proper guidelines to the MRI technologist is important so that they do not change parameters that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might change between operators as well as for the same operator at different times. Therefore, overcoming these constraints is essential for a more impartial evaluation of quality. This makes quantitative estimation of image quality (IQ) metrics for MRI quality control very important. To solve this problem, we propose that there is a need for a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool for measuring MRI image quality (IQ) metrics (Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray Level Co-occurrence Matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution) that provided good accuracy assessment. A standardized quality report has been generated that incorporates metrics that impact diagnostic quality.
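
As one concrete example of such a metric, SNR can be estimated from a signal region and a background (air) region of the phantom image; the sketch below uses synthetic pixel values (no Rayleigh-noise correction, no DICOM reading), so it only illustrates the calculation, not the full tool.

```python
# Illustrative SNR / SNR-uniformity computation from phantom regions of interest
# (synthetic pixel data; a real tool would read the ACR phantom series from DICOM).
import numpy as np

rng = np.random.default_rng(3)
signal_roi = rng.normal(loc=900.0, scale=20.0, size=(40, 40))    # centre of phantom
background_roi = rng.normal(loc=0.0, scale=12.0, size=(40, 40))  # air region

snr = signal_roi.mean() / background_roi.std()

# A simple uniformity-style measure: spread of SNR across four corner sub-ROIs.
corners = [signal_roi[:20, :20], signal_roi[:20, 20:],
           signal_roi[20:, :20], signal_roi[20:, 20:]]
corner_snrs = [roi.mean() / background_roi.std() for roi in corners]
snr_uniformity = 100.0 * (1.0 - (max(corner_snrs) - min(corner_snrs))
                          / (max(corner_snrs) + min(corner_snrs)))

print(round(snr, 1), round(snr_uniformity, 1))
```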

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 153
625 Virtue, Truth, Freedom, And The History Of Philosophy

Authors: Ashley DelCorno

Abstract:

G. E. M. Anscombe's 1958 essay Modern Moral Philosophy and the tradition of virtue ethics that followed have given rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alasdair MacIntyre and Martha Nussbaum are proponents, for example, not just of Aristotle's relevancy but also of his apparent implicit authority. That said, it is not clear that the schema imagined by virtue ethicists accurately describes moral life or that it does not inadvertently work to impoverish genuine decision-making. If the label 'virtue' is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, thus rendering its own premise ridiculous. Likewise, as an inescapable feature of virtue ethics, Aristotelian binaries like 'virtue/vice' and 'voluntary/involuntary' offer up false dichotomies that may seriously compromise an agent's ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument here is built from a key tenet of black feminist thought regarding scarcity and possibility. Above all, it is clear that though the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises possess crippling limitations toward the achievement of this goal. These include, most notably, virtue ethics' binding analysis of history, as well as its axiomatic attachment to obligatory clauses, problematic reading-in of Aristotle, and arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue.

Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom

Procedia PDF Downloads 78
624 The Suitability of Agile Practices in Healthcare Industry with Regard to Healthcare Regulations

Authors: Mahmood Alsaadi, Alexei Lisitsa

Abstract:

Nowadays, medical devices rely completely on software, whether as whole software or as embedded software; therefore, organizations that develop medical device software can benefit from adopting agile practices. Using agile practices in healthcare software development industries would bring benefits such as producing a high-quality product at low cost and in a short period. However, medical device software development companies have faced challenges in adopting agile practices. These are due to the gaps that exist between agile practices and the requirements of healthcare regulations, such as documentation, traceability, and formality. This research paper conducts a study to investigate the adoption rate of agile practices in medical device software development and extracts and outlines the requirements of healthcare regulations, such as the Food and Drug Administration (FDA), the Health Insurance Portability and Accountability Act (HIPAA), and the Medical Device Directive (MDD), that affect the software development life cycle directly or indirectly. Moreover, this research paper evaluates the suitability of using agile practices in healthcare industries by analyzing the most popular agile practices, such as eXtreme Programming (XP), Scrum, and Feature-Driven Development (FDD), from a healthcare industry point of view and in comparison with the requirements of healthcare regulations. Finally, the authors propose an agile mixture model that consists of different practices from different agile methods. As a result, the adoption rate of agile practices in healthcare industries is still low, and agile practices should be enhanced with regard to the requirements of healthcare regulations in order to be used in healthcare software development organizations. Therefore, the proposed agile mixture model may assist in minimizing the gaps existing between healthcare regulations and agile practices and increase the adoption rate in the healthcare industry. As this research paper is part of an ongoing project, an evaluation of the agile mixture model will be conducted in the near future.

Keywords: adoption of agile, agile gaps, agile mixture model, agile practices, healthcare regulations

Procedia PDF Downloads 229
623 Litho-Structural Variations and Gold Mineralization around Wonaka Schist Belt, North West Nigeria

Authors: Umar Sambo Umar, Ahmad Isah Haruna, Abubakar Sadik Maigari, Muhammad Bello Abubakar

Abstract:

Schist belts in Nigeria occur prominently west of longitude 8° E and sporadically to the east; they are upper Proterozoic, low- to medium-grade, deformed metasediments and metavolcanics that were intruded by Pan-African granitoids. The Wonaka schist belt, though reportedly distinctive in composition and metamorphism, is the least understood; the hosts for primary gold have not been defined, and structures which may control primary enrichment have not been delineated. The aim of this work is to determine the relationship between litho-structures and gold around the Wonaka schist belt through geological field mapping, petrographic studies, and structural data analysis via ArcGIS 10.2, Surfer 11.0, and Stereopro 2.0. The results show that the major rock types are mica schist and migmatites; muscovite detected during microstructural analysis suggests low-grade metamorphism in the metapelites. The shear zones identified trend North-Northeast – South-Southwest (NNE-SSW), and fractures trend mostly Northeast-Southwest (NE-SW), perpendicular to the planes of gneissic foliation; these conform to the late Pan-African deformational episode. Pegmatite lodes, net self-cross-cutting quartz veins, as well as quartz stringers hosted by both migmatites and schist are delineated as targets for primary gold mineralization, while major confluences of the streams serve as zones for secondary (placer) gold targets since the streams are dendritic and intermittent.

Keywords: gold mineralization, Nigeria, migmatites, Wonaka schist belt

Procedia PDF Downloads 182
622 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. The automatic, early, and fast detection of AF is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work was aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and Normal Sinus Rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than do existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
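
A compact sketch of the PCA-plus-LVQ pipeline is shown below on synthetic two-class feature windows; the clinical databases and the four specific features used in the study are not reproduced, and the LVQ1 update rule here is a minimal stand-in for the reported network.

```python
# Minimal PCA + LVQ1 sketch for two-class RR-interval feature windows (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Hypothetical 4-feature windows: class 0 = normal sinus rhythm, class 1 = AF.
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(1.5, 1.2, (300, 4))])
y = np.array([0] * 300 + [1] * 300)

X_red = PCA(n_components=2).fit_transform(X)       # feature reduction step

# LVQ1: one prototype per class, moved toward/away from training samples.
prototypes = np.array([X_red[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X_red, y):
        k = np.argmin(np.linalg.norm(prototypes - xi, axis=1))  # best-matching prototype
        step = lr * (xi - prototypes[k])
        prototypes[k] += step if labels[k] == yi else -step

pred = labels[np.argmin(np.linalg.norm(X_red[:, None] - prototypes, axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```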

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 257
621 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, applying different impairment compensation algorithms has caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN will be a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of the 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
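
To illustrate the compensation idea in the simplest possible setting, the sketch below trains a small fully connected network to map distorted 16-QAM symbols back to the transmitted constellation; the impairment model (residual phase rotation, mild nonlinearity, noise) is assumed for demonstration and is not the coherent optical OFDM link of the paper.

```python
# Illustrative DNN "compensation" sketch: a small network learns to map distorted
# 16-QAM symbols (I/Q) back to the transmitted constellation points.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
levels = torch.tensor([-3.0, -1.0, 1.0, 3.0])
tx = levels[torch.randint(0, 4, (4000, 2))]        # clean 16-QAM symbols (I, Q)

theta = 0.2                                        # assumed residual phase rotation
rot = torch.tensor([[math.cos(theta), -math.sin(theta)],
                    [math.sin(theta),  math.cos(theta)]])
rx = tx @ rot.T
rx = rx + 0.05 * rx.pow(3) + 0.1 * torch.randn_like(rx)   # mild nonlinearity plus noise

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                           # offline DSP-style training
    optimizer.zero_grad()
    loss = loss_fn(model(rx), tx)
    loss.backward()
    optimizer.step()

print("final training MSE:", float(loss))
```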

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 133
620 Effect of Solid Waste on the Sustainability of the Water Resource Quality in the Gbarain Catchment of the Niger Delta Region of Nigeria

Authors: Davidson E. Egirani, Nanfe R. Poyi, Napoleon Wessey

Abstract:

This paper reports on the effect of solid waste on water resource quality in the Gbarain catchment of the Niger Delta Region of Nigeria. The Gbarain catchment presently hosts two waste-dump sites located along the flanks of a seasonal stream and on perennially waterlogged terrain. This anthropogenic activity has significantly affected the quality of surface water and groundwater in the Gbarain catchment. These wastes have made the water resource environment toxic, leading to the poisoning of aquatic life. The contaminated water resources could lead to serious environmental and human health challenges, ranging from low agricultural yields to the loss of vital human organs. The contamination occurs via geological processes such as seepage and direct infiltration of contaminants into watercourses. The results obtained from field and experimental investigations, followed by modeling and graphical interpretation, indicate heavy metal load and fecal pollution in some of the groundwater. The metal load, Escherichia coli, and total coliform counts exceed the international and regional recommended limits. The contaminant values include lead (> 0.01 mg/L), mercury (> 0.006 mg/L), manganese (> 0.4 mg/L), and Escherichia coli (> 0 per 100 ml) in some of the samples. Land use planning and the enactment and implementation of environmental laws are necessary in this region for effective surface water and groundwater resource management.

Keywords: aquatic life, solid waste, environmental health, human health, waste-dump site, water-resource environment

Procedia PDF Downloads 134
619 Enhancing the Interpretation of Group-Level Diagnostic Results from Cognitive Diagnostic Assessment: Application of Quantile Regression and Cluster Analysis

Authors: Wenbo Du, Xiaomei Ma

Abstract:

With the empowerment of Cognitive Diagnostic Assessment (CDA), various domains of language testing and assessment have been investigated to dig out more diagnostic information. What is noticeable is that most of the extant empirical CDA-based research puts much emphasis on individual-level diagnostic purposes, with very few studies concerned about learners' group-level performance. Even though personalized diagnostic feedback is the unique feature that differentiates CDA from other assessment tools, group-level diagnostic information cannot be overlooked in that it might be more practical in the classroom setting. Additionally, the group-level diagnostic information obtained via current CDA always results in a "flat pattern", that is, the mastery/non-mastery of all tested skills accounts for the two highest proportions. In that case, the outcome does not bring much more benefit than the original total score. To address these issues, the present study attempts to apply cluster analysis for group classification and quantile regression analysis to pinpoint learners' performance at different proficiency levels (beginner, intermediate, and advanced), thus enhancing the interpretation of the CDA results extracted from a group of EFL learners' reading performance on a diagnostic reading test designed by the PELDiaG research team from a key university in China. The results show that the EM method in cluster analysis yields more appropriate classification results than CDA, and quantile regression analysis does picture more insightful characteristics of learners with different reading proficiencies. The findings are helpful and practical for instructors to refine the EFL reading curriculum and instructional plans tailored to the group classification results and quantile regression analysis. Meanwhile, these statistical methods could also make up for the deficiencies of CDA and push forward the development of language testing and assessment in the future.
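
A minimal sketch of the quantile-regression step is given below on synthetic scores, with the 0.25/0.5/0.75 quantiles standing in for the beginner/intermediate/advanced levels; the PELDiaG test data and the cluster-analysis stage are not reproduced.

```python
# Illustrative quantile-regression sketch (synthetic scores; not the PELDiaG data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 300
df = pd.DataFrame({"vocab_skill": rng.uniform(0, 1, n)})
# Score whose spread grows with skill, so quantile slopes differ across levels.
df["reading_score"] = 40 + 35 * df["vocab_skill"] + rng.normal(0, 8, n) * (0.5 + df["vocab_skill"])

for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("reading_score ~ vocab_skill", df).fit(q=q)
    print(q, round(fit.params["vocab_skill"], 2))   # skill effect at each quantile
```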

Keywords: cognitive diagnostic assessment, diagnostic feedback, EFL reading, quantile regression

Procedia PDF Downloads 141
618 Evaluation of Adaptive Fitness of Indian Teak (Tectona grandis L. F.) Metapopulation through Inter Simple Sequence Repeat Markers

Authors: Vivek Vaishnav, Shamim Akhtar Ansari

Abstract:

Teak (Tectona grandis L.f.), belonging to the plant family Lamiaceae and the most commercialized timber species, is endemic to South Asia. The adaptive fitness of the species metapopulation was evaluated through its genetic differentiation and by assessing the influence of geo-climatic conditions. 290 genotypes were sampled from 29 locations of its natural distribution, and the genetic data were incorporated with geo-climatic parameters. Through a Bayesian-approach-based analysis of 43 highly polymorphic ISSR markers, six homogeneous clusters (0.8% genetic variability) were identified. The six clusters were found to be associated with various regimes of the temperature range, i.e., I - 9.10±1.35⁰C, II - 6.35±0.21⁰C, III - 12.21±0.43⁰C, IV - 10.8±1.06⁰C, V - 11.67±3.04⁰C, and VI - 12.35±0.21⁰C. The population had a very high percentage of LD (21.48%) among the amplified loci, possibly due to restricted gene flow as well as co-adaptation and association of distant/diverse loci/alleles as a result of stabilized climatic conditions and countless cycles of historical recombination events on a large geological timescale. The same possibly accounts for the narrow distribution of teak as a climax species in the tropical deciduous forests of the country. The regions of strong LD in the teak genome that are significantly associated with climatic parameters also reflect that the species is tolerant to wide regimes of the temperature range and may possibly withstand global warming and climate change in the coming millennium.

Keywords: Bayesian analysis, inter simple sequence repeat, linkage disequilibrium, marker-geoclimatic association

Procedia PDF Downloads 255
617 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas

Authors: Sheetal Sharma

Abstract:

Runoff, decreasing water levels, and recharge in urban areas have become a complex issue nowadays, pointing to defective urban design and increasing population as causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land use planning deals with land transformation from natural areas into developed ones, which leads to changes in the natural environment. Elaborated knowledge of the relationship between the existing patterns of land use-land cover and recharge, with respect to the prevailing soil below, is scarce compared to the speed of development. The parameters of incompatibility between urban functions and the functions of the natural environment are becoming numerous. Changes in land patterns due to built-up areas, pavements, roads, and similar land cover affect surface water flow seriously. They also change the permeability and absorption characteristics of the soil. Urban planners need to know natural processes along with the modern means and best technologies available, as there is a huge gap between basic knowledge of natural processes and its incorporation into balanced development planning with minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and Civil 3D AutoCAD. The variations were used in computer modeling with the Storm Water Management Model (SWMM) to find the runoff for various soil groups, and the resulting recharge was assessed by observing water levels in POW data for the last 40 years for the study area. The results were analyzed again to find the best correlations for sustainable recharge in urban areas.
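
As a simple illustration of why land-cover change matters for runoff, the standard SCS curve-number relation below shows how a higher curve number (more impervious cover) increases direct runoff for the same storm; the CN values and rainfall depth are assumed, and this textbook relation is not the study's calibrated SWMM model.

```python
# Illustrative SCS curve-number relation: direct runoff depth for a given storm.
def scs_runoff(p_mm, cn):
    """Direct runoff depth (mm) for rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

storm = 75.0                          # mm of rainfall (assumed)
for label, cn in (("open/natural cover", 65), ("dense urban cover", 90)):
    print(label, round(scs_runoff(storm, cn), 1), "mm runoff")
```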

Keywords: geology, runoff, urban planning, land use-land cover

Procedia PDF Downloads 308
616 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs to be improved. In addition, the existing models fail to fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
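
A skeletal sketch of a recurrent competing-risks model is given below: an LSTM maps each subject's covariate sequence to per-interval probabilities over the event-free state and two causes, from which a discrete-time cumulative incidence function can be accumulated. This is a simplified stand-in; the RIW attention mechanism and external autoencoder of CmpXRnnSurv_AE are not reproduced, and the data shapes are illustrative.

```python
# Skeletal recurrent competing-risks sketch (simplified stand-in, not CmpXRnnSurv_AE).
import torch
import torch.nn as nn

class RecurrentCompetingRisks(nn.Module):
    def __init__(self, n_features, hidden=32, n_causes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_causes + 1)     # +1 for the event-free state

    def forward(self, x):                               # x: (batch, time, features)
        h, _ = self.lstm(x)
        return torch.softmax(self.head(h), dim=-1)      # per-interval cause probabilities

model = RecurrentCompetingRisks(n_features=5)
x = torch.randn(8, 12, 5)                               # 8 subjects, 12 intervals, 5 covariates
probs = model(x)

# Discrete-time cumulative incidence for cause 1: cause-1 probabilities weighted by
# prior event-free survival (running product of event-free probabilities), then summed.
surv = torch.cumprod(probs[..., 0], dim=1)
prior_surv = torch.cat([torch.ones(8, 1), surv[:, :-1]], dim=1)
cif_cause1 = torch.cumsum(probs[..., 1] * prior_surv, dim=1)
print(probs.shape, cif_cause1.shape)
```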

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 78
615 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of the reservoir heterogeneity, which is defined as the quality of variation in rock properties with location in a reservoir or formation, can better help model the reservoir and thus offer a better understanding of the behavior of that reservoir. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties because of the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The concept of Lorenz is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method to a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each separately, with numbers and figures. As expected, these methods show a great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
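
The Lorenz coefficient itself is easy to compute: order the layers by flow capacity per unit storage, plot cumulative flow capacity against cumulative storage capacity, and take twice the area between that curve and the 45-degree line. The sketch below does this for hypothetical layer properties (it does not reproduce the South Iran field data or the Kozeny-Carman derivation).

```python
# Illustrative Lorenz-coefficient calculation from layered permeability/porosity data.
import numpy as np

k   = np.array([250.0, 120.0, 40.0, 8.0, 1.5])    # permeability, mD (hypothetical)
phi = np.array([0.22, 0.20, 0.17, 0.12, 0.08])    # porosity, fraction (hypothetical)
h   = np.ones_like(k)                             # layer thickness, m

order = np.argsort(k / phi)[::-1]                 # most conductive layers first
flow = np.concatenate([[0.0], np.cumsum((k * h)[order])]) / np.sum(k * h)
storage = np.concatenate([[0.0], np.cumsum((phi * h)[order])]) / np.sum(phi * h)

lorenz = 2.0 * (np.trapz(flow, storage) - 0.5)    # 0 = homogeneous, approaching 1 = heterogeneous
print(round(lorenz, 3))
```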

Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)

Procedia PDF Downloads 208