Search results for: spatial data base
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27267

18927 Performance Evaluation of Discrete Fourier Transform Algorithm Based PMU for Wide Area Measurement System

Authors: Alpesh Adeshara, Rajendrasinh Jadeja, Praghnesh Bhatt

Abstract:

Implementation of advanced technologies requires sophisticated instruments that deal with the operation, control, restoration and protection of a rapidly growing power system network under normal and abnormal conditions. Presently, applications of the Phasor Measurement Unit (PMU) are widely found in the real-time operation, monitoring, control and analysis of the power system network, as it overcomes several limitations of the Supervisory Control and Data Acquisition (SCADA) systems conventionally used in power systems. PMU data are rapidly gaining importance for both online and offline analysis. The Wide Area Measurement System (WAMS) has been developed as a new technology that uses multiple PMUs across the power system. The present paper proposes a MATLAB-based PMU model using the Discrete Fourier Transform (DFT) algorithm and evaluates its operation under different contingencies. A PMU-based two-bus system with a WAMS network is presented as a case study.
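
As a rough illustration of the full-cycle DFT phasor estimation on which such a PMU model rests, the Python sketch below computes the magnitude and phase angle of one cycle of samples. It is not the authors' MATLAB implementation; the sampling rate and signal values are illustrative assumptions.

```python
import numpy as np

def dft_phasor(samples):
    """Estimate the fundamental phasor from one cycle of N samples."""
    n = len(samples)
    k = np.arange(n)
    # Correlate with one cycle of cosine and sine (full-cycle DFT)
    real = (2.0 / n) * np.sum(samples * np.cos(2 * np.pi * k / n))
    imag = (2.0 / n) * np.sum(samples * np.sin(2 * np.pi * k / n))
    magnitude = np.sqrt(real**2 + imag**2) / np.sqrt(2)  # RMS magnitude
    angle = np.degrees(np.arctan2(-imag, real))          # phase angle in degrees
    return magnitude, angle

# Example: one 50 Hz cycle sampled at 64 samples/cycle, 230 V peak, 30 deg phase
samples_per_cycle = 64
t = np.arange(samples_per_cycle) / samples_per_cycle
x = 230 * np.cos(2 * np.pi * t + np.radians(30))
print(dft_phasor(x))  # approximately (162.6 V rms, 30 deg)
```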

Keywords: GPS (global positioning system), PMU (phasor measurement unit), WAMS (wide area monitoring system), DFT, PDC

Procedia PDF Downloads 477
18926 Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running

Authors: Elnaz Lashgari, Emel Demircan

Abstract:

Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for the robots. Detecting each muscle’s pattern during walking and running is vital for improving the quality of a patient’s life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear with high dimensionality. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, obtained for the vastus medialis muscle, was a sensitivity of 97.87±0.69 and a specificity of 88.37±0.79, with an accuracy of 97.07±0.29, using the Bayesian classifier. The results of this study provide important insight into human movement and its application for robotics research.
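
A minimal sketch of the pipeline described above, assuming the time-frequency features have already been extracted; SpectralEmbedding is scikit-learn's implementation of Laplacian Eigenmaps, and the data here are placeholders rather than the EMG recordings.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))      # placeholder time-frequency feature matrix
y = rng.integers(0, 4, size=500)    # placeholder labels (e.g., running speeds)

# Nonlinear dimensionality reduction onto a low-dimensional manifold
embedding = SpectralEmbedding(n_components=3, n_neighbors=10)
X_low = embedding.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```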

Keywords: electromyography, manifold learning, ISOMAP, Laplacian Eigenmaps, locally linear embedding

Procedia PDF Downloads 349
18925 The Web of Injustice: Untangling Violations of Personality Rights in European International Private Law

Authors: Sara Vora (Hoxha)

Abstract:

Defamation, invasion of privacy, and cyberbullying have all increased in tandem with the growth of the internet. European international private law may struggle to deal with such transgressions if they occur in many jurisdictions. The current study examines how effectively the legal system of European international private law addresses abuses of personality rights in cyberspace. The study starts by discussing how established legal frameworks are being threatened by online personality rights abuses. The article then looks into the rules and regulations of European international private law that are in place to handle cross-border lawsuits. This article examines the different elements that courts evaluate when deciding which law to apply in a particular case, focusing on the concepts of jurisdiction, choice of law, and recognition and execution of foreign judgements. Next, the research analyses the function of the European Union in preventing and punishing online personality rights abuses. Key pieces of law that control the collecting and processing of personal data on the Internet, including the General Data Protection Regulation (GDPR) and the e-Commerce Directive, are discussed. In addition, this article investigates how the European Court of Human Rights (ECtHR) handles cases involving the infringement of personal freedoms, including privacy and speech. The article finishes with an assessment of how well the legal framework of European international private law protects individuals' right to privacy online. It draws attention to problems with the present legal structure, such as the inability to enforce international judgements, the inconsistency between national laws, and the necessity for stronger measures to safeguard people's rights online. This paper concludes that while European international private law provides a useful framework for dealing with violations of personality rights online, further harmonisation and stronger enforcement mechanisms are necessary to effectively protect individuals' rights in the digital age.

Keywords: European international private law, personality rights, internet, jurisdiction, cross-border disputes, data protection

Procedia PDF Downloads 62
18924 On Dynamic Chaotic S-BOX Based Advanced Encryption Standard Algorithm for Image Encryption

Authors: Ajish Sreedharan

Abstract:

Security in the transmission and storage of digital images is important in today’s image communications and confidential video conferencing. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. The Advanced Encryption Standard (AES) is a well-known block cipher that has several advantages in data encryption; however, it is not suitable for real-time applications. This paper presents modifications to the Advanced Encryption Standard to achieve a high level of security and better image encryption. The modifications are made by adjusting the ShiftRow transformation and using a dynamic chaotic S-box. In AES, the SubBytes, ShiftRows and MixColumns steps by themselves would provide no security because they do not use the key. In the dynamic chaotic S-box based AES, the SubBytes step provides security because the S-box is constructed from the key. Experimental results verify that the proposed modification to the image cryptosystem is highly secure from the cryptographic viewpoint. The results also show that, compared with the original AES encryption algorithm, the modified algorithm gives better encryption results in terms of security against statistical attacks.
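
One simple way to build a key-dependent (dynamic) S-box from a chaotic map is sketched below; the abstract does not specify the exact construction, so the logistic map, its parameter and the key-to-seed mapping are illustrative assumptions.

```python
def chaotic_sbox(key_bytes, r=3.99):
    """Build a key-dependent permutation of 0..255 from a logistic map."""
    # Derive an initial condition in (0, 1) from the key
    x = (int.from_bytes(key_bytes, "big") % 10**8) / 10**8
    x = x if 0 < x < 1 else 0.5
    values, seen = [], set()
    while len(values) < 256:
        x = r * x * (1 - x)              # logistic map iteration
        v = int(x * 256) & 0xFF
        if v not in seen:                # keep first occurrence -> permutation
            seen.add(v)
            values.append(v)
    return values

sbox = chaotic_sbox(b"sixteen byte key")
assert sorted(sbox) == list(range(256))  # valid S-box: a bijection on bytes
```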

Keywords: advanced encryption standard (AES), dynamic chaotic S-box, image encryption, security analysis, ShiftRow transformation

Procedia PDF Downloads 425
18923 Developing a Toolkit of Undergraduate Nursing Students’ Desirable Characteristics (TNDC): An Application of Item Response Theory

Authors: Parinyaporn Thanaboonpuang, Siridej Sujiva, Shotiga Pasiphul

Abstract:

The higher education reform integrated nursing programmes into the higher education system. Learning outcomes represent one of the essential building blocks for transparency within higher education systems and qualifications. The purpose of this study is to develop a toolkit for assessing undergraduate nursing students’ desirable characteristics based on the Thai Qualifications Framework for Higher Education and to test the psychometric properties of this instrument. The toolkit seeks to improve on the computer multimedia test. Three skills are examined: cognitive skill, responsibility and interpersonal skill, and information technology skill. The study was conducted in four phases. In Phase 1, a measurement model and a computer multimedia test were developed. In Phase 2, two rounds of focus groups were conducted to determine the content validity of the measurement model and the toolkit. In Phase 3, 1,156 senior undergraduate nursing students, selected by multistage random sampling, were recruited to test the psychometric properties. In Phase 4, data analysis was conducted using descriptive statistics, item analysis, inter-rater reliability, exploratory factor analysis and confirmatory factor analysis. The resulting TNDC consists of 74 items across the following four domains: cognitive skill, interpersonal skill, responsibility and information technology skill. The values of Cronbach’s alpha for the four domains were .781, .807, .831, and .865, respectively. The final model in the confirmatory factor analysis fitted the empirical data quite well. The TNDC was found to be appropriate, both theoretically and statistically. Based on these results, it is recommended that the toolkit be used in future studies for nursing programmes in Thailand.
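
A minimal sketch of the Cronbach's alpha computation reported above, applied to a placeholder matrix of item scores (respondents by items):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(100, 10))  # placeholder: 100 respondents, 10 items
print(round(cronbach_alpha(scores), 3))      # real item scores would be correlated
```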

Keywords: toolkit, nursing students’ desirable characteristics, Thai qualifications framework

Procedia PDF Downloads 521
18922 Antimicrobial Properties of SEBS Compounds with Copper Microparticles

Authors: Vanda Ferreira Ribeiro, Daiane Tomacheski, Douglas Naue Simões, Michele Pitto, Ruth Marlene Campomanes Santana

Abstract:

Indoor environments, such as car cabins and public transport vehicles, are places where users are exposed to the ambient air quality. Microorganisms (bacteria, fungi, yeasts) enter these environments through windows and ventilation systems and may use the organic particles present as a growth substrate. In addition, atmospheric pollutants can act as potential carbon and nitrogen sources for some microorganisms. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPEs), fully recyclable and largely used in automotive parts. Metals such as copper and silver have biocidal activity, and producing SEBS compounds by melt blending with these agents can be a good option for plastic parts of ventilation systems and automotive air conditioning, in order to minimize the problems caused by the growth of pathogenic microorganisms. In this sense, the aim of this work was to evaluate the effect of copper microparticles as an antimicrobial agent in compositions based on SEBS/PP/oil/calcite. Copper microparticles were used in weight proportions of 0%, 1%, 2% and 4%. The compounds were prepared using a co-rotating twin screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The processing parameters were a screw rotation rate of 300 rpm, with a temperature profile between 150 and 190°C. The SEBS-based TPE compounds were injection molded. The emissions of the compounds were characterized by the gravimetric fogging test. The compounds were characterized by physical (density and staining by contact), mechanical (hardness and tensile properties) and rheological properties (melt volume rate, MVR). Antibacterial properties were evaluated against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) strains. To evaluate antifungal activity, Aspergillus niger (A. niger), Candida albicans (C. albicans), Cladosporium cladosporioides (C. cladosporioides) and Penicillium chrysogenum (P. chrysogenum) were chosen. The results of the biological tests showed a reduction in bacteria of up to 88% for E. coli and up to 93% for S. aureus. The tests with fungi were inconclusive because the sample without copper also inhibited the development of these microorganisms. The copper addition did not cause significant variations in the mechanical properties, the MVR or the emission behaviour of the compounds. The density increased with increasing copper content.

Keywords: air conditioner, antimicrobial, copper, SEBS

Procedia PDF Downloads 271
18921 Anti-Gravity to Neo-Concretism: The Epodic Spaces of Non-Objective Art

Authors: Alexandra Kennedy

Abstract:

Making use of the notion of ‘epodic spaces’, this paper presents a reconsideration of non-objective art practices, proposing alternatives to established materialist, formalist and process-based conceptualist approaches to such work. In his Neo-Concrete Manifesto (1959), Ferreira Gullar (1930-2016) sought to create a distinction between various forms of non-objective art. He distinguished the ‘geometric’ arts of neoplasticism, constructivism, and suprematism – which he described as ‘dangerously acute rationalism’ – from other non-objective practices. These alternatives, he proposed, have an expressive potential lacking in the former, and this formed the basis for their categorisation as neo-concrete. Gullar prioritized the phenomenological over the rational, with an emphasis on the role of the spectator (a key concept of minimalism). Gullar highlighted the central role of sensual experience, colour and the poetic in such work. In the early twentieth century, Russian Cosmism – an esoteric philosophical movement – was highly influential on Russian avant-garde artists and can account for the suprematist artists’ interest in, and approach to, planar geometry and four-dimensional space, as demonstrated in the abstract paintings of Kasimir Malevich (1879-1935). Nikolai Fyodorov (1823-1903) promoted the idea of anti-gravity and cosmic space as the field for artistic activity. The artist and writer Kuzma Petrov-Vodkin (1878-1939) wrote on the concept of Euclidean space, the overcoming of such rational conceptions of space and the breaking free from the gravitational field and the earth’s sphere. These imaginary spaces, which also invoke a bodily experience, present a poetic dimension to the work of the suprematists. It is a dimension that arguably aligns more with Gullar’s formulation of the neo-concrete than with his alignment of Suprematism with rationalism. In its experiments with planar geometry, its interest in forms suggestive of an experience of breaking free, both physically from the earth and conceptually from rational, mathematical space (in a preoccupation with non-Euclidean space and anti-geometry), and its engagement with the spatial properties of colour, Suprematism presents itself as imaginatively epodic. The paper discusses both historical and contemporary non-objective practices in this context, drawing attention to the manner in which the category of the non-objective is used to categorise art works which are, arguably, qualitatively different.

Keywords: anti-gravity, neo-concrete, non-Euclidean geometry, non-objective painting

Procedia PDF Downloads 161
18920 A Method for Compression of Short Unicode Strings

Authors: Masoud Abedi, Abbas Malekpour, Peter Luksch, Mohammad Reza Mojtabaei

Abstract:

The use of short texts in communication has increased greatly in recent years. The use of different languages in short texts has made Unicode strings compulsory. These strings need twice the space of ordinary strings; hence, applying compression algorithms to accelerate transmission and reduce cost is worthwhile. Nevertheless, compression methods such as gzip, bzip2 or PAQ are not appropriate, due to their high overhead on short inputs. The Huffman algorithm is one of the rare algorithms effective in reducing the size of short Unicode strings. In this paper, an algorithm is proposed for the compression of very short Unicode strings. First, every new character to be sent to a destination is inserted in the proposed mapping table. At the beginning, every character is new. If a character is repeated for the same destination, it is not considered a new character. Next, the new characters, together with the mapping values of repeated characters, are arranged through a specific technique and specially formatted to be transmitted. The results obtained from an assessment made on a set of short Persian and Arabic strings indicate that the proposed algorithm outperforms the Huffman algorithm in size reduction.
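
A minimal sketch of the per-destination mapping-table idea described above; the abstract does not give the exact packing format, so the token encoding here is an illustrative assumption.

```python
def encode(text, table):
    """New characters are registered and sent literally; repeated characters
    are replaced by their (smaller) index in the per-destination table."""
    out = []
    for ch in text:
        if ch in table:
            out.append(("ref", table[ch]))   # repeated: send table index only
        else:
            table[ch] = len(table)           # new: register and send literal
            out.append(("lit", ch))
    return out

def decode(tokens, table):
    chars = []
    for kind, value in tokens:
        if kind == "lit":
            table[len(table)] = value        # mirror the sender's table
            chars.append(value)
        else:
            chars.append(table[value])
    return "".join(chars)

enc_table, dec_table = {}, {}
msg = "سلام سلام"
assert decode(encode(msg, enc_table), dec_table) == msg
```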

Keywords: algorithms, data compression, decoding, encoding, Huffman codes, text communication

Procedia PDF Downloads 337
18919 Male Sex Workers’ Constructions of Selling Sex in South Africa

Authors: Tara Panday, Despina Learmonth

Abstract:

Sex work is often constructed as being an interaction between male clients and female sex workers. As a result, street-based male sex workers are continuously overlooked in the South African literature. This qualitative study explored male sex workers’ subjective experiences and constructions of their male clients’ identities and the client-sex worker relationship. This research was conducted from a social-constructionist perspective, which allowed for a deeper understanding of the reasons and context driving the choices and actions of male sex workers. Semi-structured face-to-face interviews were conducted with 10 South African men working as sex workers in Cape Town. Data was analysed through thematic analysis. The findings of the study construct the client-sex worker relationship in terms of a professional relationship, constrained choice, sexual identity and need, as well as companionship for pay, potentially highlighting underlying reasons for supply and demand. The data which emerged around the client-sex worker relationship and the clients’ identities also served to illuminate the power-dynamics in the client-sex worker relationship. This data increases insight into the exploitation and disempowerment experienced by male sex workers through verbal abuse, physical and sexual violence, and unfairly enforced laws and regulations. The findings of this study suggest that, in the context of South Africa, male sex workers' experiences of the client-sex worker relationship cannot be completely understood without considering the intersectionality of the triple stigmatisation of: the criminality of sex work, race, and the lack of economic power, which systematically maintains marginalization. Motivating for the Law Reform Commission to continue to review all emerging research may assist with guiding related policy and thereby, the provision of equal human rights and adequate health and social interventions for all sex workers in South Africa.

Keywords: human rights, prostitution, power relations, sex work

Procedia PDF Downloads 472
18918 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly

Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale

Abstract:

Accurate thrust measurement is required for aircraft during takeoff and after ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if it is not sufficient, the takeoff can be aborted and an accident avoided. After ski-jump, the thrust produced by the engine is required because the horizontal speed of the aircraft is less than the normal takeoff speed. The engine should be able to produce enough thrust to provide the nominal horizontal takeoff speed to the airframe within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, where the two side mounts transfer the engine thrust to the airframe. The third mount only takes the weight component; it does not take any thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out. The strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested on a universal testing machine to determine the equivalent elasticity of the assembly. This elasticity value is used in the analytical approach for the estimation of engine thrust. The estimated thrust is compared with the test-bed load cell thrust data. The experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: The strain gauge is mounted on the tapered portion of the engine mount sleeve. Two strain gauges are mounted at diametrically opposite locations. Both strain gauges on the sleeve were in the horizontal plane. In this way, these strain gauges did not register any strain due to the weight of the engine (except a negligible strain due to the material's Poisson's ratio) or hoop stress. Only the third-mount strain gauge shows strain when the engine is not running, i.e. strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts. The strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side of the sleeve showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower power settings. The accuracy of the estimated thrust at the maximum power setting is 99.7%, whereas at a lower power setting it is 78%.
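
A hedged sketch of the quarter-bridge strain-to-thrust conversion implied above; the gauge factor, excitation voltage, equivalent elasticity and bridge readings are illustrative assumptions, not values from the paper.

```python
GF = 2.1       # gauge factor, assumed
V_EXC = 5.0    # bridge excitation voltage (V), assumed
K_EQ = 2.4e8   # equivalent elasticity of the mount assembly (N per unit strain), assumed

def quarter_bridge_strain(v_out):
    # Quarter bridge, small-strain approximation: Vout / Vexc = GF * eps / 4
    return 4.0 * v_out / (GF * V_EXC)

def thrust_from_mounts(v_forward, v_rear):
    # Forward gauge in compression, rear gauge in tension; average the magnitudes
    eps = (abs(quarter_bridge_strain(v_forward)) +
           abs(quarter_bridge_strain(v_rear))) / 2.0
    return K_EQ * eps   # axial load carried by the two side mounts

print(thrust_from_mounts(-1.2e-3, 1.1e-3))  # thrust in N for example bridge readings
```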

Keywords: engine mounts, finite elements analysis, strain gauge, stress

Procedia PDF Downloads 465
18917 Marketing Mix, Motivation and the Tendency of Consumer Decision Making in Buying Condominium

Authors: Bundit Pungnirund

Abstract:

This research aimed to study the relationships between marketing mix attitudes, buying decision motivation and the tendency of consumer decision making in buying condominiums in Thailand. The study employed survey-based quantitative research. A questionnaire was used to collect data from a sample of 400 customers interested in buying a condominium in Bangkok. Descriptive statistics and Pearson’s correlation coefficient analysis were used to analyze the data. The research found that the marketing mix factors of product and price were related to the buying decision making tendency in terms of price and room size. The marketing mix factors of price, place and promotion were related to the buying decision making tendency in terms of word of mouth. Consumers’ buying motivation in terms of social acceptance, self-esteem and self-actualization was related to the buying decision making tendency in terms of room size. In addition, motivation in terms of self-esteem was related to the tendency to make a buying decision within a year.

Keywords: condominium, marketing mix, motivation, tendency of consumer decision making

Procedia PDF Downloads 296
18916 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance and is calibrated only at plant turnaround, it has come into widespread use in the gas industry. Measurement inaccuracy in fiscal metering stations may be the most important factor behind mischarges in the natural gas industry in Libya. Even a trivial measurement error can add a rapidly escalating financial burden to custody transactions. The unaccounted-for gas transferred annually via orifice plates in Libya could be estimated at several million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a rapid pace, CFD has become the most time- and cost-efficient, versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Probing the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are among the greatest strengths of CFD. In this paper, the flow of air passing through an orifice meter was numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings compared with vena contracta tappings. The discharge coefficients were compared with those estimated by ISO 5167. The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91,100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to variations in the plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those of the vena contracta tappings, which are regarded as the ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation in plate specifications, and thus further thorough consideration is still needed.
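
For orientation, the basic ISO 5167 orifice relation from which the discharge coefficient is backed out can be sketched as follows; the geometry matches the 2 in, beta 0.5 model case, but the flow values and the unit expansibility factor are illustrative assumptions.

```python
import math

def discharge_coefficient(q_m, d, D, dp, rho, eps=1.0):
    """Back out Cd from a measured mass flow rate q_m (kg/s):
    q_m = Cd / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)."""
    beta = d / D
    area = math.pi * d**2 / 4.0
    return q_m * math.sqrt(1.0 - beta**4) / (eps * area * math.sqrt(2.0 * dp * rho))

# 2 in pipe (D ~ 0.0508 m), beta = 0.5, air at ~1.2 kg/m3, 2 kPa differential
print(discharge_coefficient(q_m=0.022, d=0.0254, D=0.0508, dp=2000.0, rho=1.2))
# -> roughly 0.61, a typical orifice-plate value
```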

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 108
18915 Analysis of Mutation Associated with Male Infertility in Patients and Healthy Males in the Russian Population

Authors: Svetlana Zhikrivetskaya, Nataliya Shirokova, Roman Bikanov, Elizaveta Musatova, Yana Kovaleva, Nataliya Vetrova, Ekaterina Pomerantseva

Abstract:

Nowadays there is a growing number of couples with conception problems due to male or female infertility. Genetic abnormalities are responsible for about 31% of all cases of male infertility. These abnormalities include both chromosomal aberrations or aneuploidies and mutations in certain genes. Chromosomal abnormalities can be easily identified; thus, the development of screening panels able to reveal the genetic causes of male infertility at the gene level is of current interest. There are approximately 2,000 genes involved in male fertility, which is why it is very important to determine the most clinically relevant ones for given population and ethnic conditions. An infertility screening panel was designed containing 48 mutations in the genes AMHR2, CFTR, DNAI1, HFE, KAL1, TSSK2 and the AZF locus, which are the most clinically relevant for the European population according to the NCBI and ClinVar databases. The aim of this research was to confirm the clinical relevance of these mutations in the Russian population. Genotyping was performed in 220 patients with different types of male infertility and in 57 healthy males with normozoospermia. Mutations were identified by end-point PCR with TaqMan probes in microfluidic plates. The frequencies of 5 mutations in healthy males and 13 mutations in patients with infertility were revealed and estimated. The frequency of the c.187C>G mutation in the HFE gene was significantly lower in healthy males (8.8%) compared with patients (17.7%) and with the values for the European population according to the ExAC database (13.7%) and dbSNP (17.2%). Analysis of the c.3454G>C and c.1545_1546delTA mutations in the CFTR gene revealed increased frequencies in patients with infertility (0.9% and 0.2%, respectively) compared with the data for the European population (0.04%; ExAC, European (Non-Finnish)) and for the aggregated populations (0.002%; ExAC), since no European-population data are available for the c.1545_1546delTA mutation. The frequency of the del508 mutation (CFTR) in patients (1.59%) was lower compared with European males with infertility (3.34-6.25%, depending on nationality) and at the same level as healthy Europeans (1.06%; ExAC, European (Non-Finnish)). Analysis of the c.845G>A mutation (HFE) showed a decreased frequency in patients (1.8%) in contrast with the European population data (5.1%; ExAC, European (Non-Finnish)). Moreover, the obtained data revealed no statistically significant frequency difference for the c.845G>A mutation (HFE) between healthy males in the Russian and European populations. The allele frequencies of the mutations c.350G>A (CFTR), c.193A>T (HFE), c.774C>T and c.80A>G (gene TSSK2) showed no significant differences among patients with infertility, healthy males and Europeans. Analysis of the AZF locus revealed an increased frequency of the AZFc microdeletion in patients with male infertility. Thereby, new data on allele frequencies in infertility patients in the Russian population were obtained, and the frequency differences of mutations associated with male infertility among patients and healthy males in the Russian population and the European one were estimated. The revealed differences showed that, for a screening panel detecting genetically caused male infertility to be highly effective, it is very important to consider the ethnic and population characteristics of the patients to be screened.

Keywords: allele frequency, azoospermia, male infertility, mutation, population

Procedia PDF Downloads 382
18914 Scope of Rainwater Harvesting in Residential Plots of Dhaka City

Authors: Jubaida Gulshan Ara, Zebun Nasreen Ahmed

Abstract:

Urban flooding and drought have been major problems in Dhaka city, particularly in recent years. The continuous increase in the city's built-up area, which limits the rainwater infiltration zone, is thought to be the main cause of the problem. Proper rainwater management, even at the individual plot level, might bring significant improvement in this regard. As residential use occupies a significant portion of the city surface, the scope for rainwater harvesting (RWH) in residential buildings deserves investigation. This paper reports on research which explored the scope of rainwater harvesting in residential plots with multifamily apartment buildings in Dhaka city. The research covered the basics of RWH, contextual information, i.e., hydro-geological and meteorological data for Dhaka city, and the rules and legislation for residential building construction. The study also explored contemporary rainwater harvesting practices in the local and international contexts. On the basis of this theoretical understanding, 21 sample case studies, in different phases of construction, were selected from seven different categories of plot size in different residential areas of Dhaka city. Primary data for the 21 case-study buildings were collected from a physical survey and from design drawings, accompanied by a questionnaire survey. All necessary secondary data were gathered from published and other relevant sources. The collected primary and secondary data were used to calculate and analyze the RWH needs for each case study, based on the theoretical understanding. The main findings have been compiled and compared to observe residential development trends with regard to building rainwater harvesting systems. The study found that, in multifamily apartment buildings in Dhaka city, the storage and recharge structure sizes for rainwater harvesting increase with the number of occupants and with the size of the plot. Hence, the demand-to-supply ratio remains almost the same for different plot sizes, and consequently the size of the storage structure increases significantly in large-scale plots. It was found that rainwater can meet only 12%-30% of the total restricted water demand of these residential buildings in Dhaka city. The study therefore concluded that, in multifamily residential apartments in Dhaka city, artificial groundwater recharge might be a more suitable option for RWH than storing the rainwater on site.
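
A minimal sketch of the supply-versus-demand check behind figures such as the 12%-30% coverage quoted above; the roof area, runoff coefficient, rainfall and per-capita restricted demand are illustrative assumptions, not the study's data.

```python
annual_rainfall_m = 2.0        # Dhaka-scale annual rainfall, assumed
roof_area_m2 = 200.0           # catchment (roof) area of the plot, assumed
runoff_coefficient = 0.8       # fraction of rainfall actually collected, assumed
occupants = 60
restricted_demand_lpcd = 60.0  # restricted (non-potable) demand, L/person/day, assumed

supply_l = annual_rainfall_m * roof_area_m2 * runoff_coefficient * 1000.0
demand_l = occupants * restricted_demand_lpcd * 365.0
print(f"annual harvestable supply: {supply_l:,.0f} L")
print(f"annual restricted demand : {demand_l:,.0f} L")
print(f"coverage: {100.0 * supply_l / demand_l:.1f}%")  # ~24% with these assumptions
```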

Keywords: Dhaka city, rainwater harvesting, residential plots, urban flood

Procedia PDF Downloads 177
18913 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by the central bank, which amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that were traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with a VAR-Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the study period into a pre-crisis period (2002-2007) and a post-crisis period (2007-2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period; however, the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found that most of the liquidity variables are significant throughout the sample period. This reveals the predictability of monetary policy for aggregate market liquidity. This study contributes to the existing body of literature by documenting strong predictability of monetary policy for stock liquidity in an emerging economy with an order-driven market making system, like India. Most previous studies were carried out in developing economies with quote-driven or hybrid market making systems, and their results are ambiguous across different periods. In an eclectic sense, this study may be considered a baseline study for further work on the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
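
A minimal sketch of the VAR and Granger-causality setup described above, using statsmodels on simulated placeholder series rather than NSE data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 300
call_rate = np.cumsum(rng.normal(0, 0.1, n))                   # placeholder policy series
turnover = 0.5 * np.roll(call_rate, 1) + rng.normal(0, 1, n)   # placeholder liquidity series
data = pd.DataFrame({"call_money_rate": call_rate, "turnover_ratio": turnover})

model = VAR(data)
res = model.fit(maxlags=8, ic="aic")                 # lag order selected by AIC
gc = res.test_causality("turnover_ratio",            # does policy Granger-cause liquidity?
                        causing="call_money_rate")
print(gc.summary())
irf = res.irf(10)    # impulse responses over 10 periods
fevd = res.fevd(10)  # forecast error variance decomposition
```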

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 363
18912 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting authorized users' actions and prohibiting actions by non-authorized users and intruders on the DB and the objects inside it. Organizations that run successfully demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information, and they also demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. The cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rossler, Lorenz, etc.) or discrete (logistic, Henon, etc.) algorithms. The most important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The pseudo-random number generators (PRNGs) derived from the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic logistic maps and another based on two chaotic Henon maps, where the two chaotic algorithms run side by side, starting from random, independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
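
A hedged sketch of a hybrid PRNG built from two logistic maps running side by side, as described above; the mixing rule and parameters are illustrative assumptions, and no claim is made that this particular sketch passes the NIST suite.

```python
def hybrid_logistic_prng(x1, x2, r1=3.99, r2=3.97, n_bytes=16):
    """Generate a keystream by XOR-mixing two independent logistic maps."""
    out = bytearray()
    for _ in range(n_bytes):
        x1 = r1 * x1 * (1 - x1)                       # first logistic map
        x2 = r2 * x2 * (1 - x2)                       # second, independent logistic map
        out.append((int(x1 * 256) ^ int(x2 * 256)) & 0xFF)  # simple mixing of the streams
    return bytes(out)

# The encryption keys act as the initial conditions of the two maps
keystream = hybrid_logistic_prng(x1=0.123456, x2=0.654321, n_bytes=16)
print(keystream.hex())
```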

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 259
18911 Using Speech Emotion Recognition as a Longitudinal Biomarker for Alzheimer’s Diseases

Authors: Yishu Gong, Liangliang Yang, Jianyu Zhang, Zhengyu Chen, Sihong He, Xusheng Zhang, Wei Zhang

Abstract:

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide and is characterized by cognitive decline and behavioral changes. People living with Alzheimer’s disease often find it hard to complete routine tasks. However, there are limited objective assessments that aim to quantify the difficulty of certain tasks for AD patients compared to non-AD people. In this study, we propose to use speech emotion recognition (SER), especially the frustration level, as a potential biomarker for quantifying the difficulty patients experience when describing a picture. We build an SER model using data from the IEMOCAP dataset and apply the model to the DementiaBank data to detect the AD/non-AD group difference and perform longitudinal analysis to track the AD disease progression. Our results show that the frustration level detected from the SER model can possibly be used as a cost-effective tool for objective tracking of AD progression in addition to the Mini-Mental State Examination (MMSE) score.

Keywords: Alzheimer’s disease, speech emotion recognition, longitudinal biomarker, machine learning

Procedia PDF Downloads 100
18910 Bacterial Recovery of Copper Ores

Authors: Zh. Karaulova, D. Baizhigitov

Abstract:

At the Aktogay deposit, the oxidized ore section has been developed since 2015; by now, the reserves of easily enriched ore are decreasing, and a large quantity of copper-poor, difficult-to-enrich ore has accumulated in the dumps of the KAZ Minerals Aktogay deposit, which is unprofitable to mine using traditional mining methods. Hence, another technology needs to be implemented, which will significantly expand the raw material base of copper production in Kazakhstan and ensure the efficient use of natural resources. Heap and dump bacterial recovery are the most acceptable technologies for processing low-grade secondary copper sulfide ores. The test objects were copper ores of the Aktogay deposit and the chemolithotrophic bacteria Leptospirillum ferrooxidans (L.f.), Acidithiobacillus caldus (A.c.) and Sulfobacillus acidophilus (S.a.); mixed cultures of these were used in the bacterial oxidation systems. They can stay active in the 20-40°C temperature range. These bacteria are the most extensively studied and widely used in sulfide mineral recovery technology. Biocatalytic acceleration was achieved as a result of the bacteria oxidizing iron sulfides to form iron sulfate, which subsequently underwent further chemical oxidation. The following results have been achieved at the initial stage: the goal was to grow and maintain the activity of the bacterial cultures under laboratory conditions. These bacteria grew best within the pH 1.2-1.8 range with light stirring and in an aerated environment. The optimal growth temperature was 30-33°C. The growth rate decreased by one-half for each 4-5°C fall in temperature from 30°C. At best, the number of bacteria doubled every 24 hours. Typically, the maximum concentration of cells that can be grown in ferrous solution is about 10⁷/ml. A further step in this case was the adaptation of the microorganisms to the environment of certain metals. This was followed by mass production of the inoculum and its maintenance for further cultivation on a factory scale. This was done by adding sulfide concentrate, allowing the bacteria to convert the ferrous sulfate as indicated by the Eh (>600 mV), then diluting to double the volume and adding concentrate to achieve the same metal level. This process was repeated until the desired metal level and volumes were achieved. The final stage of bacterial recovery was the transportation and irrigation of the secondary sulfide copper ores of the oxidized ore section. In conclusion, the project was implemented at the Aktogay mine, since the bioleaching process was prolonged. Besides, the method of bacterial recovery might compete well with existing non-biological methods of extracting metals from ores.

Keywords: bacterial recovery, copper ore, bioleaching, bacterial inoculum

Procedia PDF Downloads 61
18909 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate has been about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment of less than 4 points, an ICU admission of less than 24 hours, or no CAM-ICU evaluation. The CAM-ICU delirium assessment results obtained every 8 hours within 30 days of hospitalization are regarded as events, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint, and sedative and hypnotic drugs. Through feature data cleaning and processing and supplementation with the KNN interpolation method, a total of 54,595 case events were extracted for machine learning model analysis. The events from May 1 to November 30, 2022 were used as model training data, 80% of which formed the training set and 20% the internal validation set, while the ICU events from December 1 to December 31, 2022 formed the external validation set; finally, model inference and performance evaluation were performed, and the model was then retrained by adjusting the model parameters. Results: In this study, four machine learning models, XGBoost, Random Forest, Logistic Regression, and Decision Tree, were analyzed and compared. The average accuracy of internal validation was highest for Random Forest (AUC = 0.86); for external validation, Random Forest and XGBoost had the highest average accuracy (AUC = 0.86); and the average cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with the real-time assessment of ICU patients, so clinical staff lack the objective and continuous monitoring data needed to identify and predict the occurrence of delirium more accurately. It is hoped that predictive models developed through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, in combination with PADIS delirium care measures, provide individualized non-pharmacological interventions to maintain patient safety and improve the quality of care.
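
A minimal sketch of the modelling workflow described above (KNN imputation of missing features, a Random Forest classifier, AUC evaluation), using a placeholder feature matrix rather than the ICU dataset:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))           # placeholder: 12 features, as in the study
X[rng.random(X.shape) < 0.05] = np.nan    # simulate missing values
y = rng.integers(0, 2, size=2000)         # delirium within the next 8 h (0/1), placeholder

pipe = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pipe.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))
```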

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 55
18908 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design

Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo

Abstract:

For centuries, the forming processes in nature have served as a source of inspiration for both architects and designers. It seems as if most human artifacts are based on ideas which stem from the observation of the biological world and its principles of growth. In fact, in the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, the principle of employing matter has not changed since the first creation. Only recently, with the help of additive-generative fabrication processes through Computer Aided Design (CAD), have designers been enabled to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology which reduces resources and integrates functions into an additively manufactured component. However, looking at the currently employed lattice structures, it is very clear that those lattice structure geometries have not been thoroughly designed, but rather taken out of the basic-geometry libraries usually provided by the CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models comprise both the animate and inanimate world, with examples ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots and the minimal surfaces of soap bubbles, up to the rhythmical arrangement of molecular geometry, as in the case of SiOC (carbon-rich silicon oxycarbide). This ideation process led to the design of a geometric cell, which served as a basic module for the lattice structure, whereby the cell was created in visual analogy to its respective natural model. The spatial lattices were fabricated additively in mostly [X]3 by [Y]3 by [Z]3 unit volumes using selective powder bed melting in polyamide with (z-axis) 50 mm and 100 µm resolution and subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when they are designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure which can be exploited for purposes where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.

Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing

Procedia PDF Downloads 138
18907 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the power of this marker comes down to its use as full-length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)) with precise taxonomic identification and relative abundance description, giving no false positive detection. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analysis opening new biological perspectives in human and environmental health. In a study focused on corals’ health involving 20 metagenomic samples (4), an affiliation of prokaryotes was limited to the family level with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas which were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on corals’ health. Applied to metagenomics data from a study on human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years) highlighting an archaeal genus, not yet described, and 3 uncultured species belonging to the Enorma genus that could be species of interest participating in the longevity process. RiboTaxa is user-friendly, rapid, allowing microbiota structure description from any environment and the results can be easily interpreted. This software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 101
18906 A Comparative Analysis of Geometric and Exponential Laws in Modelling the Distribution of the Duration of Daily Precipitation

Authors: Mounia El Hafyani, Khalid El Himdi

Abstract:

Precipitation is one of the key variables in water resource planning, and modeling wet and dry durations is a crucial issue in engineering hydrology. The objective of this study is to model and analyze the distribution of wet and dry durations. For this purpose, daily rainfall data from 1967 to 2017 for the station of the Moroccan city of Kenitra are used. Three models are implemented for the distribution of wet and dry durations, namely the first-order Markov chain, the second-order Markov chain, and the truncated negative binomial law. The adherence of the data to the proposed models is evaluated using Chi-square and Kolmogorov-Smirnov tests. The Akaike information criterion is applied to select the most effective model distribution. We go further and study the law of the number of wet and dry days among k consecutive days. This law is calculated through an algorithm that we have implemented based on conditional laws. We complete our work by comparing the observed moments of the numbers of wet/dry days among k consecutive days to the calculated moments of the three estimated models. The study shows the effectiveness of our approach in modeling the wet and dry durations of daily precipitation.
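
A minimal sketch of fitting a first-order Markov chain to a daily wet/dry sequence and reading off the implied (geometric) dry-spell distribution; the series is simulated, not the Kenitra record:

```python
import numpy as np

rng = np.random.default_rng(0)
wet = (rng.random(5000) < 0.35).astype(int)   # placeholder daily sequence: 1 = wet, 0 = dry

# Estimate transition probabilities p(next state | current state)
counts = np.zeros((2, 2))
for a, b in zip(wet[:-1], wet[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)
p_dd = P[0, 0]                                # dry -> dry probability

# Under a first-order chain, dry-spell length L is geometric:
# P(L = k) = p_dd**(k-1) * (1 - p_dd)
k = np.arange(1, 11)
print(np.round(P, 3))
print(np.round(p_dd ** (k - 1) * (1 - p_dd), 3))
```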

Keywords: Markov chain, rainfall, truncated negative binomial law, wet and dry durations

Procedia PDF Downloads 112
18905 The Effect of Antibiotic Use on Blood Cultures: Implications for Future Policy

Authors: Avirup Chowdhury, Angus K. McFadyen, Linsey Batchelor

Abstract:

Blood cultures (BCs) are an important aspect of management of the septic patient, identifying the underlying pathogen and its antibiotic sensitivities. However, while the current literature outlines indications for initial BCs to be taken, there is little guidance for repeat sampling in the following 5-day period and little information on how antibiotic use can affect the usefulness of this investigation. A retrospective cohort study was conducted using inpatients who had undergone 2 or more BCs within 5 days between April 2016 and April 2017 at a 400-bed hospital in the west of Scotland and received antibiotic therapy between the first and second BCs. The data for BC sampling was collected from the electronic microbiology database, and cross-referenced with data from the hospital electronic prescribing system. Overall, 283 BCs were included in the study, taken from 92 patients (mean 3.08 cultures per patient, range 2-10). All 92 patients had initial BCs, of which 83 were positive (90%). 65 had a further sample within 24 hours of commencement of antibiotics, with 35 positive (54%). 23 had samples within 24-48 hours, with 4 (17%) positive; 12 patients had sampling at 48-72 hours, 12 at 72-96 hours, and 10 at 96-120 hours, with none positive. McNemar’s Exact Test was used to calculate statistical significance for patients who received blood cultures in multiple time blocks (Initial, < 24h, 24-120h, > 120h). For initial vs. < 24h-post BCs (53 patients tested), the proportion of positives fell from 46/53 to 29/53 (one-tailed P=0.002, OR 3.43, 95% CI 1.48-7.96). For initial vs 24-120h (n=42), the proportions were 38/42 and 4/42 respectively (P < 0.001, OR 35.0, 95% CI 4.79-255.48). For initial vs > 120h (n=36), these were 33/36 and 2/36 (P < 0.001,OR ∞). These were also calculated for a positive in initial or < 24h vs. 24-120h (n=42), with proportions of 41/42 and 4/42 (P < 0.001, OR 38.0, 95% CI 5.22-276.78); and for initial or < 24h vs > 120h (n=36), with proportions of 35/36 and 2/36 respectively (P < 0.001, OR ∞). This data appears to show that taking an initial BC followed by a BC within 24 hours of antibiotic commencement would maximise blood culture yield while minimising the risk of false negative results. This could potentially remove the need for as many as 46% of BC samples without adversely affecting patient care. BC yield decreases sharply after 48 hours of antibiotic use, and may not provide any clinically useful information after this time. Further multi-centre studies would validate these findings, and provide a foundation for future health policy generation.
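
A minimal sketch of the McNemar exact test used above, on a paired 2x2 table whose margins match the reported initial vs. <24 h proportions (46/53 and 29/53); the split of the concordant and discordant cells is an illustrative assumption.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: initial BC (+, -); columns: <24 h BC (+, -); discordant cells drive the test
table = [[29, 17],
         [0,   7]]
result = mcnemar(table, exact=True)
# Note: statsmodels reports a two-sided exact p-value; the abstract quotes one-tailed values
print(result.statistic, result.pvalue)
```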

Keywords: antibiotics, blood culture, efficacy, inpatient

Procedia PDF Downloads 163
18904 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, making the objects have higher similarity in the same group and lower similarity in different groups. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, the datasets often have some complex characteristics: sparse, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, except for the objectives weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. Due to this, evolutionary multi-objective optimization algorithms are investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computing complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework. Then transfer the periodic BFO into a multi-objective type. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. According to the aforementioned learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in two learning archives. When the objects in these two archives do not change for two consecutive times, randomly initializing one dimension of objects can prevent the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on evaluation indexes of datasets. To further verify the effectiveness and feasibility of designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors regarding all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 125
18903 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor in many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. It is therefore very important to eliminate seasonality from seasonal macroeconomic data. There are several methods for eliminating the impact of seasonality in time series. One of them is filtering the data; however, this method leads to undesirable consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is to use seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not appropriate to use seasonal dummies to model such seasonally non-stationary series; instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza and Fuller (DHF) and Hylleberg, Engle, Granger and Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in standard unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the appropriate model to test for seasonal unit roots. However, this two-step procedure may lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is that it eliminates the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
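
As a rough illustration of the idea (not the authors' estimator or data), the Python sketch below minimises a residual sum of squares plus a bridge penalty, lambda * sum|b|^gamma with 0 < gamma < 1, over the augmentation-lag coefficients of a toy unit-root-style regression. The series, lambda and gamma are assumptions, and a smooth optimiser only approximates the sparse bridge solution that drives lag selection.

```python
# Illustrative sketch of a bridge-type penalised regression for shrinking
# augmentation-lag coefficients in a unit-root style test regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, p = 200, 6                          # sample size and number of candidate lags
y = np.cumsum(rng.normal(size=T))      # toy random-walk (unit-root) series
dy = np.diff(y)

resp = dy[p:]                                                # current difference
X = np.column_stack([y[p:-1]] +                              # lagged level term
                    [dy[p - k:-k] for k in range(1, p + 1)]) # lagged differences

def bridge_loss(beta, lam=5.0, gamma=0.5):
    """RSS plus bridge penalty applied to the lag coefficients only."""
    resid = resp - X @ beta
    return resid @ resid + lam * np.sum(np.abs(beta[1:]) ** gamma)

beta_ols = np.linalg.lstsq(X, resp, rcond=None)[0]   # OLS start values
fit = minimize(bridge_loss, beta_ols, method="Powell")
print(np.round(fit.x, 3))   # small lag coefficients are shrunk towards zero
```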

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 323
18902 Determinant Factor of Farm Household Fruit Tree Planting: The Case of Habru Woreda, North Wollo

Authors: Getamesay Kassaye Dimru

Abstract:

The cultivation of fruit trees in degraded areas has twofold importance: it improves food availability and income, and it promotes the conservation of soil and water, improving, in turn, the productivity of the land. The main objectives of this study are to identify the determinants of farmers' decisions to plant fruit trees and to identify the major fruit production challenges and opportunities of the study area. The analysis was made using primary data collected in 2016 from 60 sample households selected randomly from the study area, supplemented by data collected from a key informant. In addition to descriptive statistics and statistical tests (chi-square test and t-test), a logit model was employed to identify the determinants of the fruit tree plantation decision. Drought, pest incidence, land degradation, lack of inputs, lack of capital, poor maintenance of irrigation schemes, misuse of irrigation water and limited agricultural personnel are the major production constraints identified. The opportunities that need to be further exploited are better access to irrigation, main road access, the endowment of a preferred guava variety, the experience of farmers, and the proximity of the study area to a research centre. The results of the logit model show that, among the factors hypothesized to determine the fruit tree plantation decision, the age of the household head, access to market and farmers' perception of fruit trees' disease and pest resistance are significant. The results have important implications for the promotion of fruit production, both for land degradation control and rehabilitation and for improving the livelihoods of farming households.
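
For readers unfamiliar with the estimation method, a minimal logit sketch in Python is given below. The data are synthetic and the covariate names (age, market access, perceived pest/disease resistance) are illustrative stand-ins for the study's variables, not its dataset.

```python
# Minimal sketch of a logit specification for a binary planting decision
# (synthetic data; variable names are hypothetical stand-ins).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 60
df = pd.DataFrame({
    "age": rng.integers(25, 70, n),
    "market_access": rng.integers(0, 2, n),         # 1 = good access to market
    "perceived_resistance": rng.integers(0, 2, n),  # 1 = fruit trees seen as pest/disease resistant
})

# Synthetic planting decision loosely driven by the three covariates
index = -3 + 0.03 * df["age"] + 1.2 * df["market_access"] + 1.0 * df["perceived_resistance"]
df["plants_fruit_trees"] = rng.binomial(1, 1 / (1 + np.exp(-index)))

X = sm.add_constant(df[["age", "market_access", "perceived_resistance"]])
model = sm.Logit(df["plants_fruit_trees"], X).fit(disp=False)
print(model.summary())
```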

Keywords: degradation, fruit, irrigation, pest

Procedia PDF Downloads 212
18901 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data: dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both the dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamic variations (Gibbs free energies) of the adsorption sites.
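
For contrast with the pressure-varying approach, a minimal Python sketch of a conventional constant-parameter Langmuir fit is shown below, using the standard form q = q_m K P / (1 + K P). The uptake data are synthetic and the true q_m and K values are assumptions chosen for illustration; FLS-PVLR would instead let these parameters drift smoothly with pressure.

```python
# Sketch: constant-parameter Langmuir isotherm fit by nonlinear least squares
# (synthetic uptake data; the "true" q_m and K are illustrative assumptions).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    """Langmuir isotherm: uptake q = q_m * K * P / (1 + K * P)."""
    return q_m * K * P / (1.0 + K * P)

rng = np.random.default_rng(7)
P = np.linspace(0.01, 1.0, 40)                      # relative pressure grid
q_obs = langmuir(P, q_m=4.2, K=12.0) + rng.normal(scale=0.05, size=P.size)

(q_m_hat, K_hat), _ = curve_fit(langmuir, P, q_obs, p0=[1.0, 1.0])
print(f"q_m ~ {q_m_hat:.2f}, K ~ {K_hat:.2f}")      # recovered constant parameters
```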

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 219
18900 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies, as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is computer-assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast and ever-growing body of legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indexes provide an effective means of discovering precedent cases. Nowadays, computer-assisted network analysis tools allow new and more efficient ways to reveal the “hidden” information that is conveyed through citation behaviour. Unfortunately, openly available legal data is still lacking in New Zealand, and access to such networks is only available commercially via providers such as LexisNexis. Consequently, there is a need to create, analyse and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text. The data were then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between the courts of connected cases were added to indicate indirect citations between courts. Neo4j, as a graph database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential (most cited) courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behaviour of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for NLP-related legal research, while the network itself can be used for in-depth analysis. For example, users of the database can restrict the network algorithms and metrics to specific courts in order to filter the results to an area of law of interest.
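
A toy Python sketch of the final analysis step is given below: building a small directed citation graph, ranking decisions by PageRank and tabulating the in-degree distribution. The case identifiers are hypothetical, and networkx stands in for the study's Neo4j database purely to keep the example self-contained.

```python
# Toy sketch: directed citation graph, PageRank influence scores and the
# in-degree counts that underpin a power-law / scale-free check.
import networkx as nx
from collections import Counter

citations = [                                  # (citing decision, cited decision) — made-up IDs
    ("NZSC 2021-14", "NZCA 2018-77"),
    ("NZSC 2021-14", "NZHC 2015-301"),
    ("NZCA 2018-77", "NZHC 2015-301"),
    ("NZHC 2019-12", "NZHC 2015-301"),
]

G = nx.DiGraph()
G.add_edges_from(citations)

ranks = nx.pagerank(G, alpha=0.85)                        # most "influential" decisions
in_degree_counts = Counter(dict(G.in_degree()).values())  # basis for an in-degree distribution

print(sorted(ranks.items(), key=lambda kv: -kv[1])[:3])
print(in_degree_counts)
```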

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 94
18899 'Sea Power: Concept, Influence and Securitization'; the Nigerian Navy's Role in a Developing State like Nigeria

Authors: William Abiodun Duyile

Abstract:

It is common knowledge that food has always been obtained from the sea, that energy can be found beneath it and that, to a growing extent, other mineral resources come from the sea spaces. It is the importance of the sea and of sea lines of communication to littoral nations that has made concepts such as sea power and naval power significant to them. The study relied on documentary data, sourced from government annual departmental reports, newspapers and correspondence. The secondary sources used were subjected to internal and external criticism for authentication, and then to textual and contextual analyses. The study found that the differential level of seamanship among states defined their relationships, and that it was sea power that gave some states an edge over others. The study demonstrates that, over the ages, sea power has been core to the development of states and empires. The study found that the Nigerian Navy was central to Nigeria’s conquest of the littoral areas of Biafra, such as Bonny, Port Harcourt and Calabar; this was also an important turning point of the Nigerian Civil War, since it left Biafra landlocked. The research was able to identify succinctly the Nigerian Navy’s contribution to the security and development of the Nigerian state.

Keywords: sea power, naval power, landlocked states, warship

Procedia PDF Downloads 127
18898 Heterogeneous Reactions to Digital Opportunities: A Field Study

Authors: Bangaly Kaba

Abstract:

In the global information society, the importance of the Internet cannot be overemphasized. Africa needs access to the powerful information and communication tools of the Internet in order to obtain the resources and efficiency essential for sustainable development. Unfortunately, in 2013, data from Internetworldstats showed that only 15% of the African population had access to the Internet. This relatively low Internet penetration rate signals a problem that may threaten the economic development, governmental efficiency and, ultimately, the global competitiveness of African countries. Many initiatives have been undertaken to bring the benefits of the global information revolution to the people of Africa through connection to the Internet and other Global Information Infrastructure technologies. The purpose of this study is to understand the differences between socio-economically advantaged and disadvantaged Internet users and, from that, to determine what prevents disadvantaged groups from benefiting from Internet usage. Data were collected through a survey of Internet users in Ivory Coast. The results reveal that personal network exposure, self-efficacy and availability are the key drivers of continued use intention for the socio-economically disadvantaged group. The theoretical and practical implications are also described.

Keywords: digital inequality, internet, integrative model, socio-economically advantaged and disadvantaged, use continuance, Africa

Procedia PDF Downloads 463