Search results for: generalized frequency division multiplexing


3026 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines the potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may impose a heavier, or differently taxing, demand on cognitive resources and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within the framework that cognitive demand is exerted on both the maintenance and coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and reorganize the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
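
As a hedged illustration of the lexical measures named above (type-token ratio, lexical density, hapax legomena, list head coverage), the following minimal Python sketch computes them from a tokenized transcript; the function-word list is a toy placeholder, not the study's actual inventory.

```python
# Minimal sketch: lexical simplification indices from a tokenized transcript.
# The function-word list below is a toy placeholder, not the study's actual list.
from collections import Counter

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "that", "is", "it"}

def lexical_indices(tokens):
    tokens = [t.lower() for t in tokens]
    counts = Counter(tokens)
    n = len(tokens)
    return {
        "type_token_ratio": len(counts) / n,
        "lexical_density": sum(1 for t in tokens if t not in FUNCTION_WORDS) / n,
        "hapax_ratio": sum(1 for c in counts.values() if c == 1) / n,
        "list_head_coverage": sum(c for _, c in counts.most_common(100)) / n,
    }

print(lexical_indices("the interpreter kept the structure of the source text".split()))
```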

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 178
3025 Comparative Morphometric Analysis of Ambardi and Mangari Watersheds of Kadvi and Kasari River Sub-Basins in Kolhapur District, Maharashtra, India: Using Geographical Information System (GIS)

Authors: Chandrakant Gurav, Md. Babar

Abstract:

In the present study, an attempt is made at a comparative morphometric analysis of the Ambardi and Mangari watersheds of the Kadvi and Kasari river sub-basins, Kolhapur District, Maharashtra, India, using Geographical Information System (GIS) techniques. GIS is a computer-assisted information method to store, analyze and display spatial data. Both watersheds originate from the Masai plateau of the Jotiba-Panhala hill range in Panhala Taluka of Kolhapur district. The Ambardi watershed covers a 42.31 sq. km area on the northern hill slope, whereas the Mangari watershed covers a 54.63 sq. km area on the southern hill slope. Geologically, the entire study area is covered by the Deccan Basaltic Province (DBP) of late Cretaceous to early Eocene age. Laterites belonging to the late Pleistocene age also occur at the top of the hills. The objective of the present study is to work out the morphometric parameters of watersheds that occur on differing slopes of the hill. Morphometric analysis indicates that the Ambardi watershed is a 4th order stream and the Mangari watershed is a 5th order stream. The average bifurcation ratios of the two watersheds are 5.4 and 4.0, showing that in both watersheds the streams flow over lithology of homogeneous nature and there is no structural control on the development of the watersheds. The drainage densities of the Ambardi and Mangari watersheds are 3.45 km/km² and 3.81 km/km², respectively, and the stream frequencies are 4.51 streams/km² and 5.97 streams/km², indicating that the high drainage density and high stream frequency are governed by steep slope and a low infiltration rate for groundwater recharge. The texture ratios of the two watersheds are 6.6 km⁻¹ and 9.6 km⁻¹, which indicates that the drainage texture is fine to very fine. The form factor, circularity ratio and elongation ratio of the Ambardi and Mangari watersheds show that both watersheds are elongated in shape. The basin relief of the Ambardi watershed is 447 m, while that of Mangari is 456 m. The relief ratio of Ambardi is 0.0428 and that of Mangari is 0.040. The ruggedness number of the Ambardi watershed is 1.542 and that of the Mangari watershed is 1.737. The ruggedness numbers of both watersheds are high, which indicates that the relief and drainage density are high.
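
The indices reported above follow standard morphometric formulas. The sketch below, assuming the usual Horton/Schumm definitions, shows how they would be computed; the closing comment checks that the ruggedness number reproduces the Ambardi value reported in the abstract.

```python
# Sketch of standard morphometric formulas (Horton/Schumm conventions assumed).
import math

def morphometry(area_km2, perimeter_km, basin_length_km,
                total_stream_length_km, n_streams, basin_relief_m):
    Dd = total_stream_length_km / area_km2          # drainage density (km/km^2)
    Fs = n_streams / area_km2                        # stream frequency (streams/km^2)
    Ff = area_km2 / basin_length_km**2               # form factor
    Rc = 4 * math.pi * area_km2 / perimeter_km**2    # circularity ratio
    Re = (2 / basin_length_km) * math.sqrt(area_km2 / math.pi)  # elongation ratio
    Rh = (basin_relief_m / 1000) / basin_length_km   # relief ratio
    Rn = Dd * (basin_relief_m / 1000)                # ruggedness number
    return dict(Dd=Dd, Fs=Fs, Ff=Ff, Rc=Rc, Re=Re, Rh=Rh, Rn=Rn)

# Check: Ambardi has Dd = 3.45 km/km^2 and relief = 447 m, so
# Rn = 3.45 * 0.447 ≈ 1.54, matching the reported value.
```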

Keywords: Ambardi, Deccan basalt, GIS, morphometry, Mangari, watershed

Procedia PDF Downloads 301
3024 A CMOS D-Band Power Amplifier in 22FDSOI Technology for 6G Applications

Authors: Karandeep Kaur

Abstract:

This paper presents the design of a power amplifier (PA) for mmWave communication systems. The designed amplifier uses GlobalFoundries 22FDX technology and operates at 140 GHz in the D-band. With a supply voltage of 0.8 V for the super-low-threshold-voltage transistors, the amplifier is biased in class AB and has a total current consumption of 50 mA. The measured saturated output power of the power amplifier is 5.6 dBm, with an output-referred 1 dB compression point of 1.6 dBm. The measured gain of the PA is 19 dB, with a 3 dB bandwidth ranging from 120 GHz to 140 GHz. The chip occupies an area of 795 µm × 410 µm.
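
As an illustrative back-of-envelope check based only on the figures reported above (and assuming the full 50 mA is drawn from the 0.8 V supply), the saturated output power corresponds to a drain efficiency of roughly 9%:

```python
# Back-of-envelope drain efficiency from the reported figures
# (assumes all 50 mA is drawn from the 0.8 V supply; PAE would also subtract the input power).
p_sat_dbm = 5.6
p_out_mw = 10 ** (p_sat_dbm / 10)    # ≈ 3.63 mW
p_dc_mw = 0.8 * 50                   # V * mA = 40 mW
print(f"drain efficiency ≈ {100 * p_out_mw / p_dc_mw:.1f} %")   # ≈ 9.1 %
```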

Keywords: mmWave communication system, power amplifiers, 22FDX, D-Band, cross-coupled capacitive neutralization

Procedia PDF Downloads 163
3023 The Study of ZigBee Protocol Application in Wireless Networks

Authors: Ardavan Zamanpour, Somaieh Yassari

Abstract:

The ZigBee protocol was developed by industry together with an MIT laboratory in 1997. ZigBee is a wireless networking technology, maintained by the ZigBee Alliance, designed for low-power, low-data-rate applications. It is a protocol that connects electrical devices at very low energy consumption and cost. ZigBee is built on IEEE 802.15.4, whose first version operates in the 868 MHz, 915 MHz and 2.4 GHz frequency bands. The name alludes to the random, zig-zag paths that bees traverse during pollination, analogous to the ways in which information packets traverse the mesh network. This paper aims to study the performance and effectiveness of this protocol in wireless networks.

Keywords: ZigBee, protocol, wireless, networks

Procedia PDF Downloads 369
3022 Instructional Leadership, Information and Communications Technology Competencies and Performance of Basic Education Teachers

Authors: Jay Martin L. Dionaldo

Abstract:

This study aimed to develop a causal model of the performance of basic education teachers in the Division of Malaybalay City for the school year 2018-2019. The study used the responses of 300 randomly selected basic education teachers of Malaybalay City, Bukidnon. They responded to three sets of questionnaires: one patterned after the National Education Association (2018) instrument on instructional leadership of teachers, the questionnaire of Caluza et al. (2017) for information and communications technology competencies, and a questionnaire on teachers' performance using the Individual Performance Commitment and Review Form (IPCRF) adopted by the Department of Education (DepEd). Descriptive statistics such as the mean for description, correlation for relationships, regression for the extent of influence, and path analysis for the model that best fits teachers' performance were used. Results showed that basic education teachers have a very satisfactory level of performance. Also, the teachers highly practice instructional leadership in terms of coaching and mentoring, facilitating collaborative relationships, and community awareness and engagement. On the other hand, they are proficient users of ICT in terms of technology operations and concepts and basic users in terms of the pedagogical indicators. Furthermore, the instructional leadership dimensions (coaching and mentoring, facilitating collaborative relationships, and community awareness and engagement) and the information and communications technology competencies (technology operations and concepts, and pedagogy) were significantly correlated with teachers' performance. Coaching and mentoring, community awareness and engagement, and technology operations and concepts were the best predictors of teachers' performance. The model that best fits teachers' performance is anchored on coaching and mentoring of the teachers, embedded with facilitating collaborative relationships, community awareness and engagement, technology operations and concepts, and pedagogy.
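
As a hedged sketch of the correlation/regression step described above, the snippet below uses statsmodels with hypothetical file and column names standing in for the study's indicators; it illustrates the method only, not the study's actual analysis.

```python
# Illustrative correlation/regression step; column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("teacher_survey.csv")   # hypothetical data file
predictors = ["coaching_mentoring", "community_engagement", "tech_operations"]
X = sm.add_constant(df[predictors])
y = df["ipcrf_performance"]

print(df[predictors + ["ipcrf_performance"]].corr())   # correlations
print(sm.OLS(y, X).fit().summary())                    # extent of influence
```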

Keywords: information and communications technology, instructional leadership, coaching and mentoring, collaborative relationship

Procedia PDF Downloads 116
3021 Determination of Authorship of the Works Created by the Artificial Intelligence

Authors: Vladimir Sharapaev

Abstract:

This paper addresses the question of the authorship of copyrighted works created solely by artificial intelligence or with the use thereof, and proposes possible interpretational or legislative solutions to the problems arising from the plurality of persons potentially involved in the ultimate creation of the work and the division of tasks among such persons. Being based on the commonly accepted assumption that a copyrighted work can only be created by a natural person, the paper does not deal with questions regarding the creativity of artificial intelligence per se (or the lack thereof), and instead focuses on the distribution of the intellectual property rights potentially belonging to the creators of the artificial intelligence and/or the creators of the content used for the formation of the copyrighted work. Moreover, the technical development and rapid improvement of AI-based programmes, which tend to reach ever greater independence from human beings, give rise to the question of whether the initial creators of the artificial intelligence can be entitled to the intellectual property rights to the works created by such AI at all. As the juridical practice of some European courts and legal doctrine incline to the latter opinion, indicating that works created by AI may not enjoy copyright protection at all, the question of authorship appears to cause great concern among investors in the development of the relevant technology. Although technology companies have further instruments for the protection of their investments at their disposal, the risk that the works in question are not copyrightable, caused by the inconsistency of the case law and a certain research gap, constitutes a highly important issue. In order to assess the possible interpretations, the author adopted a doctrinal and analytical approach to the research, systematically analysing European and Czech copyright law and case law in some EU jurisdictions. This study aims to contribute to greater legal certainty regarding the authorship of AI-created works and to define possible clues for further research.

Keywords: artificial intelligence, copyright, authorship, copyrighted work, intellectual property

Procedia PDF Downloads 122
3020 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java

Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi

Abstract:

East Java Province ranks first among Indonesian provinces in its number of counties and cities and has the largest population. In 2015, the population reached 38,847,561, a figure that reflects very high population growth. High population growth is feared to lead to increased levels of unemployment. In this study, the researchers mapped and modeled the unemployment rate with six variables assumed to influence it. Modeling was done by the nonparametric geographically weighted regression method with a truncated spline approach. This method was chosen because the spline method is flexible: the model tends to seek its own form of estimation. In this modeling, there are knot points, the points at which the behaviour of the data changes. The selection of the optimum knot points was done by choosing the minimum value of Generalized Cross Validation (GCV). Based on the research, six variables were found to affect the level of unemployment in East Java: the percentage of the population educated above high school, the rate of economic growth, the population density, the ratio of investment to the total labor force, the regional minimum wage, and the ratio of the number of large- and medium-scale industries to the work force. The nonparametric geographically weighted regression model with a truncated spline approach had a coefficient of determination of 98.95% and an MSE value of 0.0047.
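
To make the knot-selection step concrete, here is a minimal sketch (for a single predictor, without the geographic weighting) of choosing a truncated-linear-spline knot by minimizing GCV; it illustrates the idea rather than reproducing the paper's multi-predictor, spatially weighted model.

```python
# Minimal GCV-based knot selection for a truncated (linear) spline, one predictor.
import numpy as np

def gcv_for_knot(x, y, knot):
    B = np.column_stack([np.ones_like(x), x, np.clip(x - knot, 0, None)])
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)
    H = B @ np.linalg.pinv(B.T @ B) @ B.T          # hat matrix
    rss = np.sum((y - B @ beta) ** 2)
    n = len(y)
    return n * rss / (n - np.trace(H)) ** 2        # GCV criterion

def best_knot(x, y, candidates):
    return min(candidates, key=lambda k: gcv_for_knot(x, y, k))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 1.0 * x, 4 + 0.2 * (x - 4)) + rng.normal(0, 0.3, 200)
print(best_knot(x, y, np.linspace(1, 9, 81)))      # should land near the true break at 4
```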

Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployment rate

Procedia PDF Downloads 321
3019 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice this is not true, because when the software works in a natural field environment its reliability differs. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment, and then extends it in a multi-release direction. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. Therefore, we propose a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept is adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets. The results demonstrate that the proposed model fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
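
For orientation, the sketch below shows one way an NHPP mean value function with a single change point can be written, assuming a Goel-Okumoto-type exponential form; the paper's actual model additionally incorporates imperfect debugging and the simulated-environment factor, which are omitted here.

```python
# Sketch of an NHPP mean value function m(t) with a single change point tau,
# assuming a Goel-Okumoto-type form a*(1 - exp(-b*t)); the fault detection rate
# switches from b1 to b2 at tau while m(t) remains continuous.
import numpy as np

def mean_value(t, a, b1, b2, tau):
    t = np.asarray(t, dtype=float)
    m_tau = a * (1 - np.exp(-b1 * tau))
    before = a * (1 - np.exp(-b1 * t))
    after = m_tau + (a - m_tau) * (1 - np.exp(-b2 * (t - tau)))
    return np.where(t <= tau, before, after)

print(mean_value([5, 10, 20], a=100, b1=0.05, b2=0.12, tau=10))
```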

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 74
3018 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar system that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the strong speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are thresholded so that the target vehicle region is separated from other regions containing unnecessary information: the brightest 20% of pixels are set to a value of 255 and the remaining pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A bank of Gabor filters is created by varying the orientation, frequency and angle parameters so as to extract the important, distinctive features of the images. Finally, the images are classified by the sparse representation method, using its l₁ norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled column by column and transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form and the l₁ norm analysis of the sparse representation method is applied against the existing database matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images by means of the sparse representation method; a 97% classification success rate is obtained on SAR images of different military vehicle types.
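
A hedged sketch of the classification step follows: the test feature vector is sparsely coded over a dictionary whose columns are training feature vectors, and the class with the smallest reconstruction residual wins. scikit-learn's Lasso stands in here for the l₁ solver; the Gabor feature extraction is not reproduced.

```python
# Sparse-representation classification: code the test vector over a dictionary of
# training feature vectors, then pick the class with the smallest residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    # D: (n_features, n_train) dictionary with l2-normalized columns; y: test vector
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, y).coef_
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        x_c = np.where(mask, coef, 0.0)          # keep only this class's coefficients
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)
```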

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 366
3017 Optimising Transcranial Alternating Current Stimulation

Authors: Robert Lenzie

Abstract:

Transcranial electrical stimulation (tES) features prominently in the research literature. However, the effects of tES on brain activity are still poorly understood at the scalp surface level, at the level of Brodmann areas, and in terms of the impact on neural networks. Using a method like electroencephalography (EEG) in conjunction with tES might make it possible to comprehend in more depth the brain response and the mechanisms behind the alterations reported in the literature. A method that directly observes the effect of tES on the EEG may offer high-temporal-resolution data on the brain activity changes/modulations brought about by tES that correspond to various processing stages within the brain. This paper provides unpublished information on a cutting-edge methodology that may reveal details about the dynamics of how the human brain works beyond what is currently achievable with existing methods.

Keywords: tACS, frequency, EEG, optimal

Procedia PDF Downloads 83
3016 Effect of Particle Aspect Ratio and Shape Factor on Air Flow inside Pulmonary Region

Authors: Pratibha, Jyoti Kori

Abstract:

Particles encountered in industry, harvesting, coal mines, etc., are not necessarily spherical in shape; in general, it is difficult to find a perfectly spherical particle. The prediction of the movement and deposition of non-spherical particles in distinct airway generations is much more difficult than for spherical particles. Moreover, there is considerable variability in deposition between the ducts of a particular generation and within every alveolar duct, since local particle concentrations can be much larger than the mean acinar concentration; consequently, a large number of particles fail to be exhaled during expiration. This study presents a mathematical model for the movement and deposition of such non-spherical particles by using the particle aspect ratio and shape factor. We analyse the pulsatile behaviour under sinusoidal wall oscillation due to the periodic breathing condition through a non-Darcian porous medium, i.e., inside the pulmonary region. Since the fluid is viscous and Newtonian, the generalized Navier-Stokes equations in a two-dimensional coordinate system (r, z) are used together with boundary-layer theory. Results are obtained for various values of the Reynolds number, Womersley number, Forchheimer number, particle aspect ratio and shape factor. Numerical computation is carried out using a finite difference scheme on a very fine mesh in MATLAB. It is found that the overall air velocity is significantly increased by changes in aerodynamic diameter, aspect ratio, alveolus size, Reynolds number and pulse rate, while the velocity decreases with increasing Forchheimer number.
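
To connect particle shape to transport, the sketch below uses the standard aerosol-physics relation between a non-spherical particle's volume-equivalent diameter, its dynamic shape factor and its aerodynamic diameter (slip correction neglected); the density and shape-factor values are illustrative assumptions, not parameters from the paper.

```python
# Standard aerosol relation (slip correction neglected):
#     d_a = d_ve * sqrt(rho_p / (chi * rho_0)),   rho_0 = 1000 kg/m^3
import math

def aerodynamic_diameter(d_ve_um, rho_p=2000.0, chi=1.36, rho_0=1000.0):
    # chi = 1.36 is a typical literature value for irregular mineral dust (assumption)
    return d_ve_um * math.sqrt(rho_p / (chi * rho_0))

print(aerodynamic_diameter(2.0))   # a 2 µm non-spherical particle behaves like a ~2.4 µm sphere
```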

Keywords: deposition, interstitial lung diseases, non-Darcian medium, numerical simulation, shape factor

Procedia PDF Downloads 185
3015 Connected Objects with Optical Rectenna for Wireless Information Systems

Authors: Chayma Bahar, Chokri Baccouch, Hedi Sakli, Nizar Sakli

Abstract:

Harvesting and transport of optical and radiofrequency signals is a topical subject with multiple challenges. In this paper, we present an optical rectenna system: a hybrid solar-cell antenna intended for 5G mobile communication networks, together with a proposed rectifying circuit. A parametric study is carried out to follow the influence of the load resistance and the input power on the performance of the optical rectenna system. The proposed solar-cell antenna structure operates in the 2.45 GHz band of the future 5G standard.

Keywords: antenna, IoT, optical rectenna, solar cell

Procedia PDF Downloads 178
3014 Web-Content Analysis of the Major Spanish Tourist Destinations Evaluation by Russian Tourists

Authors: Natalia Polkanova, Sergey Kazakov

Abstract:

In this research, we propose a set of factors of tourist destination attractiveness in Spain and determine the factors that have the greatest impact on the positive perception of a tourist destination by Russian tourists; we also examine which factors create the willingness of Russians to recommend a given tourist destination to their friends and relatives. Tourists' comments on Russian travel sites have been analyzed in order to determine the frequency with which attractiveness characteristics are referenced. Additionally, the study reflects the relationships among the variables.

Keywords: tourism destination, destination attractiveness, destination competitiveness, content analysis, unstructured image

Procedia PDF Downloads 470
3013 Communication in the Sciences: A Discourse Analysis of Biology Research Articles and Magazine Articles

Authors: Gayani Ranawake

Abstract:

Effective communication is widely regarded as an important aspect of any discipline. This particular study deals with written communication in science. Writing conventions and linguistic choices play a key role in conveying a message effectively to a target audience. Scientists are responsible for conveying their findings or research results not only to their discourse community but also to the general public. Recognizing appropriate linguistic choices is crucial since they vary depending on the target audience. The majority of scientists can communicate effectively with their discourse community, but public engagement seems more challenging to them. There is a lack of research into the language use of scientists, in particular how it varies by discipline and audience (genre). A better understanding of the different linguistic conventions used in effective science writing by scientists for scientists and by scientists for the public will help to guide scientists who are familiar with their discourse community norms to write effectively for the public. This study investigates the differences and similarities in linguistic choices between biology articles written by scientists for their discourse community and biology magazine articles written by scientists and science communicators for the general public. This study is part of a larger project investigating linguistic differences across genres of science academic writing. The sample for this particular study is composed of 20 research articles from the journal Biological Reviews and 20 magazine articles from the magazine Australian Popular Science. Differences in the linguistic devices were analyzed using Hyland's metadiscourse model for academic writing, proposed in 2005. The frequencies of usage of interactive resources (transitions, frame markers, endophoric markers, evidentials and code glosses) and interactional resources (hedges, boosters, attitude markers, self-mentions and engagement markers) were compared and contrasted using the NVivo textual analysis tool. The results clearly show differences in the frequency of usage of interactional and interactive resources between the two genres under investigation. The findings of this study provide a reference guide for scientists and science writers to understand the differences in linguistic choices between the two genres. This will be particularly helpful for scientists who are proficient at writing for their discourse community, but not for the public.
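
As a toy illustration of the frequency comparison, the snippet below counts hedges and boosters (two of Hyland's interactional categories) per 1,000 words; the marker lists are tiny samples for demonstration, not Hyland's full inventories or the study's coding scheme.

```python
# Counting two interactional metadiscourse categories per 1,000 words.
import re

HEDGES = {"may", "might", "perhaps", "possibly", "suggest", "appear"}
BOOSTERS = {"clearly", "certainly", "obviously", "demonstrate", "undoubtedly"}

def marker_rates(text):
    words = re.findall(r"[a-z']+", text.lower())
    per_k = 1000 / max(len(words), 1)
    return {"hedges": sum(w in HEDGES for w in words) * per_k,
            "boosters": sum(w in BOOSTERS for w in words) * per_k}

print(marker_rates("The results may suggest that the effect is clearly present."))
```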

Keywords: discourse analysis, linguistic choices, metadiscourse, science writing

Procedia PDF Downloads 141
3012 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification system (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. In addition, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and system performance. A relative index is also used as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
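
A minimal sketch of the MFCC-GMM core of such a pipeline is given below, assuming librosa and scikit-learn; the VAD, LBG/VQ initialization and the multiple-classifier fusion described in the paper are omitted.

```python
# MFCC features per speaker, a diagonal-covariance GMM fitted by EM,
# and identification by maximum average log-likelihood.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_model(wav_paths, n_components=16):
    feats = []
    for p in wav_paths:
        y, sr = librosa.load(p, sr=16000)
        feats.append(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T)
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(np.vstack(feats))

def identify(models, wav_path):
    # models: dict mapping speaker name -> fitted GaussianMixture
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    return max(models, key=lambda name: models[name].score(mfcc))
```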

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 309
3011 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients

Authors: Lise Paesen, Marielle Leijten

Abstract:

People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities, a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus allowing us to avoid a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images, belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks. Hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥ 6 years) and low frequency were criteria for the complex task. This task uses 6 images (2 animals, 1 human, 2 objects and 1 vehicle). The data were collected with the keystroke logging programme Inputlog. Keystroke logging tools log and time stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images led to similar or even faster reaction times compared to the original images. All the images were therefore used in the main study. The produced texts of the description tasks were significantly longer compared to previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when taking personal typing abilities (obtained by the typing task) into account.

Keywords: Alzheimer's disease, experimental design, language decline, writing process

Procedia PDF Downloads 274
3010 Numerical Investigation of a New Two-Fluid Model for Semi-Dilute Polymer Solutions

Authors: Soroush Hooshyar, Mohamadali Masoudian, Natalie Germann

Abstract:

Many soft materials such as polymer solutions can develop localized bands with different shear rates, which are known as shear bands. Using the generalized bracket approach of nonequilibrium thermodynamics, we recently developed a new two-fluid model to study shear banding in semi-dilute polymer solutions. The two-fluid approach is an appropriate means of describing diffusion processes such as Fickian diffusion and stress-induced migration. In this approach, it is assumed that the local gradients in concentration and, if accounted for, also in stress generate a nontrivial velocity difference between the components. Since the differential velocity is treated as a state variable in our model, the implementation of the boundary conditions arising from the derivative diffusive terms is straightforward. Our model is a good candidate for benchmark simulations because of its simplicity. We analyzed its behavior in cylindrical Couette flow, a rectilinear channel flow, and a 4:1 planar contraction flow. The latter problem was solved using the OpenFOAM finite volume package, and the impact of shear banding on the lip and salient vortices was investigated. For the other, smooth geometries, we employed a standard Chebyshev pseudospectral collocation method. The results showed that the steady-state solution is unique with respect to initial conditions, deformation history, and the value of the diffusivity constant. However, the smaller the value of the diffusivity constant, the more time it takes to reach the steady state.
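
For readers unfamiliar with the Chebyshev pseudospectral collocation method mentioned above, the following minimal sketch builds the standard Chebyshev differentiation matrix on [-1, 1] (in the spirit of Trefethen's cheb routine) and verifies spectral accuracy on a test function; the paper applies this machinery to the coupled two-fluid equations, which are not reproduced here.

```python
# Chebyshev differentiation matrix on Chebyshev-Gauss-Lobatto points.
import numpy as np

def cheb(N):
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                    # diagonal via negative row sums
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # spectral accuracy: ~1e-10 or better
```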

Keywords: nonequilibrium thermodynamics, planar contraction, polymer solutions, shear banding, two-fluid approach

Procedia PDF Downloads 333
3009 Fuzzy Wavelet Model to Forecast the Exchange Rate of IDR/USD

Authors: Tri Wijayanti Septiarini, Agus Maman Abadi, Muhammad Rifki Taufik

Abstract:

The IDR/USD exchange rate can serve as an indicator for analyzing the Indonesian economy. The exchange rate is an important factor because it has a large effect on the Indonesian economy overall, so analysis of exchange rate data is needed. The IDR/USD exchange rate data are decomposed into frequency and time components, which can help the government monitor the Indonesian economy. This method is very effective for identifying such cases, gives highly accurate results, and has a simple structure. In this paper, the exchange rate data used are weekly data from December 17, 2010 to November 11, 2014.
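
A minimal sketch of the decomposition step is given below, assuming the pywt package; the wavelet ('db4'), the number of levels, and the input file name are illustrative choices standing in for the weekly IDR/USD series that would then feed the fuzzy Mamdani model.

```python
# Discrete wavelet decomposition of the weekly exchange-rate series into
# an approximation (trend) part and detail (frequency) parts.
import numpy as np
import pywt

rate = np.loadtxt("idr_usd_weekly.csv")          # hypothetical input file
coeffs = pywt.wavedec(rate, "db4", level=3)      # [cA3, cD3, cD2, cD1]
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
print(len(coeffs), approx[:5])
```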

Keywords: the exchange rate, fuzzy mamdani, discrete wavelet transforms, fuzzy wavelet

Procedia PDF Downloads 571
3008 Design and Implementation of Grid-Connected Photovoltaic Inverter

Authors: B. H. Lee

Abstract:

Nowadays, grid-connected photovoltaic (PV) inverters are adopted in various places, such as homes and factories, because a grid-connected PV inverter can reduce total power consumption from the grid by supplying electricity from the PV array. In this paper, the design and implementation of a 300 W grid-connected PV inverter are described. It is implemented with a TI Piccolo DSP core and operated at a 100 kHz switching frequency in order to reduce harmonic content. The maximum operating input voltage is up to 45 V. The characteristics of the designed system, which include maximum power point tracking (MPPT), single operation and battery charging, are verified by simulation and experimental results.
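
The paper does not state which MPPT algorithm is used, so as one common possibility here is a hedged sketch of a perturb-and-observe loop; read_panel and set_voltage are hypothetical hardware callbacks.

```python
# Perturb-and-observe MPPT sketch: perturb the voltage reference and keep the
# direction that increases the measured PV power.
def perturb_and_observe(read_panel, set_voltage, v_ref=30.0, step=0.2, n_steps=1000):
    v_prev, p_prev = read_panel()              # returns (voltage, power)
    for _ in range(n_steps):
        set_voltage(v_ref)
        v, p = read_panel()
        if p > p_prev:                          # power rose: keep perturbing the same way
            v_ref += step if v > v_prev else -step
        else:                                   # power fell: reverse the perturbation
            v_ref += -step if v > v_prev else step
        v_prev, p_prev = v, p
    return v_ref
```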

Keywords: design, grid-connected, implementation, photovoltaic

Procedia PDF Downloads 420
3007 Foreign Investment, Technological Diffusion and Competitiveness of Exports: A Case for the Textile Industry in Pakistan

Authors: Syed Toqueer Akhter, Muhammad Awais

Abstract:

Pakistan is a country gifted with abundant natural resources, resources that can lead it toward becoming a prosperous and developed country. Pakistan is the fourth largest exporter of textiles in the world, yet with the passage of time the competitiveness of these exports has been subject to decline. With many international players in the textile world, such as China, Bangladesh, India, and Sri Lanka, Pakistan needs to put in considerable effort to compete with these countries. This research paper determines the impact of Foreign Direct Investment on technological diffusion and how significantly it may affect the export performance of the country. It also demonstrates that with an increase in Foreign Direct Investment, technological diffusion, strong property rights, and the use of different policy tools, the export competitiveness of the country could be improved. The research has been carried out using time series data from 1995 to 2013, and the results have been estimated using competing econometric models, namely robust regression and generalized least squares, so as to consolidate the impact of Foreign Investment and technological diffusion on export competitiveness comprehensively. A distributed lag model has also been used to capture the lagged effect of the policy tool variables used by the government. Model estimates indicate that FDI and technological diffusion do have a significant impact on the competitiveness of Pakistan's exports. It may also be inferred that the competitiveness of the textile sector requires an integrated policy framework, primarily including a reduction in interest rates, the provision of subsidies, and the manufacturing of value-added products.
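
As an illustrative sketch of the competing estimators named above (robust regression and generalized least squares), the snippet below uses statsmodels; the file and variable names are hypothetical stand-ins for the 1995-2013 series, not the study's data.

```python
# Robust regression and GLS fits of export competitiveness on FDI and
# technological diffusion; names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("textile_exports.csv")                 # hypothetical file
X = sm.add_constant(df[["fdi", "tech_diffusion", "interest_rate"]])
y = df["export_competitiveness"]

robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit() # robust regression
gls = sm.GLS(y, X).fit()                                # generalized least squares
print(robust.params, gls.params, sep="\n")
```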

Keywords: high technology export, robust regression, patents, technological diffusion, export competitiveness

Procedia PDF Downloads 501
3006 Antibiotic Prescribing in the Acute Care in Iraq

Authors: Ola A. Nassr, Ali M. Abd Alridha, Rua A. Naser, Rasha S. Abbas

Abstract:

Background: Excessive and inappropriate use of antimicrobial agents among hospitalized patients remains an important patient safety and public health issue worldwide. Not only does this behavior incur unnecessary cost but it is also associated with increased morbidity and mortality. The objective of this study is to obtain an insight into the prescribing patterns of antibiotics in surgical and medical wards, to help identify a scope for improvement in service delivery. Method: A simple point prevalence survey included a convenience sample of 200 patients admitted to medical and surgical wards in a government teaching hospital in Baghdad between October 2017 and April 2018. Data were collected by a trained pharmacy intern using a standardized form. Patient’s demographics and details of the prescribed antibiotics, including dose, frequency of dosing and route of administration, were reported. Patients were included if they had been admitted at least 24 hours before the survey. Patients under 18 years of age, having a diagnosis of cancer or shock, or being admitted to the intensive care unit, were excluded. Data were checked and entered by the authors into Excel and were subjected to frequency analysis, which was carried out on anonymized data to protect patient confidentiality. Results: Overall, 88.5% of patients (n=177) received 293 antibiotics during their hospital admission, with a small variation between wards (80%-97%). The average number of antibiotics prescribed per patient was 1.65, ranging from 1.3 for medical patients to 1.95 for surgical patients. Parenteral third-generation cephalosporins were the most commonly prescribed at a rate of 54.3% (n=159) followed by nitroimidazole 29.4% (n=86), quinolones 7.5% (n=22) and macrolides 4.4% (n=13), while carbapenems and aminoglycosides were the least prescribed together accounting for only 4.4% (n=13). The intravenous route was the most common route of administration, used for 96.6% of patients (n=171). Indications were reported in only 63.8% of cases. Culture to identify pathogenic organisms was employed in only 0.5% of cases. Conclusion: Broad-spectrum antibiotics are prescribed at an alarming rate. This practice may provoke antibiotic resistance and adversely affect the patient outcome. Implementation of an antibiotic stewardship program is warranted to enhance the efficacy, safety and cost-effectiveness of antimicrobial agents.

Keywords: acute care, antibiotic misuse, Iraq, prescribing

Procedia PDF Downloads 122
3005 Rheological Properties of PP/EVA Blends

Authors: Othman Y. Alothman

Abstract:

The study aims to investigate the effects of blend ratio, VA content and temperature on the rheological properties of PP/EVA blends. The results show that all pure polymers and their blends exhibit typical shear-thinning behaviour. All neat polymers exhibit power-law type flow behaviour, with the viscosity order EVA328 > EVA206 > PP over almost the entire frequency range. As temperature increases, the viscosity of all polymers decreases, as expected, and the viscosity becomes more sensitive to the addition of EVA. Two different regions can be observed in the flow curves of some of the polymers and their blends, which is thought to be due to a slip-stick transition or melt fracture.
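
The power-law behaviour noted above can be summarised by fitting η = K·γ̇ⁿ⁻¹ on a log-log scale, with a flow index n < 1 indicating shear thinning; the short sketch below fits K and n to synthetic data as an illustration.

```python
# Fit the power-law viscosity model eta = K * gamma_dot**(n - 1) on a log-log scale.
import numpy as np

def fit_power_law(shear_rate, viscosity):
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    return np.exp(intercept), slope + 1        # consistency K, flow index n

gamma = np.array([0.1, 1.0, 10.0, 100.0])
eta = 5000 * gamma ** (0.4 - 1)                # synthetic shear-thinning data
print(fit_power_law(gamma, eta))               # ≈ (5000, 0.4)
```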

Keywords: polypropylene, ethylene vinyl acetate, blends, rheological properties

Procedia PDF Downloads 475
3004 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy

Authors: Anisa Suraya Ab Razak, Izza Hayat

Abstract:

Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic shadowing and attributing abnormal behavior to intellectual disability or psychiatric conditions. Hyponatraemia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic, moderate to severe levels with life-threatening symptoms such as seizures, coma and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only in autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year old Caucasian male with known epilepsy and learning disability was admitted from residential living with generalized tonic-clonic self-terminating seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior - restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and leveritircaetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours and regained minimal recovery of consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. Laboratory monitoring identified dangerously rapid correction of serum sodium concentration, and hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His GCS recovered to baseline after 48 hours with improvement in behavior -he engaged with healthcare professionals, understood the importance of taking medications, admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward level and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after resolution of acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to challenges faced by physicians in making an early diagnosis and treating acute hyponatremia safely. A low index of suspicion of water intoxication is required in this population, including patients with known epilepsy. Monitoring urine output proved to be clinically effective in aiding diagnosis. 
Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing risk of fatal complications e.g. central pontine myelinolysis.

Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia

Procedia PDF Downloads 122
3003 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in 'weight space', where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method still produces ~93% accuracy at 0 dB SNR.
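
To make the atomic decomposition concrete, here is a minimal matching pursuit sketch: at each step the dictionary atom most correlated with the residual is selected and its contribution subtracted, yielding the sparse (index, weight) pairs described above; the learned Gabor/envelope dictionary itself is assumed given and not reproduced.

```python
# Greedy matching pursuit over a dictionary with unit-norm columns.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20):
    # dictionary: (signal_len, n_dict) with l2-normalized columns
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))   # best-matching atom
        w = correlations[k]                        # its amplitude weight
        atoms.append((k, w))
        residual -= w * dictionary[:, k]           # remove its contribution
    return atoms, residual
```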

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 289
3002 Financial Markets Integration between Morocco and France: Implications on International Portfolio Diversification

Authors: Abdelmounaim Lahrech, Hajar Bousfiha

Abstract:

This paper examines equity market integration between Morocco and France and its implications for international portfolio diversification. In the absence of stock market linkages, Morocco can act as a diversification destination for European investors, allowing higher returns at a level of risk comparable to that of developed markets. In contrast, this attractiveness is limited if both financial markets show significant linkage. The research empirically measures financial market integration by capturing the conditional correlation between the two markets using the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model, and then uses the Dynamic Conditional Correlation (DCC) model of Engle (2002) to track the correlations. The research findings show that there is no important increase over the years in the correlation between the Moroccan and French equity markets, even though France is considered Morocco's first trading partner. Failing to find evidence of stock index linkage between the two countries, the volatility series of each market were assumed to change over time separately. The study thus reveals that, despite the important historical and economic linkages between Morocco and France, there is no evidence that the equity markets follow them. The small correlations and their stationarity over time show that, over the 10 years studied, correlations fluctuated around a stable mean with no significant change in their level. Different explanations can be attributed to the absence of market linkage between the two equity markets.
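
For reference, the DCC(1,1) correlation recursion of Engle (2002) can be sketched as below, given standardized residuals from univariate GARCH fits of the two index return series; the parameter values a and b are illustrative, not the paper's estimates.

```python
# DCC(1,1) recursion: Q_t = (1-a-b)*S + a*eps_{t-1}eps_{t-1}' + b*Q_{t-1},
# with conditional correlation R_t obtained by rescaling Q_t.
import numpy as np

def dcc_correlations(eps, a=0.03, b=0.95):
    # eps: (T, 2) standardized residuals from univariate GARCH models
    S = np.corrcoef(eps.T)                    # unconditional correlation matrix
    Q = S.copy()
    rho = np.empty(len(eps))
    for t in range(len(eps)):
        d = np.diag(1.0 / np.sqrt(np.diag(Q)))
        rho[t] = (d @ Q @ d)[0, 1]            # conditional correlation at time t
        e = eps[t][:, None]
        Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
    return rho
```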

Keywords: equity market linkage, DCC GARCH, international portfolio diversification, Morocco, France

Procedia PDF Downloads 442
3001 Extension and Closure of a Field for Engineering Purpose

Authors: Shouji Yujiro, Memei Dukovic, Mist Yakubu

Abstract:

Fields are important objects of study in algebra since they provide a useful generalization of many number systems, such as the rational numbers, real numbers, and complex numbers. In particular, the usual rules of associativity, commutativity and distributivity hold. Fields also appear in many other areas of mathematics; see the examples below. When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure which satisfies all the properties of a field except possibly commutativity is today called a division ring, or division algebra, or sometimes a skew field; the term non-commutative field is also still widely used. In French, fields are called corps (literally, body), generally regardless of their commutativity. When necessary, a (commutative) field is called corps commutatif and a skew field corps gauche. The German word for body is Körper, and this word is used to denote fields; hence the use of blackboard bold to denote a field. The concept of a field was first (implicitly) used to prove that there is no general formula expressing, in terms of radicals, the roots of a polynomial with rational coefficients of degree 5 or higher. An extension of a field k is just a field K containing k as a subfield. One distinguishes between extensions having various qualities. For example, an extension K of a field k is called algebraic if every element of K is a root of some polynomial with coefficients in k. Otherwise, the extension is called transcendental. The aim of Galois theory is the study of algebraic extensions of a field. Given a field k, various kinds of closures of k may be introduced, for example the algebraic closure, the separable closure, the cyclic closure, et cetera. The idea is always the same: if P is a property of fields, then a P-closure of k is a field K containing k, having property P, and which is minimal in the sense that no proper subfield of K that contains k has property P. For example, if we take P(K) to be the property 'every non-constant polynomial f in K[t] has a root in K', then a P-closure of k is just an algebraic closure of k. In general, if P-closures exist for some property P and field k, they are all isomorphic. However, there is in general no preferred isomorphism between two closures.
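
As a standard worked example of these notions (not taken from the paper), consider the extension of the rationals by √2:

```latex
% \mathbb{Q}(\sqrt{2}) is an algebraic extension of \mathbb{Q}:
\[
  \mathbb{Q}(\sqrt{2}) \;=\; \{\, a + b\sqrt{2} \mid a,b \in \mathbb{Q} \,\},
  \qquad
  [\mathbb{Q}(\sqrt{2}) : \mathbb{Q}] = 2,
\]
since $\sqrt{2}$ is a root of $t^{2}-2 \in \mathbb{Q}[t]$. By contrast,
$\mathbb{Q}(\pi)$ is a transcendental extension, because $\pi$ is not a root of
any nonzero polynomial with rational coefficients. The algebraic closure of
$\mathbb{Q}$, the field $\overline{\mathbb{Q}}$ of algebraic numbers, is the
smallest extension of $\mathbb{Q}$ in which every non-constant polynomial over
$\mathbb{Q}$ has a root.
```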

Keywords: field theory, mechanic maths, supertech, rolltech

Procedia PDF Downloads 373
3000 Dwindling the Stability of DNA Sequence by Base Substitution at Intersection of COMT and MIR4761 Gene

Authors: Srishty Gulati, Anju Singh, Shrikant Kukreti

Abstract:

The manifestation of structural polymorphism in DNA depends on the sequence and the surrounding environment. Many folded DNA structures have been found in the cellular system, of which DNA hairpins are very common and indispensable due to their role in replication initiation sites, recombination, transcription regulation, and protein recognition. We follow this approach in our study, in which two base substitutions and a change in temperature destabilize the DNA structure and disturb the equilibrium between two structures of a sequence present at the overlapping region of the human COMT and MIR4761 genes. The COMT and MIR4761 genes encode the catechol-O-methyltransferase (COMT) enzyme and microRNAs (miRNAs), respectively. Environmental changes and errors during cell division lead to genetic abnormalities. The COMT gene, involved in dopamine regulation, is implicated in neurological diseases such as Parkinson's disease, schizophrenia, and velocardiofacial syndrome. A 19-mer deoxyoligonucleotide sequence 5'-AGGACAAGGTGTGCATGCC-3' (COMT19) is located in exon 4 on chromosome 22, band q11.2, at the intersection of the COMT and MIR4761 genes. Bioinformatics studies suggest that this sequence is conserved in humans and a few other organisms and is involved in the recognition of transcription factors in the vicinity of the 3'-end. Non-denaturing gel electrophoresis and CD spectroscopy of the COMT sequences indicate the formation of hairpin-type DNA structures. Temperature-dependent CD studies revealed an unusual shift in the slipped DNA-hairpin DNA equilibrium with the change in temperature. In addition, UV thermal melting techniques suggest that the two base substitutions on the complementary strand of COMT19 do not affect the structure but reduce the stability of the duplex. This study gives insight into the possibility of structurally polymorphic transient states existing within DNA segments present at the intersection of the COMT and MIR4761 genes.

Keywords: base-substitution, catechol-o-methyltransferase (COMT), hairpin-DNA, structural polymorphism

Procedia PDF Downloads 122
2999 Assessments of Some Environmental Variables on Fisheries at Two Levels: Global and FAO Major Fishing Areas

Authors: Hyelim Park, Juan Martin Zorrilla

Abstract:

Climate change influences ocean ecosystem functioning very widely and in various ways. The consequences of climate change for marine ecosystems include an increase in temperature and irregular behavior of some solute concentrations. These changes affect fisheries catches in several ways. Our aim is to assess the quantitative change in fishery catches over time and express it through four environmental variables: Sea Surface Temperature (SST4) and the concentrations of Chlorophyll (CHL), Particulate Inorganic Carbon (PIC) and Particulate Organic Carbon (POC), at two spatial scales: global and the nineteen FAO Major Fishing Area divisions. Data collection was based on the FAO FishStatJ 2014 database as well as MODIS Aqua satellite observations from 2002 to 2012. Some data had to be corrected and interpolated using existing methods. As a result, a multivariable regression model for average global fisheries catches contained the temporal mean of SST4, the standard deviation of SST4, the standard deviation of CHL and the standard deviation of PIC. A global vector autoregressive (VAR) model showed that SST4 was a statistical (Granger) cause of global fishery catch. To accommodate varying fishery conditions and the influence of the climate change variables, a model was constructed for each FAO Major Fishing Area. From the management perspective, some limitations of the FAO marine area division should be recognized, which opens the possibility of discussing the subdivision of the areas into smaller units. Furthermore, the contribution changes of fishery species and the possible environmental factors for specific species should be treated at various scale levels.
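
A hedged sketch of the VAR / Granger-causality step with statsmodels follows; the file and column names are hypothetical stand-ins for the catch and environmental series described above.

```python
# VAR fit and Granger-causality test of SST4 on global catch.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("global_series.csv",                    # hypothetical file
                 usecols=["catch", "sst4", "chl", "pic", "poc"])
results = VAR(df).fit(maxlags=2, ic="aic")
print(results.test_causality("catch", ["sst4"], kind="f").summary())
```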

Keywords: fisheries-catch, FAO FishStatJ, MODIS Aqua, sea surface temperature (SST), chlorophyll, particulate inorganic carbon (PIC), particulate organic carbon (POC), VAR, granger causality

Procedia PDF Downloads 484
2998 International Classification of Primary Care as a Reference for Coding the Demand for Care in Primary Health Care

Authors: Souhir Chelly, Chahida Harizi, Aicha Hechaichi, Sihem Aissaoui, Leila Ben Ayed, Maha Bergaoui, Mohamed Kouni Chahed

Abstract:

Introduction: The International Classification of Primary Care (ICPC) is part of the family of morbidity classification systems. It has 17 chapters, and each entry is coded by an alphanumeric code: the letter corresponds to the chapter, the number to a paragraph within the chapter. The objective of this study is to show the utility of this classification in coding the reasons for demand for care in primary health care (PHC), together with its advantages and limits. Methods: This is a cross-sectional descriptive study conducted in 4 PHC facilities in the Ariana district. Data on the demand for care during 2 days of the same week were collected. The coding of the information was done according to the ICPC. The data were entered and analyzed with the EPI Info 7 software. Results: A total of 523 demands for care were investigated. The patients who came for consultation were predominantly female (62.72%). Most of the consultants were young, with an average age of 35 ± 26 years. Among the ICPC rubrics, 'infections' was the most common reason with 49.9%, followed by 'other diagnoses' with 40.2%, 'symptoms and complaints' with 5.5%, 'trauma' with 2.1%, 'procedures' with 2.1% and 'neoplasm' with 0.3%. The main advantage of the ICPC is that it is a standardized tool: it is very suitable for classifying the reasons for demand for care in PHC according to their specificity and has the capacity to be used in a computerized PHC medical file. Its current limitations relate to the difficulty of classifying some reasons for demand for care. Conclusion: The ICPC has been developed to provide health care with a coding reference that takes its specificity into account. The CIM (the French acronym for the ICD) is in its 10th revision; the ICPC would likewise gain, from revision to revision, in efficiency, so as to be generalized and used by PHC teams.

Keywords: international classification of primary care, medical file, primary health care, Tunisia

Procedia PDF Downloads 267
2997 Development of a Sustainable Municipal Solid Waste Management for an Urban Area: Case Study from a Developing Country

Authors: Anil Kumar Gupta, Dronadula Venkata Sai Praneeth, Brajesh Dubey, Arundhuti Devi, Suravi Kalita, Khanindra Sharma

Abstract:

Increasing urbanization and industrialization have led to improvements in the standard of living. At the same time, however, the challenges due to improper solid waste management are also increasing. Municipal solid waste management is considered a vital step in the development of urban infrastructure. The present study focuses on developing a solid waste management plan for an urban area in a developing country. The current scenario of solid waste management practices in various urban bodies in India is summarized. Guwahati city, in the northeastern part of the country and also one of the targeted smart cities (under the government's Smart Cities program), was chosen as the case study to develop and implement the solid waste management plan. The whole city was divided into various divisions, waste samples were collected according to the American Society for Testing and Materials standard ASTM D5231-92 (2016) for each division in the city, and a composite sample was prepared to represent the waste from the entire city. The physical and chemical characterization of the solid waste, which mainly includes proximate and ultimate analysis, was carried out. Existing primary and secondary collection systems were studied and possibilities of enhancing the collection systems were discussed. The composition of the solid waste for the overall city was found to be: organic matter 38%, plastic 27%, paper and cardboard 15%, textile 9%, inert material 7% and others 4%. During the conference presentation, further characterization results in terms of thermogravimetric analysis (TGA), pH and water-holding capacity will be discussed. The waste management options, optimizing activities such as recycling, recovery, reuse and reduction, will be presented and discussed.

Keywords: proximate, recycling, thermal gravimetric analysis (TGA), solid waste management

Procedia PDF Downloads 191