Search results for: computer processing of large databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12481

1471 Linguistic Cyberbullying, a Legislative Approach

Authors: Simona Maria Ignat

Abstract:

Online bullying has been an increasingly studied topic in recent years. Different approaches, psychological, linguistic, or computational, have been applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, serving as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and of the algorithms that describe them. This research paper is focused on the identification of words or groups of words, categorized as “utterances”, with bullying effect, from the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of legal documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The following methods were applied to the first objective. Discourse analysis was applied to identify keywords with bullying effect in texts from the Google search engine (Images section). Transcription and anonymization were applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria were used to identify bullying utterances from Twitter. Texts with at least 30 occurrences on Twitter were grouped; they form the second linguistic corpus, Bullying Utterances from Twitter (CL2). The entries were identified by applying the legislative criteria on the principle of the BoW (bag-of-words) method, a method of extracting words or groups of words with the same meaning in any context. The method applied to the second objective was the conversion of parts of speech into alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of parts of speech was chosen on the criterion of relevance within the bullying message. An inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation of somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics such as threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable resource for detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of content concerns stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
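The keyword-matching step can be pictured with a minimal bag-of-words sketch, assuming a small hypothetical keyword set standing in for the legislative criteria (the paper's actual CL1/CL2 lexicons are not reproduced here):

```python
import re
from collections import Counter

# Hypothetical subset of legislative-criteria keywords (illustrative only,
# not the paper's actual lexicon).
LEGISLATIVE_KEYWORDS = {"threat", "harass", "assault", "intimidate"}

def bow_tokens(text: str) -> Counter:
    """Lowercase, strip punctuation, and count word occurrences (bag of words)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def flag_bullying_utterance(tweet: str, min_hits: int = 1) -> bool:
    """Flag a tweet containing at least `min_hits` keywords from the criteria set."""
    tokens = bow_tokens(tweet)
    hits = sum(count for word, count in tokens.items() if word in LEGISLATIVE_KEYWORDS)
    return hits >= min_hits

corpus = ["I will assault you tomorrow", "Nice weather today"]
print([t for t in corpus if flag_bullying_utterance(t)])  # only the first tweet is flagged
```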

Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, Twitter

Procedia PDF Downloads 70
1470 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system conform to grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1, and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid to maintain the amplitude, phase, and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed, and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, power supplied by the voltage source converter, etc. The results obtained from the proposed system are found satisfactory.
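A minimal sketch of one iteration of the variable-step P&O logic is given below; the zone boundary and step sizes are illustrative assumptions, not the values used in the study:

```python
def variable_step_po(v_prev, p_prev, v_now, p_now,
                     fine_step=0.1, large_step=1.0, zone_threshold=0.5):
    """One iteration of a variable-step Perturb & Observe MPPT.

    The dPpv/dVpv slope is split into zones: near the maximum power point
    (zone 0) a fine step is used; farther away (zones 1 and 2) a large step
    speeds up tracking. Boundaries and step sizes here are illustrative only.
    """
    dp = p_now - p_prev
    dv = (v_now - v_prev) if v_now != v_prev else 1e-9
    slope = dp / dv                                   # dPpv/dVpv
    step = fine_step if abs(slope) < zone_threshold else large_step
    direction = 1.0 if slope > 0 else -1.0            # perturb towards higher power
    return v_now + direction * step                   # new reference voltage for the boost converter
```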

Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique

Procedia PDF Downloads 581
1469 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions

Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin

Abstract:

In most cases, typical cysts are easily recognized at ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features are necessary to conclude a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. Therefore, we tried to develop an automatic method for the differentiation of cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gray levels (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first step, the region of interest (or contour of the lesion) was searched for and selected. Selection of this region was carried out using a sigmoid filter, where the threshold was calculated according to the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points with the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups according to the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average brightness gradient, etc. To determine the decisive criterion for belonging to one of the lesion groups (cystic or solid), a training set of these brightness-distribution characteristics was obtained separately for benign and malignant lesions. To test our approach, we used a set of 217 ultrasonic images of 107 cystic (including 53 atypical, difficult for bare-eye differentiation) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all (107, 100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. In comparison, with the bare eye it was possible to identify correctly all (107, 100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.
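A compact sketch of the second-step feature extraction is shown below, assuming an 8-bit grayscale region of interest; the exact feature definitions used in the study may differ:

```python
import numpy as np

def brightness_features(roi: np.ndarray) -> dict:
    """Statistical features of the brightness distribution inside a lesion ROI
    (8-bit grayscale). Feature names follow the abstract; the precise
    definitions are illustrative assumptions.
    """
    values = roi.astype(float).ravel()
    hist, _ = np.histogram(values, bins=256, range=(0, 255), density=True)
    hist = hist[hist > 0]
    gy, gx = np.gradient(roi.astype(float))
    return {
        "entropy": float(-np.sum(hist * np.log2(hist))),
        "mean_gradient": float(np.mean(np.hypot(gx, gy))),
        "q25": float(np.quantile(values, 0.25)),
        "q50": float(np.quantile(values, 0.50)),
        "q75": float(np.quantile(values, 0.75)),
        # slope of a linear fit to the sorted brightness values (regression coefficient)
        "linear_coeff": float(np.polyfit(np.arange(values.size), np.sort(values), 1)[0]),
    }
```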

Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography

Procedia PDF Downloads 253
1468 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies

Authors: Roberta Martino, Viviana Ventre

Abstract:

Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by the discount function, which decreases the utility value according to how far the actual reception of the outcome is from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of the rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represented the actual behavior of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process, namely the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
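One way to make the two components concrete, offered here purely as an illustrative parameterization and not as the authors' formulation, is the generalized hyperbolic discount function:

```latex
% Illustrative parameterization (an assumption, not the paper's decomposition):
\[
  D(t) = (1 + \alpha t)^{-\beta/\alpha}, \qquad \alpha, \beta > 0 .
\]
% Its instantaneous discount rate
\[
  \rho(t) = -\frac{D'(t)}{D(t)} = \frac{\beta}{1 + \alpha t}
\]
% separates an impatience component (\beta, the rate at t = 0) from a component
% governing how quickly the curve bends away from the exponential case
% (\alpha, which is recovered as \alpha \to 0); the latter can be read as the
% perceived indeterminacy of the future, i.e., uncertainty aversion.
```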

Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty

Procedia PDF Downloads 112
1467 Production of Nanocomposite Electrical Contact Materials Ag-SnO2, W-Cu and Cu-C in Thermal Plasma

Authors: A. V. Samokhin, A. A. Fadeev, M. A. Sinaiskii, N. V. Alekseev, A. V. Kolesnikov

Abstract:

Composite materials in which a metal matrix is reinforced by ceramic or metal particles are of great interest for use in the manufacturing of electrical contacts. Significant improvement of the composite physical and mechanical properties, as well as an increase in the performance parameters of composite-based products, can be achieved if a nanoscale structure in the composite materials is obtained by using nanosized powders as starting components. The results of the synthesis of nanosized composite powders (Ag-SnO2, W-Cu and Cu-C) in DC thermal plasma flows are presented in this paper. The investigations included the following processes: - Recondensation of the micron powder mixture Ag + SnO2 in a nitrogen plasma; - Reduction of the oxide powder mixture (WO3 + CuO) in a hydrogen-nitrogen plasma; - Decomposition of copper formate and copper acetate powders in a nitrogen plasma. Calculations of the equilibrium compositions of the multicomponent systems Ag-Sn-O-N, W-Cu-O-H-N and Cu-O-C-H-N in the temperature range of 400-5000 K were carried out to estimate basic process characteristics. Experimental studies of the processes were performed using a plasma reactor with a confined jet flow. The plasma jet net power was in the range of 2 - 13 kW, and the feedstock flow rate was up to 0.35 kg/h. The obtained powders were characterized by TEM, HR-TEM, SEM, EDS, ED-XRF, XRD, BET and QEA methods. Nanocomposite Ag-SnO2 (12 wt. %). Processing of the initial powder mixture (Ag-SnO2) in a nitrogen thermal plasma stream allowed the production of nanopowders with a specific surface area of up to 24 m2/g, consisting predominantly of particles with sizes less than 100 nm. According to the XRD results, tin was present in the obtained products as the SnO2 phase and also as intermetallic phases AgxSn. Nanocomposite W-Cu (20 wt. %). Reduction of the (WO3 + CuO) mixture in the hydrogen-nitrogen plasma provides W-Cu nanopowder with particle sizes in the range of 10-150 nm. The particles have a mainly spherical shape and a core-shell structure (tungsten core, copper shell). The thickness of the shell is about several nanometers; the shell is composed of copper and its oxides (Cu2O, CuO). The nanopowders had 1.5 wt. % oxygen impurity. Heat treatment in a hydrogen atmosphere allows the oxygen content to be reduced to less than 0.1 wt. %. Nanocomposite Cu-C. Copper nanopowders were found as products of the decomposition of the starting copper compounds. The nanopowders primarily had a spherical shape with a particle size of less than 100 nm. The main phase was copper, with small amounts of Cu2O and CuO oxides. Copper formate decomposition products had a specific surface area of 2.5-7 m2/g and contained 0.15 - 4 wt. % carbon; copper acetate decomposition products had a specific surface area of 5-35 m2/g and a carbon content of 0.3 - 5 wt. %. Compacting of the nanocomposites (sintering in hydrogen for Ag-SnO2 and electric spark sintering (SPS) for W-Cu) showed that samples having a relative density of 97-98 % and a submicron structure can be obtained. These studies indicate the possibility of using high-intensity plasma processes to create new technologies for producing nanocomposite materials for electrical contacts.

Keywords: electrical contact, material, nanocomposite, plasma, synthesis

Procedia PDF Downloads 224
1466 Imbalance on the Croatian Housing Market in the Aftermath of an Economic Crisis

Authors: Tamara Slišković, Tomislav Sekur

Abstract:

This manuscript examines the factors that affect demand and supply on the housing market in Croatia. The period from the beginning of this century until 2008 was characterized by a strong expansion of construction, housing and the real estate market in general. Demand for residential units was expanding, supported by favorable bank lending conditions. Indicators on the supply side, such as the number of newly built houses and the construction volume index, were also increasing. Rapid growth of demand, along with somewhat slower supply growth, led to a situation in which new apartments were sold before the completion of residential buildings. This resulted in a rise in housing prices, which was an indication of a clear link between housing prices and supply and demand on the housing market. However, after 2008 general economic conditions in Croatia worsened and demand for housing fell dramatically, while supply declined at a much slower pace. Given that there is a gap between supply and demand, it can be concluded that the housing market in Croatia is in imbalance. This trend is accompanied by a relatively small decrease in housing prices. The final result of such movements is a large number of unsold housing units at relatively high price levels. For this reason, it can be argued that housing prices are sticky and that, consequently, the price level in the aftermath of a crisis does not correspond to the discrepancy between supply and demand on the Croatian housing market. The degree of rigidity of housing prices can be determined by including the housing price as an explanatory variable in the housing demand function. Other independent variables are a demographic variable (e.g., the number of households), the interest rate on housing loans, households' disposable income, and rent. The equilibrium price is reached when the demand for housing equals its supply, and the speed of adjustment of actual prices to equilibrium prices reveals the extent to which prices are rigid. The latter requires the inclusion of lagged housing prices as an independent variable when estimating the demand function, as sketched below. We also observe the supply side of the housing market, in order to explain to what extent housing prices explain the movement of new construction activity, along with other variables that describe supply. In this context, we test whether new construction on the Croatian market depends on current prices or on prices with a time lag. The number of dwellings is used to approximate new construction (a flow variable), while housing prices (current or lagged), the quantity of dwellings in the previous period (a stock variable) and a series of costs related to new construction are the independent variables. We conclude that the key reason for the imbalance on the Croatian housing market should be sought in the relative relationship of the price elasticities of supply and demand.
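A minimal sketch of the demand and price-adjustment relations, with variable names assumed for illustration rather than taken from the estimated model, is:

```latex
% Demand equation (HH = households, r = mortgage rate, Y = disposable income, R = rent):
\[
  D_t = \beta_0 + \beta_1 P_t + \beta_2 HH_t + \beta_3 r_t + \beta_4 Y_t + \beta_5 R_t + \varepsilon_t ,
\]
% Partial price adjustment towards the equilibrium price P*:
\[
  P_t = P_{t-1} + \lambda \left( P_t^{*} - P_{t-1} \right), \qquad 0 < \lambda \le 1 ,
\]
% where a small adjustment speed \lambda corresponds to sticky (rigid) prices.
```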

Keywords: Croatian housing market, economic crisis, housing prices, supply imbalance, demand imbalance

Procedia PDF Downloads 257
1465 Integration of an Augmented Reality System for the Visualization of the HRMAS NMR Analysis of Brain Biopsy Specimens Using the Brainlab Cranial Navigation System

Authors: Abdelkrim Belhaoua, Jean-Pierre Radoux, Mariana Kuras, Vincent Récamier, Martial Piotto, Karim Elbayed, François Proust, Izzie Namer

Abstract:

This paper proposes an augmented reality system dedicated to neurosurgery in order to assist the surgeon during an operation. This work is part of the ExtempoRMN project (funded by Bpifrance), which aims at analyzing, during a surgical operation, the metabolic content of tumoral brain biopsy specimens by HRMAS NMR. Patients affected by a brain tumor (glioma) frequently need to undergo an operation in order to remove the tumoral mass. During the operation, the neurosurgeon removes biopsy specimens using image-guided surgery. The biopsy specimens removed are then sent for HRMAS NMR analysis in order to obtain a better diagnosis and prognosis. Image guidance refers to the use of MRI images and a computer to precisely locate and target a lesion (abnormal tissue) within the brain. This is performed using preoperative MRI images and the BrainLab neuro-navigation system. With the patient's MRI images loaded on the Brainlab Cranial neuro-navigation system in the operating theater, surgeons can better identify their approach before making an incision. The Brainlab neuro-navigation tool tracks the position of the instruments in real time and displays their position on the patient's MRI data. The results of the biopsy analysis by 1H HRMAS NMR are then sent back to the operating theater and superimposed on the 3D localization system, directly on the MRI images. The method we have developed to communicate between the HRMAS NMR analysis software and Brainlab makes use of a combination of C++, VTK and the Insight Toolkit using the OpenIGTLink protocol.

Keywords: neuro-navigation, augmented reality, biopsy, BrainLab, HR-MAS NMR

Procedia PDF Downloads 353
1464 Visual Representation and the De-Racialization of Public Spaces

Authors: Donna Banks

Abstract:

In 1998, Winston James called for more research on the Caribbean diaspora, and this ethnographic study, incorporating participant observation, interviews, and archival research, adds to the scholarship in this area. The research is grounded in the discipline of cultural studies but is cross-disciplinary in nature, engaging anthropology, psychology, and urban planning. This paper centers on community murals and their contribution to a more culturally diverse and representative community. While many museums are in the process of reassessing their collections, acquiring works, and developing programming to be more inclusive, and public art programs are investing millions of dollars in trying to fashion an identity in which all residents can feel included, local artists in neighborhoods in many countries have been using community murals to tell their stories. Community murals serve a historical, political, and social purpose and are an instrumental strategy in creative placemaking projects. Community murals add to the livability of an area. Even though official measurements of livability do not include race, ethnicity, and gender, which are egregious omissions, murals are a way to integrate historically underrepresented people into the wider history of a country. This paper draws attention to a creative placemaking project in the port city of Bristol, England, a city, like many others, with a history of spatializing race and racializing space. For this reason, Bristol's Seven Saints of St. Pauls® Art & Heritage Trail, which memorializes seven Caribbean-born social and political change agents, is examined. The Seven Saints of St. Pauls® Art & Heritage Trail is crucial to the city, as well as the country, in its contribution to the de-racialization of public spaces. Within British art history, with few exceptions, portraits of non-White people who are not depicted in a subordinate role have been absent. The artist of the mural project, Michelle Curtis, has changed this long-lasting racist and hegemonic narrative. By creating seven large-scale portraits of individuals not typically represented visually, the artist has added them to Britain's story. In these murals, however, we see more than just the likeness of a person; we are presented with a visual commentary that reflects each Saint's hybrid identity of being both Black Caribbean and British, as well as their social and political involvement. Additionally, because the mural project is part of a heritage trail, the murals are therapeutic and contribute to improving the well-being of residents and strengthening their sense of belonging.

Keywords: belonging, murals, placemaking, representation

Procedia PDF Downloads 78
1463 Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images

Authors: Kamal KC, Zhendong Yin, Dasen Li, Zhilu Wu

Abstract:

Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in the texture, color, and shape of leaves with only slight variation in leaf spots, and due to external environmental changes such as lighting and background noise. Deep convolutional neural networks (DCNN) have proven to be an effective tool for binary classification. In this paper, two methods for the binary classification of diseased plant leaves using DCNNs are presented: models created from scratch and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning of 5 pre-trained models. Training and testing of these models were performed on a plant leaf image dataset belonging to 16 distinct classes, containing a total of 22,265 images from 8 different plants, each consisting of a pair of healthy and diseased leaves. We introduce a deep CNN model, Optimized MobileNet. This model, with depthwise separable CNNs as a building block, attained an average test accuracy of 99.77%. We also present a fine-tuning method by introducing the concept of a convolutional block, which is a collection of different deep neural layers. Fine-tuned models proved to be efficient in terms of accuracy and computational cost. Fine-tuned MobileNet achieved an average test accuracy of 99.89% on 8 pairs of healthy and diseased leaf image sets.
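A minimal transfer-learning sketch in Keras is shown below; it freezes an ImageNet-pretrained MobileNet and adds a binary head, and is an illustrative baseline rather than the paper's Optimized MobileNet or fine-tuning scheme:

```python
import tensorflow as tf

# Illustrative transfer-learning setup for healthy-vs-diseased leaf classification.
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                        input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained depthwise-separable blocks

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed to be prepared separately
```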

Keywords: deep convolution neural network, depthwise separable convolution, fine-grained classification, MobileNet, plant disease, transfer learning

Procedia PDF Downloads 169
1462 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory

Authors: Roy. H. A. Lindelauf

Abstract:

Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdiction of and defense against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either a) describes the CKC with respect to one or more specific cyberattacks or b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate their resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied abundantly in the operations research and game theory domains; think, for instance, of resource-limited interdiction actions that maximally delay the completion time of a weapons project. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and there are players, each of whom is capable of executing a subset of the tasks. Additionally, task interdependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where and how to monitor) against an attacker scheduling a CKC. This study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.

Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques

Procedia PDF Downloads 127
1461 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis and electrophoresis on a microfluidic chip (MFC), mass spectrometry with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for the rapid fabrication of MFCs. A master-form of silicon and photoresist SU-8 2025 (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology which combines a T-injector and a simple cross was selected for the electrophoretic separation of the sample. Glass K8 and PDMS Sylgard® 184 (Dow Corning Corp.) were used for the fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample. Therefore, the estimation of the magnitude of the EOF and the ways of regulating it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., in which a borate buffer was used. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was controlled by the sessile drop method. The most effective way of modifying the surface of the MFCs, from the standpoint of obtaining the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This method of modification was selected for the treatment of the channels of the MFCs used for the separation of a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40 and 50 nucleotides. Electrophoresis was performed on the device MFAS-01 (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms. The time for the untreated MFC was ~275 s, and for the ones treated with the solution of Kolliphor® P 188 it was ~220 s. The study of physicochemical methods of surface modification of MFCs allowed us to choose the most effective way of reducing the EOF: modification with an aqueous solution of Kolliphor® P 188. In this case, the separation time of the mixture of oligonucleotides decreased by about 20%. Further optimization of the method of modifying the channels of the MFCs will allow the separation time of the sample to be decreased and the throughput of the analysis to be increased.

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 396
1458 Preparation of Hydrophobic Silica Membranes Supported on Alumina Hollow Fibers for Pervaporation Applications

Authors: Ami Okabe, Daisuke Gondo, Akira Ogawa, Yasuhisa Hasegawa, Koichi Sato, Sadao Araki, Hideki Yamamoto

Abstract:

Membrane separation draws attention as an energy-saving technology. Pervaporation (PV) uses hydrophobic ceramic membranes to separate organic compounds from industrial wastewaters. PV makes it possible to separate organic compounds from azeotropic mixtures and from aqueous solutions. For the PV separation of low concentrations of organics from aqueous solutions, hydrophobic ceramic membranes are expected to have high separation performance compared with that of conventional hydrophilic membranes. Membrane separation performance is evaluated based on the pervaporation separation index (PSI), which depends on both the separation factor and the permeate flux. Ingenuity is required to increase the PSI such that the permeate flux increases without reducing the separation factor, or the separation factor increases without reducing the flux. A thin separation layer without defects and pinholes is required. In addition, it is known that the flux can be increased without reducing the separation factor by reducing the diffusion resistance of the membrane support. In a previous study, we prepared hydrophobic silica membranes by a molecular templating sol-gel method using cetyltrimethylammonium bromide (CTAB) to form pores suitable for permitting the passage of organic compounds through the membrane. We separated low concentrations of organics from aqueous solutions by PV using these membranes. In the present study, hydrophobic silica membranes were prepared on a porous alumina hollow-fiber support that is thinner than the previously used alumina support. Ethyl acetate (EA) is used in large industrial quantities, so it was selected as the organic substance to be separated. Hydrophobic silica membranes were prepared by dip-coating porous alumina supports carrying a γ-alumina interlayer into a silica sol containing CTAB and vinyltrimethoxysilane (VTMS) as the silica precursor. Membrane thickness increases with the lifting speed of the sol in the dip-coating process. Different thicknesses of the γ-alumina layer were prepared by dip-coating the support into a boehmite sol at different lifting speeds (0.5, 1, 3, and 5 mm s-1). Silica layers were subsequently formed by dip-coating using an immersion time of 60 s and a lifting speed of 1 mm s-1. PV measurements of the EA (5 wt.%)/water system were carried out using VTMS hydrophobic silica membranes prepared on γ-alumina layers of different thicknesses. The water and EA fluxes showed substantially constant values despite the change in the lifting speed used to form the γ-alumina interlayer. All prepared hydrophobic silica membranes showed a higher PSI compared with the hydrophobic membranes prepared on the previously used alumina hollow-fiber support.
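For reference, the separation factor and PSI are commonly defined as follows; this is the standard convention, stated here as an assumption rather than quoted from the paper:

```latex
\[
  \alpha_{\mathrm{EA/water}} = \frac{y_{\mathrm{EA}}/y_{\mathrm{water}}}{x_{\mathrm{EA}}/x_{\mathrm{water}}},
  \qquad
  \mathrm{PSI} = J \left( \alpha_{\mathrm{EA/water}} - 1 \right),
\]
% where x and y are the weight fractions in the feed and permeate and J is the total
% permeate flux; raising J without lowering \alpha (or vice versa) raises the PSI.
```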

Keywords: membrane separation, pervaporation, hydrophobic, silica

Procedia PDF Downloads 390
1459 Relationship between Pushing Behavior and Subcortical White Matter Lesion in the Acute Phase after Stroke

Authors: Yuji Fujino, Kazu Amimoto, Kazuhiro Fukata, Masahide Inoue, Hidetoshi Takahashi, Shigeru Makita

Abstract:

Aim: Pusher behavior (PB) is a disorder in which stroke patients shift their body weight toward the affected (hemiparetic) side of the body and push away from the non-hemiparetic side. These patients often push even harder to resist any attempt to correct their position to upright. It is known that subcortical white matter lesions (SWML) usually correlate with gait or balance function in stroke patients. However, it is unclear whether SWML influences PB. The purpose of this study was to investigate whether SWML damage affects the severity of PB in acute stroke patients. Methods: Fourteen PB patients without thalamic or cortical lesions (mean age 73.4 years, 17.5 days from onset) participated in this study. Evaluation of PB was performed according to the Scale for Contraversive Pushing (SCP) for sitting and/or standing. We used modified criteria wherein the SCP subscale scores in each section of the scale were >0. As a clinical measurement, patients were evaluated with the Stroke Impairment Assessment Set (SIAS). For the depiction of SWML, we used T2-weighted fluid-attenuated inversion-recovery imaging. The degree of SWML damage was assessed using the Fazekas scale. Patients were divided into two groups according to the presence of SWML (SWML+ group: Fazekas scale grade 1-3; SWML- group: Fazekas scale grade 0). The independent t-test was used to compare the SCP and SIAS. This retrospective study was approved by the Ethics Committee. Results: In the SWML+ group, the SCP was 3.7±1.0 points (mean±SD) and the SIAS was 28.0 points (median). In the SWML- group, the SCP was 2.0±0.2 points and the SIAS was 31.5 points. The SCP was significantly higher in the SWML+ group than in the SWML- group (p<0.05). The SIAS did not differ significantly between the groups (p>0.05). Discussion: The posterior thalamus has been considered the neural structure that processes the afferent sensory signals mediating graviceptive information about upright body orientation in humans. Therefore, many studies have reported that PB is typically associated with unilateral lesions of the posterior thalamus. However, our result indicates that extra-thalamic brain areas also contribute to the network controlling upright body posture. Therefore, SWML might induce dysfunction through malperfusion in distant thalamic or other structurally intact neural structures. This study had a small sample size; therefore, future studies should be performed with a large number of PB patients. Conclusion: The present study suggests that SWML can be definitely associated with PB. Patients with SWML may be severely incapacitated.

Keywords: pushing behavior, subcortical white matter lesion, acute phase, stroke

Procedia PDF Downloads 233
1458 Clustering Locations of Textile and Garment Industries to Compare with the Future Industrial Cluster in Thailand

Authors: Kanogkan Leerojanaprapa

Abstract:

The textile and garment industry used to be a major exporting industry of Thailand. Owing to the loss of the nation's price competitiveness after the termination of the EU's GSP (Generalised Scheme of Preferences) and the 'Nationwide Minimum Wage Policy', under which Thai employers must pay all employees at least 300 baht (about $10) a day, the supply chains of the Thai textile and garment industry have been affected and need to be reformed. Therefore, whether the Thai textile and garment industry will survive is a matter of concern. It is also a challenge for the government to decide which industries should be promoted as the future industries of Thailand. Recently, the Thai government launched the Cluster-based Special Economic Development Zones Policy for promoting business clusters (effective September 16, 2015). It defines a cluster as a concentration of interconnected businesses and related institutions that operate within the same geographic area; textiles and garments is one of the target industrial clusters, and 9 provinces are targeted (Bangkok, Kanchanaburi, Nakhon Pathom, Ratchaburi, Samut Sakhon, Chonburi, Chachoengsao, Prachinburi, and Sa Kaeo). The cluster zones are defined to link a west-east corridor connecting manufacturing sources in Cambodia and Myanmar to Bangkok, which is promoted as a design, sourcing, and trading hub. The Thai government will provide tax and non-tax incentives for the targeted industries within the clusters and expects these businesses to move to where they can gain the most benefit, which will identify the future industrial clusters. This research will show the difference between the current cluster and the future cluster defined by the target provinces for textiles and garments. The current cluster is analysed from secondary data. Four characteristics, namely the numbers of plants in Spinning, weaving and finishing of textiles; Manufacture of made-up textile articles, except apparel; Manufacture of knitted and crocheted fabrics; and Manufacture of other textiles, not elsewhere classified, across 77 provinces in total, are clustered by K-means cluster analysis and hierarchical cluster analysis, as sketched below. In addition, the clusters can be confirmed, and the variables that contribute most to the obtained cluster solution can be shown, with an ANOVA test. The analysis assigns the 22 provinces in which textile or garment plants are located to 3 clusters. Provinces in cluster 1 tend to have large numbers of plants (only Bangkok); provinces in cluster 2 tend to have moderate numbers of plants (Samut Prakan, Samut Sakhon and Nakhon Pathom); and provinces in cluster 3 tend to have small numbers of plants (the other 18 provinces). The same methodology can be implemented in other industries for future study.
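A minimal sketch of the K-means step with an ANOVA check is given below; the plant counts are hypothetical placeholders, not the secondary data used in the study:

```python
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

# Hypothetical plant counts per province for the four textile sub-sectors
# (values and province subset are illustrative only).
df = pd.DataFrame({
    "spinning_weaving": [180, 60, 55, 50, 12, 10, 8, 3],
    "made_up_textiles": [150, 45, 40, 38, 9, 8, 6, 2],
    "knitted_fabrics":  [120, 35, 30, 28, 7, 6, 5, 1],
    "other_textiles":   [200, 70, 65, 60, 15, 12, 10, 4],
}, index=["Bangkok", "Samut Prakan", "Samut Sakhon", "Nakhon Pathom",
          "Chonburi", "Ratchaburi", "Prachinburi", "Sa Kaeo"])

df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df)
print(df.sort_values("cluster"))

# One-way ANOVA per variable: which characteristics separate the clusters most?
for col in df.columns[:-1]:
    groups = [df.loc[df["cluster"] == c, col] for c in sorted(df["cluster"].unique())]
    print(col, "p =", f_oneway(*groups).pvalue)
```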

Keywords: ANOVA, hierarchical cluster analysis, industrial clusters, K -means cluster analysis, textile and garment industry

Procedia PDF Downloads 200
1457 Development of Microsatellite Markers for Dalmatian Pyrethrum Using Next-Generation Sequencing

Authors: Ante Turudic, Filip Varga, Zlatko Liber, Jernej Jakse, Zlatko Satovic, Ivan Radosavljevic, Martina Grdisa

Abstract:

Microsatellites (SSRs) are highly informative repetitive sequences of 2-6 base pairs, which are the most used molecular markers in assessing the genetic diversity of plant species. Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip) is an outcrossing diploid (2n = 18) endemic to the eastern Adriatic coast and a source of the natural insecticide pyrethrin. Due to the high repetitiveness and large size of the genome (haploid genome size of 9.58 pg), previous attempts to develop microsatellite markers using standard methods were unsuccessful. A next-generation sequencing (NGS) approach was applied to genomic DNA extracted from fresh leaves of Dalmatian pyrethrum. The sequencing was conducted using a NovaSeq 6000 Illumina sequencer, after which almost 400 million high-quality paired-end reads were obtained, with a read length of 150 base pairs. The short reads were assembled by combining two approaches: (1) de-novo assembly and (2) joining of overlapping paired-end reads. In total, 6,909,675 contigs were obtained, with an average contig length of 249 base pairs. Of the resulting contigs, 31,380 contained one or multiple microsatellite sequences; in total, 35,556 microsatellite loci were identified. Among the detected microsatellites, dinucleotide repeats were the most frequent, accounting for more than half of all microsatellites identified (21,212; 59.7%), followed by trinucleotide repeats (9,204; 25.9%). Tetra-, penta- and hexanucleotide repeats had similar frequencies of 1,822 (5.1%), 1,472 (4.1%), and 1,846 (5.2%), respectively. Contigs containing microsatellites were further filtered by SSR pattern type, transposon occurrences, assembly characteristics, GC content, and the number of occurrences against the previously published draft genome of T. cinerariifolium. After the selection process, 50 microsatellite loci were used for primer design. The designed primers were tested on samples from five distinct populations, and 25 of them showed a high degree of polymorphism. The selected loci were then genotyped on 20 samples belonging to one population, resulting in 17 microsatellite markers. The availability of codominant SSR markers will significantly improve the knowledge of the population genetic diversity and structure, as well as the complex genetics and biochemistry, of this species. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential’ - (PyrDiv) (IP-06-2016-9034).
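A minimal SSR-detection sketch over an assembled contig is shown below; the motif-length and repeat-count thresholds are illustrative assumptions rather than the project's actual pipeline settings:

```python
import re

# Screen a contig for microsatellites: motifs of 2-6 bp repeated at least 5 times in tandem.
SSR_RE = re.compile(r"([ACGT]{2,6}?)\1{4,}")

def find_ssrs(contig: str):
    """Return (motif, repeat_count, start_position) for each microsatellite found."""
    return [(m.group(1), len(m.group(0)) // len(m.group(1)), m.start())
            for m in SSR_RE.finditer(contig.upper())]

demo = "TTGC" + "AG" * 6 + "ATCC" + "AT" * 6 + "GG"
print(find_ssrs(demo))   # [('AG', 6, 4), ('AT', 6, 20)]
```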

Keywords: genome assembly, NGS, SSR, Tanacetum cinerariifolium

Procedia PDF Downloads 114
1456 Using Printouts as Social Media Evidence and Its Authentication in the Courtroom

Authors: Chih-Ping Chang

Abstract:

Different from traditional objective evidence, social media evidence has its own characteristics: it is easily tampered with, it is recoverable, and it cannot be read without using other devices (such as a computer). The original identity of a simple screenshot taken from a social network site must be questioned. When the police search and seize digital information, a common practice is to directly print out the digital data obtained and ask the parties present to sign the printouts, without taking the original digital data back. In addition to the issue of original identity, this way of obtaining evidence may have two further consequences. First, it can easily be alleged that the evidence was tampered with, that the police wanted to frame the suspect and falsified evidence. Second, it is not easy to discover hidden information. The core evidence associated with a crime may not appear in the contents of files. By examining the original file, data related to the file, such as the original producer, creation time, modification date, and even GPS location, can be revealed from hidden information. Therefore, how to present this kind of evidence in the courtroom will arguably be the most important task when ruling on social media evidence. This article will first introduce forensic software, such as EnCase, TCT, and FTK, and analyze their function in proving identity with other digital data. Then, turning back to the court, the second part of this article will discuss the legal standard for the authentication of social media evidence and the application of such forensic software in the courtroom. In conclusion, this article will provide a rethinking of what kind of authenticity this rule of evidence is chasing. Does the legal system automatically operate as a transcription of scientific knowledge? Or does it, furthermore, want to better render justice, not only under scientific fact but through multivariate debate?

Keywords: federal rule of evidence, internet forensic, printouts as evidence, social media evidence, United States v. Vayner

Procedia PDF Downloads 278
1455 Analysing Anime as the Narration of Resistance: Case Study of Japanese Vampire Anime

Authors: Patrycja Pichnicka

Abstract:

Anime is the Japanese art of animation and a kind of Japanese animated movie, differing from Western ones in its specific features. In a world dominated by live-action movies, mostly those produced in the United States, Japanese animated movies, which constitute a large part of the Japanese movie industry, play the role of the Other. They adapt elements of Western culture and technology to create something that resists global Western domination. This phenomenon is particularly interesting to observe in the case of narrations borrowed from Western culture yet transformed in a specific manner, such as the Vampire Narration. The phenomenon should be examined using the theory of cultural adaptation of Siergiei Arutiunow, as well as the theory of cultural hegemony and postcolonial theories, including the theory of the discourse of resistance. Relations of cultural hegemony and resistance have been mentioned in the works of Susan Napier; however, they are worth developing fully. Anime's relations to the globally dominant culture reveal a tension between submission and resistance in which non-Western identity is constructed and performed. Nonetheless, the tension between the Global/Western and the Japanese is not the only one existing in contemporary Japanese society and culture. Sexual, gender, class, and ethnic issues are also expressed in and through pop culture narrations. Using the basic division of the types of cultural adaptation, we can trace the evolution of the Japanese cultural attitude towards the West, expressed in the Vampire Narration from the time of the American occupation till now. These attitudes changed from submissive assimilation or reproduction of cultural models, through simple opposition, to the more nuanced attitude of today. However, according to Kimberlé Crenshaw's intersectional theory, there is no single category of discrimination or submission. There are individuals or groups existing at the intersection of two or more categories of emancipation. If the Japanese were culturally subordinated to the Westerner, the Japanese woman was doubly subordinated: as a woman and as a Japanese. The emancipation of one group can deepen the submission of another, of the internal Other, of the group in which two or more categories of domination/submission intersect. That is why some Japanese female authors enthusiastically reproduce Western cultural models, even if this means a cultural hegemony of the West over the Japanese. They see, as women, more liberal attitudes towards their gender in Western culture than in Japanese culture as it is constructed and produced by Japanese men. Japanese anime is the realm in which sophisticated art meets social tendencies and cultural attitudes. The examination of anime permits the study of the composite contemporary Japanese identity, as well as general rules of cultural relations.

Keywords: anime, cultural hegemony, intercultural relations, resistance, vampire narration

Procedia PDF Downloads 130
1454 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each searching loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and searching for the optimum solution. The meta-model is a regression model used to transform an implicit objective function into an explicit one and, in turn, decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with CSI SAP software. These data have been used later in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R2 in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
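A minimal sketch of fitting and reusing a pure quadratic meta-model (linear plus squared terms, no interactions) is shown below; the synthetic data stand in for the analysis results produced with the structural software:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row holds grouped section design variables,
# y is the implicit objective (e.g. frame weight) evaluated by analysis/design software.
X = np.random.default_rng(0).uniform(1, 10, size=(60, 4))
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.8 * X[:, 2] + 0.3 * X[:, 3] ** 2
     + np.random.default_rng(1).normal(0, 0.5, 60))

# Pure quadratic meta-model: original variables plus their squares.
X_meta = np.hstack([X, X ** 2])
meta_model = LinearRegression().fit(X_meta, y)
print("R^2 =", meta_model.score(X_meta, y))

def objective(x):
    """Explicit surrogate objective, usable inside any search loop
    without re-running structural analysis."""
    x = np.asarray(x).reshape(1, -1)
    return float(meta_model.predict(np.hstack([x, x ** 2])))
```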

Keywords: meta-model, objective function, steel frames, seismic analysis, design

Procedia PDF Downloads 227
1453 Business Program Curriculum with Industry-Recognized Certifications: An Empirical Study of Exam Results and Program Curriculum

Authors: Thomas J. Bell III

Abstract:

Pursuing a business degree is fraught with perplexing questions regarding rising tuition costs and the immediate value of earning a degree. Any decision to pursue an undergraduate business degree is perceived to have value if it facilitates post-graduate job placement. Business programs lose value in the absence of innovation that closes the skills gap between recent graduates and employment opportunities. Industry-based certifications are seemingly becoming a differentiating requirement among job applicants. Texas Wesleyan University offers a Computer Information System (CIS) program with an innovative curriculum that integrates industry-recognized certification training into its traditional curriculum of core subjects and electives. This paper explores a culture of innovation in the CIS business program curriculum that creates sustainable stakeholder value for students, employers, the community, and the university. A quantitative research methodology surveying over one hundred students in the CIS program will be used to examine factors influencing the success or failure of students taking certification exams. Researchers will analyze control variables to identify specific correlations between practice exams, teaching pedagogy, study time, age, work experience, etc. This study compared various exam preparation techniques to the corresponding exam results across several industry certification exams. The findings will aid in understanding which control variables correlate positively and negatively with exam results. Such discovery may provide useful insight into pedagogical impact indicators that positively contribute to certification exam success and curriculum enhancement.

Keywords: taking certification exams, exam training, testing skills, exam study aids, certification exam curriculum

Procedia PDF Downloads 70
1452 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to provide people with the necessary facilities for proper access to government information and services, improving the quality of services and providing broad opportunities to participate in democratic processes and institutions. It makes it possible to use information technology to deliver government services to the public at any time, which increases people's satisfaction and participation in political and economic activities. The expansion of e-government services and their movement towards intelligent operation has the ability to re-establish the relationship between the government and citizens and among the elements and components of the government. Electronic government is the result of the use of information and communication technology (ICT); implementing it at the government level produces tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, which in turn brings widespread public satisfaction. The main level of electronic government services has today become tangible with the presence of artificial intelligence systems, and recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and the classification of data. With the use of deep learning tools, artificial intelligence can mean a significant improvement in the delivery of services to citizens and can uplift the work of public service professionals while also inspiring a new generation of technocrats to enter government. This smart revolution may set aside some functions of the government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in using artificial intelligence, the balance of power will change and private companies, as pioneers in this field, will monopolize everything; the world order will then depend on rich multinational companies, and algorithmic systems will, in fact, become the ruling systems of the world. It can be said that, currently, the revolution in information technology and biotechnology has been started by engineers, large companies, and scientists who are rarely aware of the political complexities of their decisions and certainly do not represent anyone. Therefore, it seems that if liberalism, nationalism, or any other religion wants to organize the world of 2050, it should not only rationalize the concepts of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes caused by artificial intelligence in the political and economic order will therefore lead to a major change in the way all countries deal with the phenomenon of digital globalization. In this paper, while debating the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 80
1451 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework

Authors: Ayat-Allah Bouramdane

Abstract:

Solar Photovoltaic (PV) and Concentrated Solar Power (CSP) do not burn fossil fuels and could therefore meet the world's needs for low-carbon power generation, as they do not release greenhouse gases into the atmosphere while generating electricity. The power output of a solar PV module or CSP collector depends on the temperature and the amount of solar radiation received by its surface, so identifying the most suitable locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on, plausible approach to the multi-criteria evaluation of site suitability for PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is used to specify the criteria and sub-criteria; to identify, for each GRI layer, the unsuitable areas and the areas of low, moderate, high, and very high suitability; to build the pairwise comparison matrix at each level of the hierarchy based on expert knowledge; and to calculate the weights with AHP in order to produce the final suitability map of solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they also explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, with the city of Dakhla classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the food-energy-water nexus. The adopted methodology and resulting suitability map could be used by researchers or engineers to inform decision-makers on the selection, design, and planning of future solar plant sites, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
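
A minimal sketch of the AHP weighting step described above, assuming a hypothetical four-criterion pairwise comparison matrix (the criteria and judgments are illustrative, not the study's actual expert matrix): the weights come from the principal eigenvector, and the consistency ratio checks the coherence of the expert judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise comparison matrix
    via the principal eigenvector, and report the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = ci / ri if ri else 0.0                    # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Hypothetical criteria: irradiation, slope, distance to grid, distance to roads.
A = np.array([
    [1,     3,     5,     7],
    [1 / 3, 1,     3,     5],
    [1 / 5, 1 / 3, 1,     3],
    [1 / 7, 1 / 5, 1 / 3, 1],
])
weights, cr = ahp_weights(A)
print(weights.round(3), round(cr, 3))
```

The weights would then be applied to the reclassified GRI layers to produce the composite suitability map.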

Keywords: analytic hierarchy process, concentrated solar power, dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability

Procedia PDF Downloads 151
1450 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Dairy farm experts and farmers today recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, indicate abnormal health, and even help manage healthy calving times and processes. Traditionally, BCS is assessed by animal experts or trained technicians through visual observation, focusing on the pin bones, the pin, thurl, and hook areas, the shape of the tail head, the hook angles, and the short and long ribs. Because this traditional technique is manual and subjective, it can produce inconsistent scores and is not cost-effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks, or anatomical points, are automatically extracted from the cow image region using image processing techniques. Specifically, 23 anatomical points are located in the regions of the ribs, hook bones, pin bone, thurl, and tail head; these points are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. The second module therefore investigates algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is fitted to a subset of anatomical points, and the regression coefficients are used as part of the feature vector in the scoring process; in addition, the angles at the thurl, pin, tail head, and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are used to train a Markov classification process that assigns a BCS to each individual cow, and the assigned scores are then revised with a multiple regression method to produce the final BCS. To confirm the validity of the proposed method, a monitoring video camera was set up at the milk rotary parlor to capture top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow, and the multiple regression calculator and Markov chain classification process are then used to produce the estimated body condition score. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.
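
A minimal sketch of the second module's feature construction, assuming hypothetical landmark coordinates: a second-order polynomial is fitted to a subset of anatomical points and its coefficients, together with an angle at the thurl, form part of the feature vector. The Markov classification and multiple regression stages are not shown.

```python
import numpy as np

def poly2_coefficients(points):
    """Fit y = a*x^2 + b*x + c to (x, y) anatomical points and return
    the coefficients (a, b, c) as shape features."""
    pts = np.asarray(points, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

def angle_at(vertex, p1, p2):
    """Angle (degrees) formed at 'vertex' by the segments to p1 and p2,
    e.g. at the hook bone, pin bone, thurl, or tail head."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark coordinates (pixels) from a top-view image.
tailhead_region = [(10, 42), (14, 38), (18, 36), (22, 37), (26, 40)]
hook, thurl, pin = (5, 20), (15, 28), (25, 22)

feature_vector = np.concatenate([
    poly2_coefficients(tailhead_region),   # curvature of the tail-head contour
    [angle_at(thurl, hook, pin)],          # angle at the thurl
])
print(feature_vector.round(3))
```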

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 143
1449 Investigating Student Behavior in Adopting Online Formative Assessment Feedback

Authors: Peter Clutterbuck, Terry Rowlands, Owen Seamons

Abstract:

In this paper, we describe one critical research program within a complex, ongoing multi-year project (2010 to 2014 inclusive) whose overall goal is to improve learning outcomes for first-year undergraduate commerce/business students in an Information Systems (IS) subject with very large enrolment. The research program described here analyses student attitudes and decision making in relation to the availability of formative assessment feedback via Web-based real-time conferencing and document exchange software (Adobe Connect). The formative assessment feedback between teaching staff and students concerns an authentic, problem-based, team-completed assignment. Student attitudes and decision making are investigated through a qualitative and then a quantitative application of the Theory of Planned Behavior (TPB), using two statistically significant and separate trial samples of the enrolled students. The initial qualitative TPB investigation revealed that perceived self-efficacy, improved time management, and lecturer-student relationship building were the major factors shaping an overall favorable student attitude towards online feedback, while some students expressed valid concerns about perceived control limitations in the online feedback protocols. The subsequent quantitative TPB investigation then confirmed that attitude towards usage, subjective norms surrounding usage, and perceived behavioral control of usage were all significant in shaping student intention to use the online feedback protocol, with these three variables explaining 63 percent of the variance in behavioral intention. The identification of perceived behavioral control as a significant determinant of student usage of a specific technology component within a virtual learning environment (VLE) suggests that VLEs should be viewed not as a single, atomic entity but as a spectrum of technology offerings ranging from the mature and simple (e.g., email, Web downloads) to the cutting-edge and challenging (e.g., Web conferencing and real-time document exchange); that is, not all VLEs should be considered the same. The results suggest that tertiary students have the technological sophistication to assess a VLE in this more selective manner.
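
A minimal sketch of the quantitative TPB step on synthetic data: behavioral intention is regressed on attitude, subjective norm, and perceived behavioral control, and the R² plays the role of the 63 percent of variance reported above. The data and coefficients here are illustrative, not the study's.

```python
import numpy as np

# Synthetic survey scores (1-7 Likert scale), for illustration only:
# columns = attitude, subjective norm, perceived behavioral control.
rng = np.random.default_rng(0)
X = rng.uniform(1, 7, size=(200, 3))
intention = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.8, 200)

# Ordinary least squares: intention ~ attitude + norm + control (+ intercept).
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, intention, rcond=None)

pred = X1 @ beta
r2 = 1 - np.sum((intention - pred) ** 2) / np.sum((intention - intention.mean()) ** 2)
print("coefficients:", beta.round(2), "R^2:", round(r2, 2))
```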

Keywords: formative assessment feedback, virtual learning environment, theory of planned behavior, perceived behavioral control

Procedia PDF Downloads 384
1448 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some degree of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing regions, are misdiagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently attracted strong interest in ophthalmology for computer-based imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of the retina. In ophthalmology, DL has been applied to fundus photographs, OCT, and visual fields, achieving robust classification performance in detecting retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. The purpose of this project was therefore to develop an AI model that uses machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture, based on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately help ophthalmologists diagnose patients with these retinal diseases more quickly and more accurately, facilitating earlier treatment and improving post-treatment outcomes.
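
A minimal sketch of a residual CNN classifier for OCT scans of the kind described above, assuming PyTorch; the layer sizes and four-class output are illustrative assumptions, and the paper's cyclic pooling variant is not reproduced.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # skip connection

class OCTClassifier(nn.Module):
    """Small residual CNN mapping a grayscale OCT scan to disease classes."""
    def __init__(self, num_classes=4):   # hypothetical number of classes
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

model = OCTClassifier()
logits = model(torch.randn(2, 1, 224, 224))   # two dummy 224x224 OCT scans
print(logits.shape)                            # -> torch.Size([2, 4])
```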

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 161
1447 Live Music Promotion in Burundi Country

Authors: Aster Anderson Rugamba

Abstract:

Context: Live music in Burundi is currently facing neglect and a decline in popularity, resulting in artists struggling to generate income from this field. Additionally, live music from Burundi has not been able to gain traction in the international market. It is essential to establish various structures and organizations to promote cultural events and support artistic endeavors in music and performing arts. Research Aim: The aim of this research is to seek new knowledge and understanding in the field of live music and its content in Burundi. Furthermore, it aims to connect with other professionals in the industry, make new discoveries, and explore potential collaborations and investments. Methodology: The research will utilize both quantitative and qualitative research methodologies. The quantitative approach will involve a sample size of 57 musician artists in Burundi. It will employ closed-ended questions and gather quantitative data to ensure a large sample size and high external validity. The qualitative approach will provide deeper insights and understanding through open-ended questions and in-depth interviews with selected participants. Findings: The research expects to find new theories, methodologies, empirical findings, and applications of existing knowledge that can contribute to the development of live music in Burundi. By exploring the challenges faced by artists and identifying potential solutions, the study aims to establish live music as a catalyst for development and generate a positive impact on both the Burundian and international community. Theoretical Importance: Theoretical contributions of this research will expand the current understanding of the live music industry in Burundi. It will propose new theories and models to address the issues faced by artists and highlight the potential of live music as a lucrative and influential industry. By bridging the gap between theory and practice, the research aims to provide valuable insights for academics, professionals, and policymakers. Data Collection and Analysis Procedures: Data will be collected through surveys, interviews, and archival research. Surveys will be administered to the sample of 57 musician artists, while interviews will be conducted to gain in-depth insights from selected participants. The collected data will be analyzed using both quantitative and qualitative methods, including statistical analysis and thematic analysis, respectively. This mixed-method approach will ensure a comprehensive and rigorous examination of the research questions addressed.

Keywords: business music in burundi, music in burundi, promotion of art, burundi music culture

Procedia PDF Downloads 47
1446 Adaption to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be mitigated; nevertheless, adaptation to already changed climate conditions must also be addressed in many sectors. The Corona crisis has recently shown the decisive role of economic sectors with high value added, so the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage-, and cost-efficient manner within a network of global value creation can leave companies vulnerable to climate-related disruptions. For example, the total damage after the 2011 flood disaster in Thailand, including costs for delivery failures, was estimated at 45 billion US dollars worldwide; German car manufacturers were affected by supply bottlenecks and had to close their plants in Thailand for a short time, and another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented that should enable manufacturing companies to derive their own climate adaptation strategies from a mix of different actions. The approach is designed using data from a regional study of small, medium, and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany), from which the current state of climate adaptation efforts is evaluated. The results are used, first, to collect individual actions for manufacturing companies and, second, to identify further actions. The climate adaptation activities are then clustered according to the company's scope of activity. The combination of different actions, e.g. renewing the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and characteristics of the actions included in their climate adaptation strategy. Indicators such as economic performance, ecological impact, and stakeholder satisfaction compare the success of the respective measures in a competitive format against other virtual companies deriving their own strategies. Playing through climate change scenarios with targeted adaptation actions illustrates the impact of different actions, and their combinations, on the fictional company.
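
A minimal sketch of how the game's action catalogue and indicator scoring could be represented; the actions, impact values, and indicator weights are illustrative assumptions, not data from the Mainfranken study.

```python
# Hypothetical catalogue of climate-adaptation actions and their (illustrative)
# impact on the three indicators used in the game.
ACTIONS = {
    "thermal insulation of building envelope": {"economic": -2, "ecologic": 3, "stakeholder": 1},
    "dual sourcing of critical suppliers":     {"economic": -1, "ecologic": 0, "stakeholder": 3},
    "on-site rainwater retention":             {"economic": -1, "ecologic": 2, "stakeholder": 1},
}

def strategy_score(chosen_actions, weights=None):
    """Aggregate the indicator impacts of a chosen action mix into a single
    score so that competing (virtual) companies can be compared."""
    if weights is None:
        weights = {"economic": 0.4, "ecologic": 0.4, "stakeholder": 0.2}
    totals = {k: 0 for k in weights}
    for action in chosen_actions:
        for indicator, impact in ACTIONS[action].items():
            totals[indicator] += impact
    return sum(weights[k] * totals[k] for k in weights), totals

score, totals = strategy_score(["thermal insulation of building envelope",
                                "dual sourcing of critical suppliers"])
print(score, totals)
```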

Keywords: business strategy, climate change, climate adaption, game-based learning

Procedia PDF Downloads 190
1445 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points

Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong

Abstract:

This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which play an increasingly important role in trajectory design for deep space missions and can serve as an effective alternative to a direct Earth-Mars transfer in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By refuelling cargo spacecraft at spaceports located there, an interplanetary round-trip exploration shuttle based on such facilities could form a reusable transportation system. In addition, a spacecraft cruising along invariant manifolds can in some cases save a large amount of fuel. It is therefore worthwhile to search for efficient transfer strategies that exploit the invariant manifolds associated with libration points. It was found that, with appropriate design, Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits can be connected with reasonable fuel consumption and flight duration. In this paper, the halo hopping method and the coplanar circular method are briefly introduced: the former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter constructs escape and capture trajectories to and from Halo orbits using impulsive maneuvers at the periapsis of the manifolds about the libration points. Transfer strategies designed with the two methods are then presented, and a comparative performance analysis is carried out. The comparison is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight. The numerical results show that the coplanar circular method has certain advantages in cost or duration. Finally, an optimized transfer strategy with engineering constraints is identified and shown to be an effective alternative for a given direct transfer mission. This paper thus investigates the main methods and presents an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer a direct transfer strategy because of its relatively short time of flight, the strategies presented here can still be regarded as effective alternatives, given the advantages mentioned above and a longer departure window than a direct transfer.
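
A minimal sketch of the dynamical model underlying both methods: the circular restricted three-body problem equations of motion in the rotating frame, propagated from an assumed initial state near Sun-Earth L2. The mass parameter and initial guess are illustrative; a real Halo orbit would be obtained by differential correction of such a guess.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.003e-6   # approximate Sun-Earth mass parameter (assumed for illustration)

def cr3bp(t, s, mu=MU):
    """CR3BP equations of motion in the rotating, non-dimensional frame;
    state = (x, y, z, vx, vy, vz)."""
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu) ** 2 + y ** 2 + z ** 2)        # distance to the Sun
    r2 = np.sqrt((x - 1 + mu) ** 2 + y ** 2 + z ** 2)    # distance to the Earth
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

# Propagate an initial guess near the Sun-Earth L2 region for illustration.
s0 = [1.0105, 0.0, 0.002, 0.0, 0.01, 0.0]
sol = solve_ivp(cr3bp, (0.0, 3.0), s0, rtol=1e-10, atol=1e-12)
print(sol.y[:3, -1])   # final position in the rotating frame
```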

Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points

Procedia PDF Downloads 226
1444 Electronic Commerce in Georgia: Problems and Development Perspectives

Authors: Nika GorgoShadze, Anri Shainidze, Bachuki Katamadze

Abstract:

In parallel with the development of the digital economy in the world, electronic commerce is also developing widely. The internet and information and communication technology (ICT) have created new business models and have promoted market consolidation, a sustainable business environment, the creation of a digital economy, the facilitation of business and trade, business dynamism, higher competitiveness, and more. Electronic commerce involves goods and services that are sold via the internet, and it is a field of business that leading world brands now use very effectively. Research on the internet market in Georgia found that the quality of the internet is high in Tbilisi and low in the regions. The internet market of Tbilisi can be characterized as a high-speed, competitive, and cost-effective internet service market. The development of electronic commerce in Georgia faces organizational and methodological as well as legal problems. First of all, a legal framework should be developed to regulate the responsibilities of organizations. The Ministry of Economy and Sustainable Development will play a crucial role in creating this legal framework, and the Ministry of Justice as well as the agency for data exchange will also be involved in the process. Measures should be taken to make electronic commerce in Georgia easier. Business companies could be offered a model for obtaining low-cost, comprehensive service, and a service centre should be created to support all kinds of online shopping. This would be an interesting innovation that would facilitate online shopping in Georgia. The development of electronic business in Georgia requires a modernized telecommunications infrastructure (especially in the regions) as well as the solution of institutional and socio-economic problems. Issues concerning internet availability and computer skills are also important.

Keywords: electronic commerce, internet market, electronic business, information technology, information society, electronic systems

Procedia PDF Downloads 367
1443 Empowering Youth Through Pesh Poultry: A Transformative Approach to Addressing Unemployment and Fostering Sustainable Livelihoods in Busia District, Uganda

Authors: Bisemiire Anthony

Abstract:

PESH Poultry is a business project proposed specifically to address unemployment and income-related problems affecting youths in the Busia district. The project is intended to transform the lives of young people economically, socially, and behaviorally, and to improve the domestic well-being of the community at large. PESH Poultry is a start-up poultry farm that will keep poultry birds, broilers and layers, for the production of quality and affordable poultry meat and eggs, respectively, and other poultry derivatives, targeting consumers in eastern Uganda, for example hotels, restaurants, households, and bakeries. We intend to use a semi-intensive system of farming, in which water and some food are provided in a separate nighttime shelter for the birds; our location will be Lumino, Busia district. The poultry project will be established and owned by Bisemiire Anthony, Nandera Patience, Naula Justine, Bwire Benjamin, and other investors. The farm will be managed and directed by Nandera Patience, who has five years of work experience and knowledge of business administration. We will sell poultry products, including eggs, chicken meat, feathers, and poultry manure, and we also offer consultancy services for poultry farming. Our eggs and chicken meat are hygienic, rich in protein, and of high quality. We produce, process, and package to meet the standards of the Uganda standards organization and international standards. The business shall comprise five (5) workers on the key management team, who will share roles and responsibilities across business functions such as marketing, finance, and other poultry farming activities. PESH Poultry seeks 30 million Ugandan shillings in long-term financing to cover start-up costs, equipment, building expenses, and working capital. Funding for the launch of the business will be provided primarily by equity from the investors. The business will reach positive cash flow in its first year of operation, allowing for the expected repayment of its loan obligations. Revenue will top UGX 11,750,000, and net income will reach about UGX 115,950,000 in the first year of operation. The payback period for the project is 2 years and 3 months. The farm plans to start with 1,000 layer birds, 1,000 broiler birds, and 20 workers in the first year of operation.
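
A minimal sketch of the payback period arithmetic; the annual cash flow figure is a hypothetical assumption chosen only to illustrate the formula, not a reconciliation of the abstract's financial projections.

```python
def payback_period(initial_investment, annual_net_cash_flow):
    """Simple (undiscounted) payback period in years."""
    return initial_investment / annual_net_cash_flow

# Hypothetical figures in Ugandan shillings, for illustration only.
investment = 30_000_000        # stated long-term financing sought
annual_cash_flow = 13_300_000  # assumed average annual net cash flow
years = payback_period(investment, annual_cash_flow)
print(f"{years:.2f} years")    # ~2.26 years, i.e. roughly 2 years and 3 months
```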

Keywords: chicken, pullets, turkey, ducks

Procedia PDF Downloads 72
1442 Transition towards a Market Society: Commodification of Public Health in India and Pakistan

Authors: Mayank Mishra

Abstract:

A market economy can be broadly defined as an economic system in which supply and demand regulate the economy and in which decisions about production, consumption, the allocation of resources, prices, and competition are made by the collective actions of individuals or organisations with limited government intervention. A market society, on the other hand, is one in which, instead of the economy being embedded in social relations, social relations are embedded in the economy. A market economy becomes a market society when land, labour, and capital are all commodified. This transition also affects people's attitudes and values, and it begins to impact non-material aspects of life such as public education and public health. The inception of neoliberal policies in non-market norms altered the nature of social goods like public health and raised the following questions. What impact would the transition to a market society have on people's access to public health? Is healthcare a commodity that can be subjected to a competitive marketplace? What kinds of private investments are being made in public health, and how do private investments alter the nature of a public good like healthcare? This research will employ an empirical-analytical approach, using deductive reasoning that takes the existing concepts of market economy and market society as the foundation for the analytical framework and the hypotheses to be examined. The research also adopts the naturalistic elements of qualitative methodology, which refers to studying real-world situations as they unfold, and it analyses the existing literature on the subject. Concomitantly, the research draws on primary sources, including reports from the World Bank, the World Health Organisation (WHO), and the relevant ministries of the respective countries. This paper endeavours to show how the commodification of public health leads to a continual increase in its inaccessibility and to a stratification of healthcare services in which better services are available according to one's ability to pay. Since the fundamental maxim of private investment is to generate profit, such trends have a detrimental effect on society at large, widening the gap between the haves and the have-nots. Increasing private investment, both domestic and foreign, in the public health sector is making public health services increasingly inaccessible, and despite the increase in various public health schemes, the quality and impact of government public health services are in continuous decline.

Keywords: commodity, India and Pakistan, market society, public health

Procedia PDF Downloads 294