Search results for: Solar water heating systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7034

374 A Framework for Improving Trade Contractors’ Productivity Tracking Methods

Authors: Sophia Hayes, Kenny L. Liang, Sahil Sharma, Austin Shema, Mahmoud Bader, Mohamed Elbarkouky

Abstract:

Despite being one of the most significant economic contributors of the country, Canada’s construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is highly collaborative, as a general contractor will hire trade contractors to perform most of a project’s work; this means low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done on four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline for comparing the trade contractors’ responses, determining gaps within their current tracking methods, and making improvement recommendations. 15 interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there seem to be gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods and, using a framework, recommendations were made by trade type on how to improve how they track productivity.

Keywords: Trade contractors’ productivity, productivity tracking, cost codes, benchmarking, job productivity measurement (JPM), WorkFace Planning (WFP).

373 A Mobile Multihop Relay Dynamic TDD Scheme for Cellular Networks

Authors: Jong-Moon Chung, Hyung-Weon Cho, Ki-Yong Jin, Min-Hee Cho

Abstract:

In this paper, we present an analytical framework for the evaluation of the uplink performance of multihop cellular networks based on dynamic time division duplex (TDD). New wireless broadband protocols, such as WiMAX, WiBro, and 3G-LTE, apply TDD, and mobile communication protocols under standardization (e.g., IEEE 802.16j) are investigating mobile multihop relay (MMR) as a future technology. In this paper a novel MMR TDD scheme is presented, where the dynamic range of the frame is shared between asymmetric traffic resources and multihop relaying. The mobile communication channel interference model comprises inner-cell and co-channel interference (CCI). The performance analysis focuses on the uplink because, compared to time division multiple access (TDMA) schemes, the effects of dynamic resource allocation show significant performance degradation due to CCI only in the uplink [1-3], while the downlink turns out to be the same or better. The analysis is based on the signal to interference power ratio (SIR) outage probability of dynamic TDD (D-TDD) and TDMA systems, which are the most widespread mobile communication multi-user control techniques. This paper presents the uplink SIR outage probability with multihop results and shows that the dynamic TDD scheme applying MMR can provide a performance improvement compared to single-hop applications if executed properly.
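As a rough illustration of the outage metric used in this analysis, the sketch below estimates the probability that the SIR of a Rayleigh-faded uplink falls below a protection threshold in the presence of co-channel interferers. It is a minimal Monte Carlo sketch, not the authors' analytical model; the unit-mean fading powers, the number of interferers and the 3 dB threshold are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_outage_probability(n_interferers, sir_threshold_db, n_trials=100_000):
    """Estimate P(SIR < threshold) for a Rayleigh-faded desired link
    disturbed by independent, equal-power Rayleigh-faded interferers."""
    # Exponential received power (= squared Rayleigh amplitude), unit mean
    desired = rng.exponential(1.0, n_trials)
    interference = rng.exponential(1.0, (n_trials, n_interferers)).sum(axis=1)
    sir_db = 10.0 * np.log10(desired / interference)
    return float(np.mean(sir_db < sir_threshold_db))

if __name__ == "__main__":
    for k in (1, 3, 6):  # number of active co-channel interferers
        print(k, sir_outage_probability(k, sir_threshold_db=3.0))
```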

Keywords: Co-Channel Interference, Dynamic TDD, Mobile Multihop Relay, Cellular Network, Time Division Multiple Access.

372 Vitamin D Deficiency and Insufficiency in Postmenopausal Women with Obesity

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk, Oksana Ivanyk

Abstract:

Deficiency and insufficiency of vitamin D are a pandemic of the 21st century. Obese patients have a lower level of vitamin D, but the literature data are contradictory. The purpose of this study is to investigate vitamin D deficiency and insufficiency in postmenopausal women with obesity. We examined 1007 women aged 50-89 years. Mean age was 65.74±8.61 years; mean height was 1.61±0.07 m; mean weight was 70.65±13.50 kg; mean body mass index was 27.27±4.86 kg/m2, and mean serum 25(OH)D level was 26.00±12.00 nmol/l. The women were divided into the following six groups depending on body mass index: group I – 338 women with normal body weight, group II – 16 women with insufficient body weight, group III – 382 women with excessive body weight, group IV – 199 women with obesity of class I, group V – 60 women with obesity of class II, and group VI – 12 women with obesity of class III. The serum 25(OH)D level was measured by an electrochemiluminescent method on an Elecsys 2010 analyzer (Roche Diagnostics, Germany) with cobas test systems. 34.4% of the examined women had deficiency of vitamin D and 31.4% insufficiency. Women with obesity of class I (23.60±10.24 ng/ml) and obesity of class II (22.38±10.34 ng/ml) had significantly lower levels of 25(OH)D compared to women with normal body weight (28.24±12.99 ng/ml), p=0.00003. In women with obesity, BMI significantly influences vitamin D level, and this influence does not depend on the season.

Keywords: Obesity, body mass index, vitamin D deficiency/insufficiency, postmenopausal women, age.

371 A Framework for University Social Responsibility and Sustainability: The Case of South Valley University, Egypt

Authors: Alaa Tag Eldin Mohamed

Abstract:

The environmental, cultural, social, and technological changes have led higher education institutes to question their traditional roles. Many declarations and frameworks highlight the importance of fulfilling the social responsibility of higher education institutes. The study aims at developing a framework of university social responsibility and sustainability (USR&S) with a focus on South Valley University (SVU) as a case study of Egyptian universities. The study used meetings with 12 vice deans of community services and environmental affairs on social responsibility and environmental issues. The proposed framework integrates social responsibility with strategic management through the establishment and maintenance of the vision, mission, values, goals and management systems; elaboration of policies; provision of actions; evaluation of services and development of social collaboration with stakeholders to meet current and future needs of the community and environment. The framework links different stakeholders internally and externally using communication and reporting tools. The results show that SVU integrates social responsibility and sustainability in its strategic plans; it has policies and actions, however, they are fragmented and lack appropriate structure and budgeting. The proposed framework could be valuable for researchers and decision makers of the Egyptian universities. The study proposes recommendations and highlights the need to build on the results and conduct future research.

Keywords: Corporate social responsibility (CSR), South Valley University, Sustainable University, university social responsibility and sustainability (USR&S).

370 Virtual Conciliation in Colombia: Evaluation of Maturity Level within the Framework of E-Government

Authors: Jenny Paola Forero Pachón, Sonia Cristina Gamboa Sarmiento, Luis Carlos Gómez Flórez

Abstract:

The Colombian government has defined an e-government strategy to take advantage of Information Technologies (IT) in order to contribute to building a more efficient, transparent and participative State that provides better services to citizens and businesses. In this regard, the justice sector is one of the government sectors where IT has generated the most expectation, considering that the country has a backlog of judicial processes. This situation has led to the search for alternative forms of access to justice that speed up the process while keeping costs low for citizens. To this end, the Colombian government has authorized the use of Alternative Dispute Resolution (ADR) methods, a remedy through which disputes can be resolved more quickly than through judicial processes while facilitating greater communication between the parties, without recourse to judicial authority. One of these methods is conciliation, which includes a special modality, known as virtual conciliation, that takes advantage of IT for its development. In this modality, conciliation is supported by information systems, applications or platforms, and communications are carried out through them. This paper evaluates the maturity level of the virtual conciliation service within the framework of this strategy. The evaluation is carried out using Shahkooh's five-phase model for e-government. As a result, it is evident that, in the context of conciliation, maturity does not reach the level in the model necessary for it to be considered virtual conciliation; therefore, it is necessary to define strategies to maximize the potential of IT in this context.

Keywords: Alternative dispute resolution, e-government, evaluation of maturity, Shahkooh model, virtual conciliation.

369 Nuclear Medical Image Treatment System Based On FPGA in Real Time

Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah

Abstract:

We present in this paper an acquisition and treatment system designed for a semi-analog gamma camera. It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition, the treatment of the signals resulting from the gamma camera detection head (DH) and the scintigraphic image construction in real time. The chain interfaces with semi-analog cameras from Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example. The previous version of the chain was composed of an analog treatment board and a digital treatment board designed around a DSP [2]. In this paper we present the architecture of a new version of our IATD chain in which the digital treatment algorithms, whose performance and flexibility we have improved, are integrated and executed on a specific reprogrammable FPGA (Field Programmable Gate Array) circuit.

Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.

368 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern

Authors: Rupesh K. Gopal, Saroj K. Meher

Abstract:

In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is used on the data in the test period to estimate the parameters of the calling behaviour, and these parameters are compared against the thresholds. Any deviation beyond the threshold is used to raise an alarm. This method has the advantage of identifying local anomalies as compared to techniques which identify global anomalies. The method is tested on 90 days of study data and 10 days of test data of telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than 1% false alarm rate.
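A minimal sketch of this study-period/test-period scheme is given below. The Poisson usage model, the mean-plus-k-sigma threshold rule and all variable names are assumptions made for illustration; the abstract does not specify the exact distribution or threshold calculation.

```python
import numpy as np

def fit_rate(weekly_call_counts):
    """MLE of a Poisson calling rate is simply the sample mean."""
    return float(np.mean(weekly_call_counts))

def group_threshold(study_rates, k=3.0):
    """Acceptable change within a group: mean rate plus k standard deviations
    of the rates observed across the group's customers in the study period."""
    return float(np.mean(study_rates) + k * np.std(study_rates))

def detect(study_period, test_period, k=3.0):
    """study_period / test_period: {customer_id: list of weekly call counts}."""
    study_rates = {c: fit_rate(x) for c, x in study_period.items()}
    threshold = group_threshold(list(study_rates.values()), k)
    # Raise an alarm for every customer whose test-period rate exceeds the threshold
    return [c for c, counts in test_period.items() if fit_rate(counts) > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    study = {f"c{i}": rng.poisson(5, 12).tolist() for i in range(50)}
    test = {f"c{i}": rng.poisson(5, 2).tolist() for i in range(50)}
    test["c7"] = [40, 55]          # injected anomalous usage
    print(detect(study, test))     # expected to flag c7
```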

Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.

367 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool developed from the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive color Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling with binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
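The following sketch illustrates the general idea of SVM-based bit detection for BPSK over a Rayleigh-faded AWGN channel and a BER estimate. It uses scikit-learn's standard SVC rather than the least-squares SVM variant studied in the paper, and the feature choice, SNR definition and signal model are simplifying assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_bpsk_rayleigh_awgn(n_bits, snr_db):
    """Received samples r = h*s + n with Rayleigh fading h and real AWGN n."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                      # BPSK mapping {0,1} -> {-1,+1}
    h = rng.rayleigh(scale=1 / np.sqrt(2), size=n_bits)
    noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
    r = h * symbols + rng.normal(0, noise_std, n_bits)
    # Feature vector per bit: received sample and (assumed known) fading gain
    return np.column_stack([r, h]), bits

X_train, y_train = simulate_bpsk_rayleigh_awgn(2_000, snr_db=5)
X_test, y_test = simulate_bpsk_rayleigh_awgn(20_000, snr_db=5)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
ber = np.mean(clf.predict(X_test) != y_test)
print(f"SVM detector BER at 5 dB SNR: {ber:.4f}")
```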

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.

366 MHD Chemically Reacting Viscous Fluid Flow towards a Vertical Surface with Slip and Convective Boundary Conditions

Authors: Ibrahim Yakubu Seini, Oluwole Daniel Makinde

Abstract:

MHD chemically reacting viscous fluid flow towards a vertical surface with slip and convective boundary conditions has been investigated. The temperature, the chemical species concentration of the surface and the velocity of the external flow are assumed to vary linearly with the distance from the vertical surface. The governing differential equations are modeled and transformed into systems of ordinary differential equations, which are then solved numerically by a shooting method. The effects of various parameters on the heat and mass transfer characteristics are discussed. Graphical results are presented for the velocity, temperature, and concentration profiles, whilst the skin-friction coefficient and the rates of heat and mass transfer near the surface are presented in tables and discussed. The results revealed that increasing the strength of the magnetic field increases the skin-friction coefficient and the rates of heat and mass transfer toward the surface. The velocity profiles are increased towards the surface due to the presence of the Lorentz force, which attracts the fluid particles near the surface. The rate of chemical reaction is seen to decrease the concentration boundary layer near the surface due to the destructive chemical reaction occurring there.
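The shooting technique mentioned above converts a boundary value problem into an initial value problem whose unknown initial slope is tuned until the far-field condition is met. The sketch below demonstrates this only on the classical Blasius boundary-layer equation, not on the paper's full MHD system with magnetic, slip and convective terms; the integration domain and root bracket are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    # Blasius equation f''' + 0.5*f*f'' = 0 written as a first-order system
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Integrate with a guessed wall curvature f''(0) and return the mismatch
    with the far-field boundary condition f'(eta_max) -> 1."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Find the initial curvature that satisfies the outer boundary condition
fpp0 = brentq(shoot, 0.1, 1.0)
print(f"f''(0) = {fpp0:.4f}")   # classical Blasius value, roughly 0.3321
```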

Keywords: Boundary layer, surface slip, MHD flow, chemical reaction, heat transfer, mass transfer.

365 The Effects of Drought and Nitrogen on Soybean (Glycine max (L.) Merrill) Physiology and Yield

Authors: Oqba Basal, András Szabó

Abstract:

Legume crops are able to fix atmospheric nitrogen through a symbiotic relation with specific bacteria, which allows the mineral nitrogen-fertilizer to be reduced, or even excluded, resulting in more profit for the farmers and less pollution for the environment. Soybean (Glycine max (L.) Merrill) is one of the most important legumes with its high content of both protein and oil. However, it is recommended to combine the two nitrogen sources under stress conditions in order to overcome its negative effects. Drought stress is one of the most important abiotic stresses that increasingly limits soybean yields. A precise rate of mineral nitrogen under drought conditions is not confirmed, as it depends on many factors; soybean yield-potential and soil-nitrogen content to name a few. An experiment was conducted during the 2017 growing season in Debrecen, Hungary, to investigate the effects of nitrogen source on the physiology and the yield of the soybean cultivar 'Boglár'. Three N-fertilizer rates including no N-fertilizer (0 N), 35 kg ha-1 of N-fertilizer (35 N) and 105 kg ha-1 of N-fertilizer (105 N) were applied under three different irrigation regimes: severe drought stress (SD), moderate drought stress (MD) and control with no drought stress (ND). Half of the seeds in each treatment were pre-inoculated with Bradyrhizobium japonicum inoculant. The overall results showed significant differences associated with fertilization and irrigation, but not with inoculation. Increasing the N rate was mostly accompanied by increased chlorophyll content and leaf area index, whereas it positively affected the plant height only in the absence of drought. Plant height was the lowest under severe drought, regardless of inoculation and N-fertilizer application and rate. Inoculation increased the yield when there was no drought, and a low rate of N-fertilizer increased the yield further; however, the high rate of N-fertilizer decreased the yield to a level even less than the inoculated control. On the other hand, the yield of non-inoculated plants increased as the N-fertilizer rate increased. Under drought conditions, adding N-fertilizer increased the yield of the non-inoculated plants compared to their inoculated counterparts; moreover, the high rate of N-fertilizer resulted in the best yield. Regardless of inoculation, the mean yield of the three fertilization rates was better when the water amount increased. It was concluded that applying N-fertilizer to provide the nitrogen needed by soybean plants, in the absence of the N2-fixation process, is very important. Moreover, adding a relatively high rate of N-fertilizer is very important under severe drought stress to alleviate the negative effects of drought. Further research to recommend the best N-fertilizer rate for inoculated soybean under drought stress conditions should be conducted.

Keywords: Drought stress, inoculation, N-fertilizer, soybean physiology, yield.

364 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data, such as books and articles held in libraries for centuries, has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstraction, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (corpus), allowing a user to find the ones relevant for him, that is to say, the content which matches the information needs of the user. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "reverse index". This inverted file collects information on all terms over the corpus documents, specifying the identifiers of documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to make a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption using a large volume of data.
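For reference, the inverted-file baseline described above can be sketched in a few lines. The structure below is a toy sketch that records, per term, the document identifiers, term frequencies and positions; the db4o side of the comparison (a Java/.NET object database) is not reproduced here, and the example documents are invented.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted file: term -> {doc_id: [positions]}."""
    def __init__(self):
        self.postings = defaultdict(lambda: defaultdict(list))

    def add(self, doc_id, text):
        for pos, term in enumerate(text.lower().split()):
            self.postings[term][doc_id].append(pos)

    def search(self, term):
        docs = self.postings.get(term.lower(), {})
        # Return (doc_id, term frequency, positions) for every matching document
        return [(d, len(p), p) for d, p in docs.items()]

if __name__ == "__main__":
    index = InvertedIndex()
    index.add(1, "solar water heating systems for domestic use")
    index.add(2, "water quality in heating systems")
    print(index.search("heating"))   # hits in both documents with positions
```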

Keywords: Information Retrieval, indexation, oriented object database (db4o), inverted file.

363 Development of a Software about Calculating the Production Parameters in Knitted Garment Plants

Authors: Ender Bulgun, Arzu Vuruskan

Abstract:

Apparel product development is an important stage in the life cycle of a product. Shortening this stage will help to reduce the costs of a garment. The aim of this study is to examine the production parameters in knitwear apparel companies by defining the unit costs, and to develop software that calculates the unit costs of garments and makes cost estimates. In this study, with the help of a questionnaire, the unit cost estimating and cost calculating systems of different companies were analyzed. Within the scope of the questionnaire, the importance of the cost estimating process for apparel companies and the expectations from a new cost estimating program were investigated. According to the results of the questionnaire, the majority of companies which participated in the questionnaire use manual cost calculating methods or simple Microsoft Excel spreadsheets to make cost estimates. Furthermore, it was discovered that many companies have difficulties archiving cost data for future use; as a solution to that problem, prior to making a cost estimate, the sub-units of garment cost, which are the fabric, accessory and labor costs, should be analyzed and added to the database of the programme beforehand. Another specification of the cost estimating unit prepared in this study is that the programme was designed to consist of two main units, one of which handles the product specification and the other the cost calculation. The programme is prepared as a web-based application so that the supplier, the manufacturer and the customer have the opportunity to communicate through the same platform.
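A minimal sketch of the kind of unit-cost calculation such a programme performs is shown below. The field names, overhead and profit rates and example figures are all hypothetical; the abstract only states that fabric, accessory and labor costs feed the calculation.

```python
from dataclasses import dataclass

@dataclass
class GarmentCost:
    """Hypothetical sub-units of a knitted garment's unit cost (per piece)."""
    fabric: float                 # fabric consumption (kg) * fabric price
    accessories: float            # buttons, labels, packaging, ...
    labour: float                 # standard minutes * minute cost
    overhead_rate: float = 0.15   # assumed factory overhead fraction
    profit_rate: float = 0.10     # assumed profit margin

    def unit_cost(self):
        base = self.fabric + self.accessories + self.labour
        return base * (1 + self.overhead_rate)

    def quoted_price(self):
        return self.unit_cost() * (1 + self.profit_rate)

if __name__ == "__main__":
    tshirt = GarmentCost(fabric=0.25 * 8.0, accessories=0.30, labour=6.5 * 0.12)
    print(f"unit cost: {tshirt.unit_cost():.2f}, quote: {tshirt.quoted_price():.2f}")
```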

Keywords: Apparel, cost estimating, design archive.

362 Facility Location Selection using Preference Programming

Authors: C. Ardil

Abstract:

This paper presents a preference programming technique based multiple criteria decision making analysis for selecting a facility location for a new organization or for the expansion of an existing facility, which is of vital importance for a decision support system and the strategic planning process. The implementation of decision support systems is considered crucial to sustain competitive advantage and profitability persistence in a turbulent environment. As effective strategic management and decision making are necessary, multiple criteria decision making analysis supports the decision makers in formulating and implementing the right strategy. The investment cost associated with acquiring the property and constructing the facility makes facility location selection a long-term strategic investment decision, which rationalizes seeking the best location, resulting in higher economic benefits through increased productivity and an optimal distribution network. Selecting the proper facility location from a given set of alternatives is a difficult task, as many potential qualitative and quantitative conflicting criteria are to be considered. This paper solves a facility location selection problem using preference programming, which is an effective multiple criteria decision making analysis tool applied to deal with complex decision problems in the operational research environment. The ranking results of preference programming are compared with the WSM, TOPSIS and VIKOR methods.
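To make the comparison concrete, the sketch below ranks candidate locations with the Weighted Sum Model (WSM), one of the benchmark methods named above. It is not the preference programming technique itself, which derives weights from preference statements; the criteria, scores and weights are invented for illustration.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate locations, columns = criteria
# (e.g. land cost, labour availability, proximity to market, infrastructure),
# already normalised so that larger is better.
scores = np.array([
    [0.60, 0.80, 0.70, 0.90],   # location A
    [0.85, 0.60, 0.65, 0.70],   # location B
    [0.70, 0.75, 0.90, 0.60],   # location C
])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # assumed criteria weights

# Weighted Sum Model: overall score = weighted sum of normalised criterion scores
wsm = scores @ weights
ranking = np.argsort(-wsm)
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. location {'ABC'[idx]}  score={wsm[idx]:.3f}")
```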

Keywords: Facility Location Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, Preference Programming, Location Selection, WSM, TOPSIS, VIKOR

361 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes the hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional discrete cosine transform blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented in MATLAB code, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
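The row-column decomposition of the 2D DCT referenced above (two 1D DCT passes with an intermediate transposition) can be mimicked in software as follows. This is only a numerical sketch of the transform and a crude coefficient-truncation step; the paper's actual contribution, the hardware sharing of common computations, cannot be captured at this level.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """8x8 2D DCT computed as two passes of 1D DCTs (columns, then rows),
    mirroring the row-column / transposition-memory structure in hardware."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)

coeffs = dct2(block)
# Crude "compression": keep only the top-left 4x4 low-frequency coefficients
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1.0
reconstructed = idct2(coeffs * mask)
print("max reconstruction error:", np.abs(block - reconstructed).max())
```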

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

360 Approximation of PE-MOCVD to ALD for TiN Concerning Resistivity and Chemical Composition

Authors: D. Geringswald, B. Hintze

Abstract:

The miniaturization of circuits is advancing. During chip manufacturing, structures are filled, for example, by metal organic chemical vapor deposition (MOCVD). Since this process reaches its limits in the case of very high aspect ratios, the use of alternatives such as atomic layer deposition (ALD) is possible, requiring the extension of existing coating systems. However, it is an unsolved question to what extent MOCVD can achieve results similar to an ALD process. In this context, this work addresses the characterization of a metal organic vapor deposition of titanium nitride. Based on the current state of the art, the film properties coating thickness, sheet resistance, resistivity, stress and chemical composition are considered. The setting parameters used are temperature, plasma gas ratio, plasma power, plasma treatment time, deposition time, deposition pressure, number of cycles and TDMAT flow. The derived process instructions for unstructured wafers and inside a structure with a high aspect ratio include lowering the process temperature and increasing the number of cycles, the deposition and plasma treatment times, as well as the plasma gas ratio of hydrogen to nitrogen (H2:N2). In contrast to the current process configuration, the deposited titanium nitride (TiN) layer is more uniform inside the entire test structure. Consequently, this paper provides approaches to employing MOCVD for structures with increasing aspect ratios.

Keywords: ALD, high aspect ratio, PE-MOCVD, TiN.

359 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra by nearest neighbor algorithms and distance-based searching methods. Searching for nearest neighbors in the spectral space is computationally expensive: the complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirement of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the utilization of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to use the high-dimensional space but can be based on the mapped space. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.
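The idea of mapping spectra to two dimensions and running the neighbor search in the mapped space can be sketched as follows. PCA stands in here as the simplest 2D projection; the paper's own mapping and its Delaunay-tessellation-based prediction are not reproduced, and the synthetic spectra are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic "spectral database": 500 spectra sampled at 200 wavelengths
base = np.sin(np.linspace(0, 6, 200))
spectra = base + 0.05 * rng.standard_normal((500, 200))

# Two-dimensional mapping used both for indexing and for visualisation
mapper = PCA(n_components=2).fit(spectra)
coords_2d = mapper.transform(spectra)

# k-nn search performed in the mapped 2D space instead of the raw 200-D space
index = NearestNeighbors(n_neighbors=5).fit(coords_2d)
query = mapper.transform(spectra[:1])
distances, neighbour_ids = index.kneighbors(query)
print(neighbour_ids)
```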

Keywords: indexing high dimensional databases, dimensional reduction, clustering, similarity, k-nn algorithm.

358 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network which has complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and stated various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations for the convergence of error on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
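As a small illustration of the kind of unit discussed above, the sketch below shows the forward pass of a Gaussian RBF unit whose distance criterion is taken on complex-valued inputs. The centres, widths and output weights are made up, and the Complex-BP training procedure of the paper is not reproduced.

```python
import numpy as np

def complex_rbf(z, centre, sigma=1.0):
    """Gaussian radial basis unit acting on complex inputs: the distance
    criterion is the modulus of the complex difference to the centre."""
    return np.exp(-np.abs(z - centre) ** 2 / (2.0 * sigma ** 2))

# Tiny illustration: two complex-valued RBF hidden units and a linear output
centres = np.array([1.0 + 1.0j, -1.0 - 0.5j])
output_weights = np.array([0.8, -0.3])

def forward(z):
    hidden = complex_rbf(z, centres)       # broadcasts over the two centres
    return float(hidden @ output_weights)  # real-valued network response

print(forward(0.9 + 1.1j))   # close to the first centre, so dominated by w[0]
```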

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

357 Biosensor Design through Molecular Dynamics Simulation

Authors: Wenjun Zhang, Yunqing Du, Steven W. Cranford, Ming L. Wang

Abstract:

The beginning of the 21st century has witnessed new advancements in the design and use of new materials for biosensing applications, from nano to macro, protein to tissue. Traditional analytical methods lack a complete toolset to describe the complexities introduced by living systems, pathological relations, discrete hierarchical materials, cross-phase interactions, and structure-property dependencies. Materiomics, via systematic molecular dynamics (MD) simulation, can provide structure-process-property relations by using a materials science approach linking mechanisms across scales, and enables oriented biosensor design. With this approach, DNA biosensors can be utilized to detect disease biomarkers present in individuals' breath, such as acetone for diabetes. Our wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) has successfully detected trace amounts of various chemicals in vapor, differentiated by pattern recognition. Here, we present how MD simulation can revolutionize the design and screening of DNA aptamers for targeting biomarkers related to oral diseases and oral health monitoring. It demonstrates great potential to be utilized to build a library of DNA sequences for reliable detection of several biomarkers of one specific disease, and also provides a new methodology for creating, designing, and applying biosensors.

Keywords: Biosensor, design, DNA, molecular dynamics simulation.

356 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating the classes of "digital haves and have-nots". While the 1st-level divide (the difference in opportunities to access digital networks) has been demonstrated to diminish with time, the issues of the 2nd-level divide (the difference in skills and usage of digital systems) and the 3rd-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the 2nd-level digital divide and can also inform practical decision-making regarding the strategies of national and regional digital development.
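A composite index of this kind is, at its core, a weighted aggregation of subindex scores, with the supply-demand gap readable as one signal of a second-level divide. The sketch below is purely illustrative: the dimensions, the scores for three anonymous regions and the equal weighting are assumptions, not the published Digital Life Index.

```python
import numpy as np

# Hypothetical subindex scores (0-100) for three regions, split into a
# "supply" side and a "demand" (usage) side of a few dimensions.
supply = np.array([
    [72, 65, 80],   # e.g. e-government, e-commerce, connectivity supply
    [55, 60, 58],
    [40, 35, 45],
])
demand = np.array([
    [68, 70, 75],   # corresponding usage/demand subindices
    [62, 64, 60],
    [30, 42, 38],
])

def composite(scores, weights=None):
    """Equal-weight (or custom-weight) average of subindices per region."""
    w = np.full(scores.shape[1], 1 / scores.shape[1]) if weights is None else weights
    return scores @ w

supply_index = composite(supply)
demand_index = composite(demand)
# A large supply-demand gap is one way to read a second-level divide signal
print(np.round(supply_index - demand_index, 1))
```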

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

355 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN to make an effective prediction using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allow the estimation of missing variables. Experimental results showed that the BBN performs compelling predictions with samples containing uncertainties compared to perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is the current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire searching. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence.

354 Comparing the Behaviour of the FRP and Steel Reinforced Shear Walls under Cyclic Seismic Loading in Aspect of the Energy Dissipation

Authors: H. Rahman, T. Donchev, D. Petkova

Abstract:

Earthquakes claim thousands of lives around the world annually due to the inadequate design of lateral load resisting systems, particularly shear walls. Additionally, corrosion of the steel reinforcement in concrete structures is one of the main challenges in the construction industry. Fibre Reinforced Polymer (FRP) reinforcement can be used as an alternative to traditional steel reinforcement. FRP has several mechanical advantages over steel, such as high resistance to corrosion, high tensile strength and light self-weight; additionally, it has electromagnetic neutrality, which is advantageous in structures where this is important, such as hospitals, some laboratories and telecommunication facilities. This paper reports the results of experimental research incorporating the testing of two medium-scale concrete shear wall samples: one reinforced with Basalt FRP (BFRP) bars and one reinforced with steel bars as a control sample. The samples were tested under quasi-static cyclic loading following the modified ATC-24 protocol for standard seismic loading. The results of both samples are compared to allow a judgement about the performance of BFRP-reinforced against steel-reinforced concrete shear walls. The results show promising momentum toward the utilisation of BFRP as an alternative to traditional steel reinforcement, with the aim of improving durability with suitable energy dissipation in reinforced concrete shear walls.

Keywords: Shear walls, internal FRP reinforcement, cyclic loading, energy dissipation and seismic behaviour.

353 Energy Conscious Builder Design Pattern with C# and Intermediate Language

Authors: Kayun Chantarasathaporn, Chonawat Srisa-an

Abstract:

Design Patterns have gained more and more acceptance since their emergence in the software development world last decade and have become another de facto standard of essential knowledge for Object-Oriented Programming developers nowadays. Their target usage, from the beginning, was regular computers, so minimizing power consumption had never been a concern. However, in this decade, demand for more complicated software running on mobile devices has grown rapidly as much higher performance portable gadgets have been supplied to the market continuously. To keep up with time to market, which is a business reason, software development for power-conscious, battery-powered devices has shifted from using specific low-level languages to higher-level ones. Currently, complicated software running on mobile devices is often developed in high-level languages that support OOP concepts. This has caused the trend of bringing Design Patterns to the mobile world. However, using Design Patterns directly in software development for power-conscious systems is not recommended because they were not originally designed for such an environment. This paper demonstrates an adapted Design Pattern for power-limited systems. Because there are numerous original design patterns, it is not possible to cover them all at once, so this paper focuses only on creating an Energy Conscious version of the existing regular "Builder Pattern" appropriate for developing low-power-consumption software.

Keywords: Design Patterns, Builder Pattern, Low Power Consumption, Object Oriented Programming, Power Conscious System, Software.

352 A Case Study on the Value of Corporate Social Responsibility Systems

Authors: José M. Brotons, Manuel E. Sansalvador

Abstract:

The relationship between Corporate Social Responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relation. The tool quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two scenarios have been considered: in the first, the company has implemented the IQNet SR10 certification, and in the second, it has not. For the first scenario, the growth rate used for the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the company's growth rates prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as with the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the annual estimated sales. For the remaining elements of this account, their regression with net sales has been considered. The difference between these two valuations, made in a fuzzy environment, gives the value of the IQNet SR10 certification. Although this study presents an innovative methodology to quantify the relation between CSR and FP, the authors are aware that only one company has been analyzed. This is precisely the main limitation of this study, which in turn opens up an interesting line for future research: to broaden the sample of companies.
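The following sketch illustrates the arithmetic behind a fuzzy discounted cash flow comparison of the two scenarios using triangular fuzzy numbers. The cash-flow figures, the discount rate and the three-year horizon are invented for illustration; in the paper the growth rates are derived from the company's history and the sector's evolution.

```python
def discount_tfn(cash_flow, rate, year):
    """Discount a triangular fuzzy cash flow (low, mode, high) at a crisp rate."""
    return tuple(c / (1 + rate) ** year for c in cash_flow)

def fuzzy_npv(cash_flows, rate):
    """Component-wise sum of the discounted triangular fuzzy cash flows."""
    discounted = [discount_tfn(cf, rate, t) for t, cf in enumerate(cash_flows, start=1)]
    return tuple(sum(d[i] for d in discounted) for i in range(3))

def fuzzy_subtract(a, b):
    """Triangular fuzzy subtraction: (a1 - b3, a2 - b2, a3 - b1)."""
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

# Hypothetical three-year forecasts (low, mode, high) for the two scenarios
with_certification    = [(95, 100, 110), (105, 115, 128), (118, 130, 146)]
without_certification = [(90,  95, 102), ( 92, 100, 110), ( 95, 105, 118)]

rate = 0.08   # assumed crisp discount rate
value_of_certification = fuzzy_subtract(fuzzy_npv(with_certification, rate),
                                        fuzzy_npv(without_certification, rate))
print(tuple(round(v, 1) for v in value_of_certification))
```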

Keywords: Corporate social responsibility, case study, financial performance, company valuation.

351 A Case Study of Clinicians’ Perceptions of Enterprise Content Management at Tygerberg Hospital

Authors: Temitope O. Tokosi

Abstract:

Healthcare is a human right. The sensitivity of health issues has necessitated the introduction of Enterprise Content Management (ECM) at district hospitals in the Western Cape Province of South Africa. The objective is to understand clinicians' perception of ECM at their workplace. It is a descriptive case study design within a constructivist paradigm. It employed a phenomenological data analysis method using a pattern-matching, deduction-based analytical procedure. Purposive and snowball sampling techniques were applied in selecting participants. Clinicians expressed concerns and frustrations in using ECM, such as: non-integration with other hospital systems; inadequate access points to ECM; incorrect labelling of notes and bar-coding, which wastes time in finding information; system features and/or functions (such as search and edit) that are not possible; hospital management and clinicians not constantly interacting and discussing; and unacceptably lengthy information turnaround time. Resolving these problems would involve a positive working relationship between hospital management and clinicians. In addition, prioritising the problems faced by clinicians by relevance can ensure problem-solving that meets clinicians' expectations and the hospital's objectives. Clinicians' perception should invoke attention from hospital management with regard to technology use. The study's results can be generalised across clinician groupings exposed to ECM at various district hospitals because of professional and hospital homogeneity.

Keywords: Clinician, electronic content management, hospital, perception, technology.

350 A Distributed Cognition Framework to Compare E-Commerce Websites Using Data Envelopment Analysis

Authors: C. lo Storto

Abstract:

This paper presents an approach based on the adoption of a distributed cognition framework and a non-parametric multi-criteria evaluation methodology (DEA) designed specifically to compare e-commerce websites from the consumer/user viewpoint. In particular, the framework considers a website's relative efficiency as a measure of its quality and usability. A website is modelled as a black box capable of providing the consumer/user with a set of functionalities. When the consumer/user interacts with the website to perform a task, he/she is involved in a cognitive activity, sustaining a cognitive cost to search, interpret and process information, and experiencing a sense of satisfaction. The degree of ambiguity and uncertainty he/she perceives and the needed search time determine the effort size, and hence the cognitive cost, he/she has to sustain to perform his/her task. On the contrary, performing the task and achieving the result induce a sense of gratification, satisfaction and usefulness. In total, 9 variables are measured, classified into a set of 3 website macro-dimensions (user experience, site navigability and structure). The framework is implemented to compare 40 websites of businesses performing electronic commerce in the information technology market. A questionnaire to collect subjective judgements for the websites in the sample was purposely designed and administered to 85 university students enrolled in computer science and information systems engineering undergraduate courses.
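The relative efficiency measure mentioned above can be illustrated with the standard input-oriented CCR multiplier model of DEA, solved as a linear program. The sketch below uses four made-up websites with two "cognitive cost" inputs and two "satisfaction" outputs; the nine variables and the real sample of 40 websites from the study are not available here.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR (multiplier form) efficiency of one DMU.
    inputs: (n_units, m) array, outputs: (n_units, s) array."""
    n, m = inputs.shape
    s = outputs.shape[1]
    # Decision variables z = [u (output weights), v (input weights)]
    c = np.concatenate([-outputs[unit], np.zeros(m)])          # maximise u.y_o
    A_eq = np.concatenate([np.zeros(s), inputs[unit]])[None]    # v.x_o = 1
    A_ub = np.hstack([outputs, -inputs])                        # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(1e-6, None))
    return -res.fun

if __name__ == "__main__":
    # Hypothetical websites: inputs = cognitive cost proxies (search time,
    # perceived ambiguity), outputs = satisfaction / usefulness scores.
    x = np.array([[3.0, 2.0], [2.0, 1.5], [4.0, 3.0], [2.5, 2.5]])
    y = np.array([[7.0, 6.5], [6.0, 7.0], [6.5, 5.0], [8.0, 7.5]])
    for k in range(len(x)):
        print(f"website {k}: efficiency = {ccr_efficiency(x, y, k):.3f}")
```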

Keywords: Website, e-commerce, DEA, distributed cognition, evaluation, comparison.

349 Choice Experiment Approach on Evaluation of Non-Market Farming System Outputs: First Results from Lithuanian Case Study

Authors: A. Novikova, L. Rocchi, G. Startiene

Abstract:

Market and non-market outputs are produced jointly in agriculture. Their supply depends on the intensity and type of production. The role of agriculture as an economic activity and its effects are important for the Lithuanian case study, as agricultural land covers more than half of the country. Positive and negative externalities created in agriculture are not accounted for by the market. Therefore, specific techniques such as stated preference methods, in particular choice experiments (CE), are used for the evaluation of non-market outputs in agriculture. The main aim of this paper is to present the construction of the research path for the evaluation of non-market farming system outputs in Lithuania. Conventional and organic farming, covering crop (including both cereal and industrial crops) and livestock (including dairy and cattle) production, have been selected. The CE method and the nested logit (NL) model were selected as appropriate for the evaluation of non-market outputs of different farming systems in Lithuania. A pilot survey was implemented between October and November 2018 in order to test and improve the CE questionnaire. The results of the survey showed that the questionnaire is accepted and well understood by the respondents. The econometric modelling showed that the selected NL model could be used for the main survey. Residents' understanding of the differences between organic and conventional farming was identified: it was revealed that they are more willing to choose organic farming in comparison to conventional farming.

Keywords: Choice experiments, farming system, Lithuania, market outputs, non-market outputs.

348 Evolution of Web Development Techniques in Modern Technology

Authors: Abdul Basit Kiani, Maryam Kiani

Abstract:

The art of web development in new technologies is a dynamic journey, shaped by the constant evolution of tools and platforms. With the emergence of JavaScript frameworks and APIs, web developers are empowered to craft web applications that are not only robust but also highly interactive. The aim is to provide an overview of the developments in the field. The integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.

Keywords: Web development, software testing, progressive web apps, web and mobile native application.

347 Relationship between Personality Traits and Postural Stability among Czech Military Combat Troops

Authors: K. Rusnakova, D. Gerych, M. Stehlik

Abstract:

Postural stability is a complex process involving the actions of biomechanical, motor, sensory and central nervous system components. The numerous joint systems and muscles involved, and the complexity of sporting movements and situations, require perfect coordination of the body's movement patterns. To adapt to a constantly changing situation in such a dynamic environment as physical performance, optimal input of information from visual, vestibular and somatosensory sensors is needed. Combat soldiers are required to perform physically and mentally demanding tasks in adverse conditions, and poor postural stability has been identified as a risk factor for lower extremity musculoskeletal injury. The aim of this study is to investigate whether some personality traits are related to the performance of static postural stability among soldiers of combat troops. The NEO personality inventory (NEO-PI-R) was used to identify personality traits, and the Nintendo Wii Balance Board was used to assess the static postural stability of soldiers. Postural stability performance was assessed by changes in the center of pressure (CoP) and the center of gravity (CoG). A posturographic test was performed for 60 s with eyes open during quiet upright standing. The results showed that facets of the neuroticism and conscientiousness personality traits were significantly correlated with the measured parameters of CoP and CoG. This study can help to better understand the relationship between personality traits and static postural stability. The results can be used to optimize the training process at the individual level.

Keywords: Neuroticism, conscientiousness, postural stability, combat troops.

346 A Biomimetic Structural Form: Developing a Paradigm to Attain Vital Sustainability in Tall Architecture

Authors: Osama Al-Sehail

Abstract:

This paper argues for sustainability as a necessity in the evolution of tall architecture. It provides a different mode for dealing with sustainability in tall architecture, taking into consideration the specificity of its typology. To this end, the article develops a Biomimetic Structural Form as a paradigm for attaining Vital Sustainability. A Biomimetic Structural Form is derived from the amalgamation of biomimicry, an approach to sustainability that defines nature as a source of knowledge and inspiration in solving human problems, and the Structural Form, a catalyst for evolving tall architecture; it is a dynamic paradigm emerging from a conceptualizing and morphological process. A Biomimetic Structural Form is a flow system whose different forces and functions tend to be "better", more "fit", to "survive", and to be efficient. Through geometry and function, the two aspects of knowledge extracted from nature, the attributes of the Biomimetic Structural Form are formulated. Vital Sustainability is the survival level of sustainability in natural systems, through which a system enhances the performance of its internal working and its interaction with the external environment. A Biomimetic Structural Form, in this context, is a medium for evolving tall architecture to emulate natural models in their ways of coexisting with the environment. As an integral part of this article, the sustainable super tall building 3Ts is discussed as a case study of applying the Biomimetic Structural Form.

Keywords: Biomimicry, design in nature, high-rise buildings, sustainability, structural form, tall architecture, vital sustainability.

345 Thermal Insulating Silicate Materials Suitable for Thermal Insulation and Rehabilitation Structures

Authors: J. Hroudova, M. Sedlmajer, J. Zach

Abstract:

The problem of insulating building structures is often closely connected with the problem of moisture remediation. In the case of historic buildings, or if only part of the building envelope is being redeveloped, it is not possible to apply classical external thermal insulation composite systems. In such cases the most effective application is thermal insulation plasters with high porosity and controlled capillary properties, which assure an improvement of the thermal properties of the construction, its diffusion openness towards the external environment, and suitably adjusted capillary properties preventing the penetration of liquid moisture and its salts toward the outer surface of the structure. With respect to the current trend of reducing the energy consumption of building structures and reducing the production of CO2, it is necessary to develop capillary-active materials characterized by low density and low thermal conductivity while maintaining good mechanical properties. The aim of researchers at the Faculty of Civil Engineering, Brno University of Technology, is the development and study of the hygrothermal behaviour of optimal materials for the thermal insulation and rehabilitation of building structures, with the possible use of alternative, less energy-demanding binders in comparison with the conventional, frequently used binder, cement. The paper describes the evaluation of research activities aimed at the development of thermal insulation and repair materials using lightweight aggregate and alternative binders such as metakaolin and finely ground fly ash.

Keywords: Thermal insulating plasters, rehabilitation materials, thermal conductivity, lightweight aggregate, alternative binders.
