Search results for: extracting rules

451 Efficient Fuzzy Classified Cryptographic Model for Intelligent Encryption Technique towards E-Banking XML Transactions

Authors: Maher Aburrous, Adel Khelifi, Manar Abu Talib

Abstract:

Transactions performed by financial institutions on a daily basis require XML encryption on a large scale. Encrypting large volumes of messages in full results in both performance and resource issues. In this paper, a novel approach is presented for securing financial XML transactions using classification data mining (DM) algorithms. Our strategy defines the complete process of classifying XML transactions by using a set of classification algorithms; the classified XML documents are processed at a later stage using element-wise encryption. Classification algorithms were used to identify the XML transaction rules and factors in order to classify the message content and fetch the important elements within it. We have implemented four classification algorithms to determine the importance level value within each XML document. Classified content is processed using element-wise encryption for selected parts with "High", "Medium" or "Low" importance level values. Element-wise encryption is performed using the AES symmetric encryption algorithm and a proposed modified AES algorithm that overcomes the problem of computational overhead, in which the substitute-byte and shift-row steps remain as in the original AES while the mix-column operation is replaced by a 128-permutation operation followed by the add-round-key operation. An implementation has been conducted using a data set fetched from an e-banking service to demonstrate system functionality and efficiency. Results from our implementation showed a clear improvement in processing time when encrypting XML documents.
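
A minimal sketch of the element-wise encryption idea is given below, assuming importance labels have already been assigned by a classifier. Standard AES-GCM from the `cryptography` package stands in for the paper's modified AES, whose permutation step is not publicly specified; all element names and labels are illustrative.

```python
# Element-wise XML encryption sketch (assumptions: importance labels already
# assigned by a classifier; stock AES-GCM stands in for the paper's modified AES).
import os
import base64
import xml.etree.ElementTree as ET
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def encrypt_element(elem):
    """Replace an element's text with base64(nonce || ciphertext)."""
    nonce = os.urandom(12)
    ct = aead.encrypt(nonce, elem.text.encode(), None)
    elem.text = base64.b64encode(nonce + ct).decode()
    elem.set("encrypted", "true")

doc = ET.fromstring(
    "<transaction><account>12345</account><memo>lunch</memo></transaction>"
)
# Hypothetical importance labels a trained classifier would produce.
importance = {"account": "High", "memo": "Low"}

for elem in doc.iter():
    if importance.get(elem.tag) in ("High", "Medium"):
        encrypt_element(elem)  # only sensitive elements are encrypted

print(ET.tostring(doc, encoding="unicode"))
```

Encrypting only the "High" and "Medium" elements is what yields the processing-time improvement the abstract reports, since the bulk of the document is left in plaintext.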

Keywords: XML transaction, encryption, Advanced Encryption Standard (AES), XML classification, e-banking security, fuzzy classification, cryptography, intelligent encryption

Procedia PDF Downloads 387
450 Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment

Authors: Said Alshukri, Mazhar Hussain Malik

Abstract:

Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector. Oman created an alliance to establish the policy and rules for this sector. With motivation coming from both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energies in three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine learning-based software "WEKA" and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested to obtain their prediction accuracy, and it was found that the Random Forest (RF) model has the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88847 kg of hydrogen mass, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, which is capable of producing 177591 kg of hydrogen mass.
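
As a back-of-envelope check of the figures above, the sketch below back-solves the electrolyser specific consumption (kWh per kg of hydrogen) from the reported wind numbers; the paper does not state this value, so treat it as an inferred assumption rather than a figure from the study.

```python
# Consistency check of the wind-to-hydrogen figures. The electrolyser specific
# consumption is back-solved from the abstract's numbers -- an assumption,
# not a value given in the paper (real electrolysers vary widely).
def hydrogen_mass_kg(annual_energy_mwh, kwh_per_kg):
    """Hydrogen mass producible from an annual energy yield."""
    return annual_energy_mwh * 1000.0 / kwh_per_kg

wind_energy_mwh = 8335       # one turbine in Duqm, per the abstract
wind_hydrogen_kg = 88847
implied_kwh_per_kg = wind_energy_mwh * 1000.0 / wind_hydrogen_kg
print(f"implied consumption: {implied_kwh_per_kg:.1f} kWh/kg")          # ~93.8
print(f"check: {hydrogen_mass_kg(wind_energy_mwh, implied_kwh_per_kg):.0f} kg")
```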

Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest

Procedia PDF Downloads 50
449 The Posthuman Condition and a Translational Ethics of Entanglement

Authors: Shabnam Naderi

Abstract:

Traditional understandings of ethics considered translators, translations, technologies and other agents as separate and prioritized human agents. In fact, ethics was equated with morality. This disengaged understanding of ethics is superseded by an ethics of relation/entanglement in posthuman philosophy. According to this ethics of entanglement, human and nonhuman agents are in constant 'intra-action'. The human is not separate from nature, from technology, or from other nonhuman entities, and an ethics of translation in this regard cannot be separated from technology and ecology or be defined merely within the realm of the human-human encounter. As such, a posthuman ethics offers opportunities for change and responds to the changing nature of reality; it is negotiable and reveals itself as a moment-by-moment practice (i.e., as temporally emergent and beyond determinacy and permanence). Far from linguistic, cultural, or individual concerns, posthuman translational ethics discusses how the former rigid norms and laws are challenged in a process ontology which puts emphasis on activity and activation and considers ethics as surfacing in activity, not as a predefined set of rules and values. In this sense, traditional ethical principles like faithfulness, accuracy and representation are superseded by principles of privacy, sustainability, multiplicity and decentralization. The present conceptual study, drawing on Ferrando's philosophical posthumanism (as a post-humanism, a post-dualism and a post-anthropocentrism), Deleuze-Guattarian philosophy of immanence, and Barad's physics-philosophy, strives to destabilize traditional understandings of translation ethics and bring an ethics that has loose ends and revolves around multiplicity and decentralization into the picture.

Keywords: ethics of entanglement, post-anthropocentrism, post-dualism, post-humanism, translation

Procedia PDF Downloads 52
448 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, in proportion to the increase of the risk measure, when compared to risk-free investments. In the classical Markowitz model, the risk is measured by the variance, thus representing the Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments; thus the simplex method efficiency is not seriously affected by the number of scenarios, thereby guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second-order stochastic dominance rules.
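
A minimal sketch of the basic MAD portfolio LP (before the paper's ratio transformation) is shown below with scipy and toy data; the inverse risk-reward ratio model and its dual, which are the paper's contribution, are not reproduced here.

```python
# Basic MAD portfolio LP with toy data. Decision vector: [x_1..x_n, d_1..d_T],
# where d_t bounds the absolute deviation of the portfolio return in scenario t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 200, 5                       # scenarios x instruments (toy data)
r = rng.normal(0.001, 0.02, (T, n))
mu = r.mean(axis=0)
target = 0.0005                     # required mean return

c = np.concatenate([np.zeros(n), np.ones(T) / T])   # minimise mean |deviation|

# d_t >= |(r_t - mu) @ x| encoded as two inequalities per scenario.
dev = r - mu
A_ub = np.block([[dev, -np.eye(T)], [-dev, -np.eye(T)]])
b_ub = np.zeros(2 * T)
# Mean return floor: -mu @ x <= -target.
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)

A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)  # sum x = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
print("weights:", res.x[:n].round(3), "MAD:", res.fun)
```

Note how the constraint count grows with the number of scenarios T, which is exactly the scalability problem the dual reformulation in the paper addresses.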

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 386
447 Seismic Interpretation and Petrophysical Evaluation of SM Field, Libya

Authors: Abdalla Abdelnabi, Yousf Abushalah

Abstract:

The G Formation is a major gas-producing reservoir in the SM Field, eastern Libya. It is called the G limestone because it consists of shallow marine limestone. Well data and 3D seismic data, in conjunction with the results of a previous study, were used to delineate the hydrocarbon reservoir of the Middle Eocene G Formation of the SM Field area. The data include three-dimensional seismic data acquired in 2009, covering an area of approximately 75 mi² with more than 9 wells penetrating the reservoir. Seismic data are used to identify stratigraphic and structural features, such as channels and faults, which may play a significant role in hydrocarbon trapping. The well data are used for the petrophysical analysis of the SM Field. The average porosity of the Middle Eocene G Formation is very good, reaching 24%, especially around well W6. Average water saturation was calculated for each well from porosity and resistivity logs using Archie's formula; the field-wide average is 25%. Structural mapping of the top and bottom of the Middle Eocene G Formation revealed that the highest area in the SM Field is at 4800 ft subsea, around wells W4, W5, W6, and W7, and the deepest point is at 4950 ft subsea. Correlation between wells, using well data and structural maps created from seismic data, revealed that the net thickness of the G Formation ranges from 0 ft in the northern part of the field to 235 ft in the southwest and south parts of the field. The gas-water contact is found at 4860 ft using the resistivity log. The net isopach map, using both the trapezoidal and pyramid rules, is used to calculate the total bulk volume. The original gas in place and the recoverable gas were calculated volumetrically to be 890 Billion Standard Cubic Feet (BSCF) and 630 BSCF, respectively.
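
The volumetric estimate follows the standard relation G = 43560 · A · h · φ · (1 − Sw) / Bg. The sketch below reproduces the order of magnitude of the reported figures with illustrative inputs, since the abstract gives only the end results; the area, net pay, gas formation volume factor and recovery factor are assumptions.

```python
# Volumetric gas-in-place sketch. Only porosity (24%) and water saturation
# (25%) come from the abstract; the remaining inputs are illustrative values
# chosen to land near the reported 890 BSCF.
def ogip_scf(area_acres, net_pay_ft, porosity, sw, bg_rcf_per_scf):
    """Standard volumetric estimate: G = 43560 * A * h * phi * (1 - Sw) / Bg."""
    return 43560.0 * area_acres * net_pay_ft * porosity * (1.0 - sw) / bg_rcf_per_scf

g = ogip_scf(area_acres=3800, net_pay_ft=120, porosity=0.24,
             sw=0.25, bg_rcf_per_scf=0.004)
print(f"OGIP ~ {g / 1e9:.0f} BSCF")               # ~894 BSCF
print(f"recoverable ~ {g * 0.7 / 1e9:.0f} BSCF")  # recovery factor assumed 0.7
```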

Keywords: 3D seismic data, well logging, petrel, kingdom suite

Procedia PDF Downloads 132
446 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security

Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna

Abstract:

Cryptography helps prevent threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security that is linked with "raagas", the traditional Indian scales and patterns of music notes. The algorithm takes the plain text as input and starts its encryption process. It then randomly selects a raaga from a list of raagas assumed to be shared by both the sender and the receiver. The plain text is associated with the selected raaga, and an intermediate cipher text is formed as the algorithm converts the plain text characters into other characters according to its rules. This intermediate cipher text is arranged in various patterns across three different rounds of encryption. The total number of rounds in the algorithm is always a multiple of 3: the output of each sequence of three rounds is passed again as input to the next sequence of rounds, recursively, until all rounds of encryption have been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, in addition to other important information regarding the rounds of encryption embedded in the key, which is known to the sender and interpreted only by the receiver, thereby making the algorithm hack-proof. The key can be constructed of any number of bits without any restriction on its size. A software application was also developed to demonstrate this encryption process; it dynamically takes the plain text as input and readily generates the cipher text as output. Therefore, this algorithm stands as one of the strongest tools for information security.
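
Since the paper does not publish its character-mapping rules, the toy sketch below only illustrates the general idea of keying a multi-round substitution on a raaga's note pattern; the raaga tables, shift values and round structure are invented for the example.

```python
# Toy illustration only: a raaga's swaras mapped to shift values keying a
# multi-round printable-ASCII substitution. Not the paper's actual algorithm.
RAAGAS = {
    "mohanam":     [0, 2, 4, 7, 9],    # hypothetical swara-to-shift values
    "hamsadhwani": [0, 2, 4, 7, 11],
}

def encrypt(plaintext, raaga, rounds=3):
    shifts = RAAGAS[raaga]
    text = plaintext
    for r in range(rounds):            # rounds come in multiples of 3
        text = "".join(
            chr((ord(ch) - 32 + shifts[i % len(shifts)] + r) % 95 + 32)
            for i, ch in enumerate(text)
        )
    return text

print(encrypt("transfer 500 to account 42", "mohanam"))
```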

Keywords: cipher text, cryptography, plaintext, raaga

Procedia PDF Downloads 265
445 Durham Region: How to Achieve Zero Waste in a Municipal Setting

Authors: Mirka Januszkiewicz

Abstract:

The Regional Municipality of Durham is the upper level of a two-tier municipal and regional structure comprised of eight lower-tier municipalities. With a population of 655,000 in both urban and rural settings, the Region covers approximately 2,537 square kilometers and neighbours the City of Toronto, Ontario, Canada, to the east. The Region has been focused on diverting waste from disposal since the development of its Long Term Waste Management Strategy Plan for 2000-2020. With a 54 percent solid waste diversion rate, the focus now is on achieving 70 percent diversion on the path to zero waste, using local waste management options whenever feasible. The Region has an Integrated Waste Management System that consists of weekly curbside collection of recyclable printed paper and packaging and source-separated organics; seasonal collection of leaf and yard waste; bi-weekly collection of residual garbage; and twice-annual collection of intact, sealed household batteries. The Region also maintains three Waste Management Facilities for residential drop-off of household hazardous waste, polystyrene, construction and demolition debris, and electronics. Special collection events are scheduled in the spring, summer and fall months for reusable items, household hazardous waste, and electronics. The Region is in the final commissioning stages of an energy-from-waste facility for residual waste disposal that will recover energy from non-recyclable wastes. This facility is state of the art and is equipped for installation of carbon capture technology in the future. Despite all of these diversion programs and efforts, there is still room for improvement. Recent residential waste studies revealed that over 50% of the residual waste placed at the curb and destined for incineration could be recycled. To move towards a zero waste community, the Region is looking to more advanced technologies for extracting the maximum recycling value from residential waste. Plans are underway to develop a pre-sort facility to remove organics and recyclables from the residual waste stream, including that of the growing multi-residential sector. Organics would then be treated anaerobically to generate biogas and fertilizer products for beneficial use within the Region. This project could increase the Region's diversion rate beyond 70 percent and advance the Region's climate change mitigation goals. Zero waste is an ambitious goal in a changing regulatory and economic environment. Decision makers must be willing to consider new and emerging technologies and embrace change to succeed.

Keywords: municipal waste, residential, waste diversion, zero waste

Procedia PDF Downloads 205
444 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures

Authors: Filippo Ranalli, Forest Flager, Martin Fischer

Abstract:

This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize the total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To assess cost accurately, the connection details for the structure are generated automatically, using site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied at each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.
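
A skeleton of the described bi-level (MDF) architecture is sketched below: an outer loop over candidate topologies with a nested discrete sizing search minimizing installed cost subject to feasibility checks. The structural analysis and cost model are stubbed with toy data; all section names and numbers are illustrative.

```python
# Bi-level skeleton: outer topology search, inner discrete sizing search.
# Real strength/serviceability checks and connection costing are stubbed out.
import itertools

SECTIONS = ["W8x10", "W10x22", "W12x40"]              # discrete catalogue (toy)
COST = {"W8x10": 100, "W10x22": 210, "W12x40": 395}   # $/member, installed (toy)
CAPACITY = {"W8x10": 120, "W10x22": 260, "W12x40": 480}
DEMAND = 180                                          # per-member demand (toy)

def feasible(sizes):
    """Stub for the strength (utilisation) and serviceability (drift) checks."""
    return all(CAPACITY[s] >= DEMAND for s in sizes)

def size_members(topology):
    """Inner loop: cheapest feasible assignment of catalogue sections."""
    candidates = [
        (sum(COST[s] for s in sizes), sizes)
        for sizes in itertools.product(SECTIONS, repeat=len(topology))
        if feasible(sizes)
    ]
    return min(candidates) if candidates else None

# Outer loop: candidate member sets drawn from the ground structure.
topologies = [("beam1", "col1"), ("beam1", "col1", "brace1")]
results = [r for r in map(size_members, topologies) if r is not None]
print("lowest installed cost:", min(results))
```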

Keywords: cost-based structural optimization, cost-based topology and sizing, optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures

Procedia PDF Downloads 321
443 Analysis of Computer Science Papers Conducted by Board of Intermediate and Secondary Education at Secondary Level

Authors: Ameema Mahroof, Muhammad Saeed

Abstract:

The purpose of this study was to analyze the computer science papers conducted by the Board of Intermediate and Secondary Education with reference to Bloom's taxonomy. The present study has two parts. First, the papers conducted by the Board of Intermediate and Secondary Education are analyzed on the basis of the basic rules of item construction, especially Bloom's (1956) taxonomy. Second, item analysis is done to improve the psychometric properties of a test. The sample included the question papers of computer science for higher secondary classes (XI-XII) for the years 2011 and 2012. For item analysis, data were collected from 60 students through convenient sampling. Findings of the study revealed that in the papers set by the Board of Intermediate and Secondary Education, the maximum focus was on the knowledge and understanding levels, with very little focus on application, analysis, and synthesis. Furthermore, the item analysis of the question papers revealed that item difficulty did not show a balanced paper: a few items were very difficult, while most items were too easy (measuring knowledge and understanding abilities). Likewise, most of the items did not truly discriminate between high and low achievers; four items were even negatively discriminating. The researchers also analyzed the items of the paper using the software ConQuest. These results show that the papers conducted by the Board of Intermediate and Secondary Education were not well constructed. It was recommended that paper setters be trained in developing question papers that measure various cognitive abilities of students, so that a good paper in computer science assesses all of them.
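
A short sketch of the classical item-analysis statistics implied above is given below with simulated responses: the difficulty index as the proportion of correct answers, and the discrimination index as the difference between the upper and lower 27% groups. The paper's own data are not public, so this is an assumed standard treatment.

```python
# Classical item analysis on simulated dichotomous scores.
# scores[i][j] = 1 if student i answered item j correctly.
import numpy as np

rng = np.random.default_rng(1)
scores = (rng.random((60, 10)) > 0.4).astype(int)   # 60 students, 10 items (toy)

totals = scores.sum(axis=1)
order = np.argsort(totals)
k = int(round(0.27 * len(scores)))                  # upper/lower 27% groups
low, high = scores[order[:k]], scores[order[-k:]]

difficulty = scores.mean(axis=0)                    # p-value per item
discrimination = high.mean(axis=0) - low.mean(axis=0)

for j, (p, d) in enumerate(zip(difficulty, discrimination)):
    flag = "NEGATIVE!" if d < 0 else ("too easy" if p > 0.9 else "")
    print(f"item {j}: difficulty={p:.2f} discrimination={d:.2f} {flag}")
```

Negatively discriminating items, like the four found in the study, show up here as a negative discrimination value: low scorers outperform high scorers on that item.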

Keywords: Bloom’s taxonomy, question paper, item analysis, cognitive domain, computer science

Procedia PDF Downloads 129
442 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter

Authors: Zhu Xinxin, Wang Hui, Yang Kai

Abstract:

Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and disadvantages of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. The heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer, and the slug with its hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. In the meantime, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature rise rate as early as possible results in a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on the heat flux calibration and the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been determined. The simulation and test results show that the two methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is within 3%, the deviation of measured values between different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gage is less than 4% in the same flow field.
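
For an ideal slug, the energy-conservation model described above reduces to q = ρ·c·L·(dT/dt). The sketch below evaluates this relation over the early, most linear part of the temperature rise, consistent with the finding that extracting the rise rate early lowers the calculation error; the slug properties and temperature trace are illustrative.

```python
# Ideal slug-calorimeter relation: q = rho * c * L * dT/dt, with the slope
# taken from the early temperature rise. Values are illustrative.
import numpy as np

RHO, C, L = 8930.0, 385.0, 0.003     # copper slug: kg/m^3, J/(kg K), m thick

def heat_flux(t, temp):
    """Least-squares slope of T(t) -> heat flux in W/m^2."""
    dT_dt = np.polyfit(t, temp, 1)[0]
    return RHO * C * L * dT_dt

t = np.linspace(0.0, 0.5, 50)        # early, nearly linear part of the rise
temp = 300.0 + 95.0 * t              # synthetic trace: 95 K/s
print(f"q = {heat_flux(t, temp) / 1e4:.1f} W/cm^2")   # ~98.0 W/cm^2
```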

Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter

Procedia PDF Downloads 102
441 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service

Authors: Lai Wenfang

Abstract:

This study describes how to use artificial intelligence (AI) technology to build a user-oriented platform for integrated archival service. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence or blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the sending agencies' staff which records catalogues violate the transfer or destruction rules, but also use the model to find details hidden in the catalogues and suggest to NAA staff whether the records should be kept or not, shortening the auditing time. The platform keeps all users' browsing trails, so that it can predict what kinds of archives a user could be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a lot of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such data automatically; staff only need to correct the errors and upload the corrected version, and as the platform learns, the accuracy will keep getting higher. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
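
As an illustration of the screening step described above, the sketch below trains a TF-IDF plus logistic regression classifier to flag catalogue entries against transfer or destruction rules. The data, labels and model choice are hypothetical, since the NAA platform's actual model is not public.

```python
# Hypothetical catalogue-screening classifier: TF-IDF features + logistic
# regression. Training data and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

catalogues = [
    "land expropriation case files, retention 30 years",
    "routine office supply receipts, retention 1 year",
    "personnel security clearance records, permanent",
    "meeting tea arrangements, retention 6 months",
]
violates_rules = [1, 0, 1, 0]          # hypothetical audit labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(catalogues, violates_rules)

new_entry = ["land registration ledgers, retention 25 years"]
print(model.predict_proba(new_entry))  # staff review anything above a threshold
```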

Keywords: artificial intelligence, natural language processing, machine learning, visualization

Procedia PDF Downloads 150
440 Major Constraints to Adoption of Improved Post-harvest Technologies among Smallholder Farmers in Developing Countries: A Systematic Review

Authors: Muganyizi Jonas Bisheko, G. Rejikumar

Abstract:

Reducing post-harvest losses could be a sustainable solution to enhance the food and income security of smallholder farmers in developing countries. While various research institutions have come up with a number of innovative post-harvest technologies for reducing post-harvest losses, most of them have not been extensively adopted by smallholder farmers. Despite this gap, synthesized information about the major constraints on post-harvest technology is scarce. This study has been conducted to fill this gap and show the implications of the findings for future post-harvest research. The developed search strategy retrieved 2201 studies; after excluding duplicates and completing title, abstract and full-article screening, a total of 41 documents were identified. The major findings are: (i) there is an outstanding deficiency of systematic evidence on the effect of climate change, off-farm income and sources of post-harvest information on the adoption of improved post-harvest technologies; (ii) there is very limited information on adoption constraints pertaining to matters of policy, rules and regulations; (iii) the literature on behavioral constraints associated with limited adoption of improved post-harvest technologies is very thin; (iv) most of the studies focused on post-harvest storage technologies (47%), followed by overall post-harvest management practices (25%), processing technologies (19%) and packaging technologies (3%), and much of the information concerned cereals (58%), especially maize (44%); (v) geographically, Sub-Saharan Africa accounted for 79% of the reviewed interventions, while South Asia accounted for only 21%. The findings of this review are intended to guide post-harvest technologists and decision-makers in addressing the challenge of huge post-harvest losses.

Keywords: constraints, post-harvest loss, post-harvest technology, smallholder farmer

Procedia PDF Downloads 205
439 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with a kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and the different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The method effectively resolves the effects of outliers, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
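
As a minimal stand-in for the kernel choice above, the sketch below fits scikit-learn's SVC with the sigmoid kernel, K(x, y) = tanh(γ⟨x, y⟩ + r), i.e. the hyperbolic tangent kernel named in the abstract. The fuzzy-rough membership weighting and the Hadoop/MapReduce parallelization are not reproduced here.

```python
# SVM with the hyperbolic tangent (sigmoid) kernel on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")   # K(x,y)=tanh(g<x,y>+r)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("support vectors:", clf.n_support_.sum())     # a metric used in the paper
```

Fuzzy membership weighting could be approximated by passing per-sample weights to `fit` via `sample_weight`, which downweights noisy samples much as the fuzzy rough model does.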

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 472
438 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies intend to be an asset management revolution with reference to classical cap-weighted indices. Indeed, these strategies allow better control of portfolio risk factors and an optimized asset allocation by taking into account specific risks or wishes to generate alpha by outperforming indices called 'Beta'. Among the many strategies used independently, this paper focuses on four of them: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints like momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific identifiable market phases, in order to define adapted strategies depending on pre-specified situations. Results are presented as performance curves from different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it will be preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available specific market data, provide very interesting optimal results with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements characterizing them.
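
One of the four building blocks named above can be sketched in a few lines: the unconstrained Minimum Variance Portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), compared with the Equal-Weighted Portfolio on toy data. The regime detection and combination rules studied in the paper are not reproduced.

```python
# Minimum variance vs. equal weighted portfolios on synthetic daily returns.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0004, 0.01, (500, 6))     # toy daily returns, 6 assets
cov = np.cov(returns, rowvar=False)

ones = np.ones(cov.shape[0])
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()                               # minimum variance weights
w_ew = ones / ones.size                          # equal weighted

for name, w in (("MV", w_mv), ("EW", w_ew)):
    vol = np.sqrt(w @ cov @ w) * np.sqrt(252)
    print(f"{name}: annualised vol = {vol:.2%}")
```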

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 319
437 Realizing the Full Potential of Islamic Banking System: Proposed Suitable Legal Framework for Islamic Banking System in Tanzania

Authors: Maulana Ayoub Ali, Pradeep Kulshrestha

Abstract:

The laws of any given secular state make a huge contribution to the growth of the Islamic banking system, because the system uses conventional laws to govern its activities. Therefore, the former should be ready to accommodate the latter in order to make the Islamic banking system work properly without affecting the current conventional banking system. Islamic financial rules have been practiced since the birth of Islam. Following the recent world economic challenges in the financial sector, a quick rebirth of the contemporary Islamic ethical banking system took place. The emergence of the Islamic banking system is due to various reasons, including but not limited to the failure of the interest-based economy to solve financial problems around the globe. Therefore, the Islamic banking system has been adopted as an alternative banking system in order to recover the highly damaged global financial sector. But the Islamic banking system has been facing a number of challenges which hinder its smooth operation in different parts of the world. It is not the aim of this paper to discuss challenges other than legal ones, although others are partly discussed where it was proper to do so. Generally, many things were discovered in the course of writing this paper. The most important is that the regulatory and supervisory framework for the Islamic banking system, in Tanzania and in other nations, is considered a crucial part of the development of the Islamic banking industry. This paper analyses what has been observed in the study of that area and recommends necessary actions to be taken in a bid to make the Islamic banking system reach its climax of serving the larger community by providing an ethical, equitable, affordable, interest-free and society-centred banking system around the globe.

Keywords: Islamic banking, interest free banking, ethical banking, legal framework

Procedia PDF Downloads 128
436 Intentionality and Context in the Paradox of Reward and Punishment in the Meccan Surahs

Authors: Asmaa Fathy Mohamed Desoky

Abstract:

The subject of this research is the inference of intentionality and context from the verses of the Meccan surahs which include the paradox of reward and punishment, applied to the duality of disbelief and faith. The Holy Quran is the most important sacred linguistic reference in the Arabic language because it is rich in all the rules of the language, in addition to its linguistic miracle. The Quranic text is a first-class intentional text, sent down to convey something to the recipient (Muhammad first, who then communicates it to Muslims) and to influence and convince him, which opens the door to much Ijtihad, out of a desire to reach the will of Allah and His intention from His words. Intentionality as a term is one of the most important deliberative terms, but it will be modified here to suit the Quranic discourse, especially since intentionality is related to intention, as explained earlier; that is, it turns the reader or recipient into a predictor of the unseen, and this does not correspond to the Quranic discourse. Hence, in this research, a set of dualities will be identified and studied in order to clarify their meaning according to the opinions of previous interpreters, in accordance with the sanctity of the Quranic discourse, which is intentionally related to the dualities of reward and punishment, such as the duality of disbelief and faith. It should be noted that this duality combines opposites and paradox on one level, because it may be an external paradox between action and reaction, an internal paradox in matters related to faith, or a situational paradox in a specific event or a certain fact. It should also be noted that the intention of the Quranic text is fully realized in form and content, in whole and in part, and this research includes a presentation of some applied models of the issues of intention and context that appear in the verses of the paradox of reward and punishment in the Meccan surahs of the Quran.

Keywords: intentionality, context, the paradox, reward, punishment, Meccan surahs

Procedia PDF Downloads 47
435 Assessing the Impacts of Urbanization on Urban Precincts: A Case of Golconda Precinct, Hyderabad

Authors: Sai Akhila Budaraju

Abstract:

Heritage sites are an integral part of cities and carry a sense of identity for cities and towns, but the process of urbanization poses a potential threat of losing these heritage sites and monuments. The Central and State Governments have listed the historic Golconda Fort as a Monument of National Importance and the heritage precinct, with eight heritage-listed buildings and two historical sites, for conservation and preservation, respectively. Due to the presence of the IT corridor 6 km away, which draws more people into the precinct, the area is under constant pressure. The heritage precinct possesses high property values, being a prime location connecting the IT corridor and the CBD (central business district) areas. The primary objective of the study was to assess and identify the factors affecting the heritage precinct through mapping and documentation, and through identifying and assessing the factors using empirical analysis, ordinal regression analysis and the hedonic pricing model. Ordinal regression analysis was used to identify the factors that contribute to the changes in the precinct due to urbanization. The hedonic pricing model was used to establish whether the presence of historical monuments is a contributing factor to property value and to what extent this influence extends. The above methods and field visits indicate that the physical and socio-economic factors and the neighborhood characteristics of the precinct contribute to property values. The outcomes and the potential elements derived from the analysis of the Development Control Rules are presented as recommendations to integrate both the old and the newly built environments.
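
A minimal hedonic pricing sketch consistent with the method named above regresses log property value on physical attributes plus heritage-proximity and IT-corridor variables; the data and coefficients below are entirely synthetic and only illustrate the estimation step.

```python
# Hedonic pricing sketch: OLS of log price on illustrative attributes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
area_sqft = rng.uniform(800, 3000, n)
dist_fort_km = rng.uniform(0.1, 6.0, n)          # distance to Golconda Fort
it_corridor = rng.integers(0, 2, n)              # near-IT-corridor dummy

# Synthetic prices with an assumed heritage-proximity effect built in.
log_price = (10 + 0.0004 * area_sqft - 0.05 * dist_fort_km
             + 0.15 * it_corridor + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([area_sqft, dist_fort_km, it_corridor]))
fit = sm.OLS(log_price, X).fit()
print(fit.params)   # coefficient on distance estimates the heritage premium
```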

Keywords: heritage planning, heritage conservation, hedonic pricing model, ordinal regression analysis

Procedia PDF Downloads 168
434 Landscape Planning and Development of Integrated Farming Based on Low External Input Sustainable Agriculture (LEISA) in Pangulah Village, Karawang County, West Java, Indonesia

Authors: Eduwin Eko Franjaya, Yesi Hendriani Supartoyo

Abstract:

Integrated farming with the LEISA concept, as one of the systems or techniques of sustainable agriculture, has provided opportunities to increase farmers' income. This system also has a positive impact on the environment. However, the development of integrated farming is still at a small/site scale. Development on a larger scale is necessary considering the number of potential resources in the village that can be integrated with each other. The aim of this research is to develop the integrated farming landscape on the small scale achieved in a previous study into the village scale. The method used in this study follows the rules of scientific planning in landscape architecture. The initial phase begins with an inventory of the existing condition of the village, by conducting a survey. The second stage is an analysis of potentials and constraints in the village based on the results of the survey. The next stage is concept-making, which consists of the basic concept, the design concept, and the development concept. The basic concept is integrated farming based on LEISA. The design concept is based on the commodities developed in the village. The development concept consists of the space concept, the circulation concept, the concept of vegetation and commodities, and the concept of the production system. The last stage is the planning process, which produces a Site Plan based on LEISA at the village scale; the Site Plan is the end product of this research. The results of this research are expected to increase the income and welfare of the farmers in the village, and the site can be developed into a tourism area of integrated farming.

Keywords: integrated farming, LEISA, site plan, sustainable agriculture

Procedia PDF Downloads 426
433 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in new and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads (gauge or vacuum pressure in the hermetically sealed gas space) requires additional attention in the design of the facades. The internal loads appear with variations of altitude, meteorological pressure and gas temperature relative to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and space orientation, and its fixing on the facade, and it varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and the internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer, and free convection in the hermetically sealed gas space, assuming the gas to be compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of the 3D temperature and fluid flow fields, thermal performance and internal loads of an IGU in a window system are implemented.
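
For a (nearly) rigid cavity, the internal load reduces to an isochoric gas-law estimate: the cavity pressure scales with absolute temperature from the sealing state, and the load is its difference from the ambient pressure at the installation site. The sketch below uses illustrative values; real panes deflect, which relieves part of this load, hence the coupled model above.

```python
# Isochoric (rigid-cavity) estimate of the IGU internal load. Illustrative
# values; pane deflection, which relieves the load, is ignored here.
def cavity_load_pa(p_seal, t_seal_c, t_gas_c, p_ambient):
    """Gauge pressure on the panes: p_seal * T/T_seal - p_ambient (Pa)."""
    p_gas = p_seal * (t_gas_c + 273.15) / (t_seal_c + 273.15)
    return p_gas - p_ambient

# Sealed at the factory at 101325 Pa / 20 C; installed at altitude
# (ambient ~95000 Pa) with the cavity gas heated to 45 C by solar gain.
print(f"{cavity_load_pa(101325, 20.0, 45.0, 95000):.0f} Pa")   # ~14966 Pa
```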

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 253
432 Environmental Fatigue Analysis for Control Rod Drive Mechanisms Seal House

Authors: Xuejiao Shao, Jianguo Chen, Xiaolong Fu

Abstract:

In this paper, the elastoplastic strain correction factor computed by the ANSYS software was modified, and the fatigue usage factor in air was corrected for water under reactor operating conditions. The fatigue of key parts of the control rod drive mechanisms was analyzed considering the influence of environmental fatigue caused by the coolant in the reactor pressure vessel. The elastoplastic strain correction factor was modified by analyzing thermal and mechanical loads separately, referring to the rules of RCC-M 2002. A new elastoplastic strain correction factor Ke(mix) is computed to replace the original Ke computed by ANSYS when evaluating the fatigue produced by thermal and mechanical loads together. Based on Ke(mix), the usage cycles and the fatigue design curves, the new range of primary plus secondary stresses was evaluated to obtain the final fatigue usage factor. The results show that the precision of the fatigue usage factor can be raised by using the modified Ke when the amplitude of the primary and secondary stresses is large to some extent. An approach has been proposed for incorporating environmental effects, i.e., the effects of the reactor coolant environment on fatigue life, in terms of an environmental correction factor Fen, defined as the ratio of the fatigue life in air at room temperature to that in water under service conditions. To incorporate environmental effects into the RCC-M Code fatigue evaluations, the fatigue usage factor based on the current Code design curves is multiplied by this correction factor. The contribution of environmental effects to the results is discussed. Fatigue life decreases logarithmically with decreasing strain rate below 10%/s and is insensitive to strain rate at temperatures below 100°C.
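
The incorporation step described above amounts to multiplying each partial usage factor from the air-based evaluation by its Fen and summing. A minimal sketch is given below; the Fen values themselves, which depend on temperature, strain rate and dissolved oxygen, are taken as given illustrative inputs rather than computed from a specific correlation.

```python
# Environmental fatigue usage: multiply each air-based partial usage factor
# by its Fen and accumulate. Fen inputs below are illustrative placeholders.
def environmental_usage(load_pairs):
    """load_pairs: iterable of (partial usage factor in air, Fen)."""
    return sum(u_air * fen for u_air, fen in load_pairs)

transients = [
    (0.012, 2.5),   # (u_air, Fen) per transient type -- illustrative values
    (0.004, 6.0),
    (0.020, 1.8),
]
u_en = environmental_usage(transients)
print(f"cumulative environmental usage factor: {u_en:.3f}")  # must stay < 1.0
```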

Keywords: environmental fatigue, usage factor, elastoplastic strain correction factor, environmental correction

Procedia PDF Downloads 296
431 “Referral for re-submission” – The Case of EFL Applied Linguistics Doctoral Defense Sessions

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

An oral defense is the examination of a doctoral program in which the candidates display their academic capacity by sharing and disseminating the findings of their study and defending their position. In this challenging, criticism-generating context, the examiners evaluate the PhD dissertation critically so as to confirm its scholarly merit, or lack of it. To identify the examiners' expectations of the viva, this study used a conversation analytic approach for analyzing the data. The research is inductive in that it seeks to develop theory that is grounded in the data. The data comprised transcripts of the question and answer sections of two applied linguistics doctoral defense sessions held in 2019 at two accredited Iranian state universities, both among the top Iranian universities in the Times Higher Education World University Rankings. In spite of similar shortcomings and deficiencies raised by the examiners, for instance in terms of innovation, development, sampling, and treatment, one of these defenses passed with distinction while the other was referred for re-submission. It seems that the outcome of a viva in an EFL context not only depends on adherence to the rules and regulations of doctoral research but is also influenced, to a certain extent, by the strictness of the examiners and the candidates' language proficiency and effective negotiation and communication skills in this confrontational communicative event. The findings of this study provide evidence for the issues determining the success or failure of PhD candidates in displaying their claims of scholarship during their defense sessions. This study has implications for both applied linguistics doctoral students and academics in EFL contexts who try to prove and authenticate the doctorateness of a dissertation.

Keywords: academic discourse, conversation analysis, doctoral defense, doctorateness, EFL

Procedia PDF Downloads 136
430 Subclinical Renal Damage Induced by High-Fat Diet in Young Rats

Authors: Larissa M. Vargas, Julia M. Sacchi, Renata O. Pereira, Lucas S. Asano, Iara C. Araújo, Patricia Fiorino, Vera Farah

Abstract:

The aim of this study was to evaluate the occurrence of subclinical organ injuries induced by a high-fat diet. Male Wistar rats (n=5/group) were divided into a control diet group (CD), fed commercial rat chow, and a high-fat diet (30% lipids) group (HD), with diets administered for 8 weeks starting after weaning. All procedures followed the rules of the Committee of Research and Ethics of Mackenzie University (CEUA Nº 077/03/2011). At the end of the protocol, the animals were euthanized by anesthesia overload and the left kidney was removed. Intrarenal lipid deposition was evaluated by histological analysis with oil red. Kidney slices were stained with picrosirius red to evaluate the area of the Bowman's capsule (AB) and space (SB), and the glomerular tuft area (GT). The renal expression of sterol regulatory element-binding protein 2 (SREBP-2) was assessed by Western blotting. Creatinine concentration (serum and urine) and the lipid profile were determined by colorimetric kits (Labtest). At the end of the protocol there were no differences in body weight between the groups; however, the HD group showed a marked increase in lipid deposits, in glomeruli and tubules, and in cholesterol and triglyceride levels. Moreover, in the kidney, the high-fat diet induced a reduction in the AB (13%), GT (18%) and SB (17%), associated with a reduction in glomerular filtration rate (creatinine clearance). Renal SREBP-2 expression was increased in the HD group. These data suggest that consumption of a high-fat diet starting in childhood is associated with subclinical renal damage and impaired function.
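
Creatinine clearance, used above as the index of glomerular filtration rate, follows the standard relation CrCl = (urinary creatinine × urine flow) / plasma creatinine. The sketch below applies it with illustrative rat values, not data from the study.

```python
# Creatinine clearance as an index of GFR. Values are illustrative.
def creatinine_clearance(u_cr, v_ml_min, p_cr):
    """u_cr, p_cr in mg/dL; v in mL/min; returns clearance in mL/min."""
    return u_cr * v_ml_min / p_cr

print(f"{creatinine_clearance(u_cr=30.0, v_ml_min=0.05, p_cr=0.6):.2f} mL/min")
```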

Keywords: high-fat diet, kidney, intrarenal lipid deposition, SREBP-2

Procedia PDF Downloads 275
429 Blockchain Technology Applications in Patient Tracking Systems Regarding Privacy-Preserving Concerns and COVID-19 Pandemic

Authors: Farbod Behnaminia, Saeed Samet

Abstract:

The COVID-19 pandemic paralyzed many lives until a vaccine became available, which caused the so-called "new normal." According to the World Health Organization (WHO), COVID-19 is an infectious disease that can cause significant illness or death in anyone. Governments and health officials tried to impose rules and regulations to avoid and slow down transmission. Therefore, software engineers worldwide developed applications to trace and track patients' movements and notify others, mainly using Bluetooth. In this way, everyone could be informed whether they had come in close contact with someone who has COVID-19 and take proper safety precautions. Because most of the applications use technologies that can potentially reveal the user's identity and location, researchers have debated privacy preservation and how to improve user privacy during such pandemics. Thanks to Distributed Ledger Technology (DLT), several methods for developing privacy-preserving patient tracking systems have been proposed in the last two years. As an instance of DLT, a blockchain is like a decentralized peer-to-peer database that maintains a record of transactions; transactions in this system are immutable, transparent, and anonymous. We conducted a comprehensive evaluation of the literature by looking for papers in the relevant field and dividing them into pre- and post-pandemic systems. Additionally, we discussed the many uses of blockchain technology in pandemic control. We found that two major obstacles facing blockchain implementation across many healthcare systems are scalability and privacy. The Polkadot platform is presented, along with a review of its efficacy in tackling current concerns. A more scalable and much more privacy-preserving healthcare system is achievable in the near future using Polkadot.
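
The immutability property described above can be illustrated with a toy hash chain: each record embeds the hash of its predecessor, so altering any entry invalidates every later hash. This is a minimal sketch, not the Polkadot architecture or a real contact-tracing ledger.

```python
# Toy hash chain illustrating tamper-evidence of ledger records.
import hashlib
import json

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)

chain = []
add_block(chain, {"contact_token": "anon-7f3a", "date": "2021-03-02"})
add_block(chain, {"contact_token": "anon-9c1d", "date": "2021-03-03"})

chain[0]["payload"]["date"] = "2021-01-01"      # tamper with an old record
recomputed = hashlib.sha256(json.dumps(
    {"payload": chain[0]["payload"], "prev": chain[0]["prev"]},
    sort_keys=True).encode()).hexdigest()
print("tamper detected:", recomputed != chain[0]["hash"])   # True
```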

Keywords: blockchain, electronic record management, EHR, privacy-preserving, patient tracking, COVID-19, trust and confidence, Polkadot

Procedia PDF Downloads 81
428 Query in Grammatical Forms and Corpus Error Analysis

Authors: Katerina Florou

Abstract:

Two decades after the term "learner corpora" was coined for collections of texts created by foreign or second language learners across various language contexts, and some years after the suggestion to incorporate "focusing on form" within a Task-Based Learning framework, this study aims to explore how learner corpora, whether annotated with errors or not, can facilitate a focus on form in an educational setting. It argues that analyzing linguistic form serves the purpose of enabling students to delve into the language and gain an understanding of different facets of the foreign language. The same objective applies when analyzing learner corpora marked with errors or in their raw state, but in this scenario, the emphasis lies on identifying incorrect forms. Teachers should aim to address errors or gaps in the students' second language knowledge while they engage in a task. Building on this recommendation, we compared the written output of two student groups: the first group (G1) employed the focusing-on-form phase by studying a specific aspect of the Italian language, namely the past participle, through examples from native speakers and grammar rules; the second group (G2) focused on form by scrutinizing their own errors and comparing them with analogous examples from a native speaker corpus. In order to test our hypothesis, we created four learner corpora. The first two were generated during the task phase, one for each group of students, while the remaining two were produced as a follow-up activity at the end of the lesson. The results of the first comparison indicate that students' exposure to their own errors can enhance their grasp of a grammatical element. The study is in its second stage, and more results are to be announced.

Keywords: corpus interlanguage analysis, task-based learning, Italian language as FL, learner corpora

Procedia PDF Downloads 27
427 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise, and which data to use to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: data analytics, smart cities, competitive advantage, internet of things

Procedia PDF Downloads 100
426 Sports Business Services Model: A Research Model Study in Reginal Sport Authority of Thailand

Authors: Siriraks Khawchaimaha, Sangwian Boonto

Abstract:

The Sports Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sports and athletes, both professional and amateur, for competitions, and administers them under government policy through government officers. Therefore, all financial flows, whether cash inflows or outflows, are strictly committed to the government budget and limited to projects planned at least 12 to 16 months ahead of reality, resulting in inefficiency in sport events, administration and competitions. In order to remain competitive in sports challenges around the world, SAT needs its own sports business services model for each stadium, region and set of athletes' competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study examines each of the 10 regional stadiums in detail, covering the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design comprises, first, the evaluation of external factors: the hardware, i.e., the competition or practice characteristics of stadiums, playgrounds, facilities, and equipment. Second, it examines the software: the organization structure, staff and management, the administrative model, and rules and practices, as well as budget allocation and budget administration with operating and expenditure plans. The third step identifies issues and limitations that require an action plan for further development and support, or the discontinuation of unviable sports. In the final step, based on the HMK model and the business model canvas of Alexander O. and Yves P. (2010), templates are generated for a Sports Business Services Model for each of SAT's 10 regional stadiums.

Keywords: HMK model, not for profit organization, sport business model, sport services model

Procedia PDF Downloads 288
425 Natural Law in the Mu’Tazilite Theology

Authors: Samaneh Khalili

Abstract:

Natural law theory, in moral philosophy, refers to a system of unchanging values held to be common to all humans and discoverable through reason. Natural law theory is commonly associated with Western philosophers; in contrast, discussions of notions of natural law in Islamic intellectual history have been relatively rare. This paper aims to show that the moral theory developed by the Mu'tazilite thinkers can be classified among the ideas of natural law. In doing so, this study will demonstrate that objective and unchanging values, according to the Mu'tazilite theologians, provide the guidelines for assessing the rules of Islamic law in the field of human coexistence. The focus of the paper lies on ʿAbd al-Ǧabbār, who was the most influential thinker of the late epoch of the Muʿtazila. Although ʿAbd al-Ǧabbār did not leave a text with a systematic discussion of natural law, his teaching on nature, human reason, and the moral values of actions is scattered throughout his work 'al-Muġnī fī abwāb at-tawḥīd wa-l-ʿadl'. It is necessary to focus on ʿAbd al-Ǧabbār's theories of reason, nature, and ethics, since natural law revolves around these basic concepts. While analyzing the concept of nature, the paper will attempt to answer how he explains the world's physical structure and God's relationship to natural events. Moreover, from ʿAbd al-Ǧabbār's point of view, is nature a self-determined system that follows its inner principle in every kind of change, or is it guided by an external power? Does causality govern natural events? Regarding the concept of reason, an attempt is made to examine how human reason, according to ʿAbd al-Ǧabbār, conceives moral attributes. Finally, the author will discuss the concepts of objective values and the place of rights and duties derived from Islamic law in ʿAbd al-Ǧabbār's thought.

Keywords: Islamic law, Mu'tazilite theology, natural law in Islamic theology, objective and unchanging values

Procedia PDF Downloads 71
424 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that achieves intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under conditions of large data volume, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in the Internet of Things, unmanned aerial vehicle cluster communication, remote sensing scenarios, and other fields. We propose an improved semantic communication system for the transmission of remote sensing images, where the data volume is huge and spectrum resources are limited. At the transmitter, the semantic information of the remote sensing images must be extracted, but this raises a problem: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. We therefore adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN, to extract the image semantic features. We first pre-process the remote sensing images to improve their resolution and thereby obtain images carrying more semantic information: the wavelet transform decomposes each image into high-frequency and low-frequency components, bilinear interpolation is applied to the high-frequency components and bicubic interpolation to the low-frequency components, and the inverse wavelet transform yields the preprocessed image. The Vision Transformer trains better on huge data volumes and extracts better image semantic features, and its multi-layer self-attention mechanism better captures the correlations between semantic features and reduces redundant features. Secondly, to improve coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, improving the model's image data processing speed. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
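
The wavelet-based preprocessing step described above is concrete enough to sketch. The following minimal Python illustration assumes PyWavelets, NumPy, and OpenCV are available; the wavelet family ("haar"), the scale factor, and the function name are illustrative assumptions, not details taken from the paper.

import numpy as np
import pywt
import cv2

def wavelet_upscale(image, scale=2):
    """Upscale a grayscale image by interpolating its wavelet sub-bands."""
    # Single-level 2-D discrete wavelet transform ("haar" is an assumed choice).
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), "haar")
    # Target sub-band size, as (width, height), which OpenCV expects.
    size = (cA.shape[1] * scale, cA.shape[0] * scale)
    # Bicubic interpolation on the low-frequency approximation...
    cA_up = cv2.resize(cA, size, interpolation=cv2.INTER_CUBIC)
    # ...and bilinear interpolation on the high-frequency detail bands.
    cH_up = cv2.resize(cH, size, interpolation=cv2.INTER_LINEAR)
    cV_up = cv2.resize(cV, size, interpolation=cv2.INTER_LINEAR)
    cD_up = cv2.resize(cD, size, interpolation=cv2.INTER_LINEAR)
    # The inverse wavelet transform recombines the enlarged sub-bands.
    return pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), "haar")

The abstract does not state how the self-attention complexity is reduced from quadratic to linear. One common technique, shown here purely as an assumed illustration, is kernel-based linear attention in the style of Katharopoulos et al. (2020), sketched in PyTorch:

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq_len, dim). A positive feature map replaces
    # softmax, so the (seq_len x seq_len) attention matrix is never formed.
    phi_q = F.elu(q) + 1.0
    phi_k = F.elu(k) + 1.0
    # Keys and values are summarized into a (dim x dim) matrix: O(N) cost.
    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)
    # Per-query normalizer, replacing the softmax denominator.
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", phi_q, phi_k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", phi_q, kv, z)

Because the key-value summary has a fixed size independent of sequence length, memory and compute grow linearly with the number of image patches, which matches the efficiency goal described in the abstract.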

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 59
423 Analgesic Efficacy of IPACK Block in Primary Total Knee Arthroplasty (90 CASES)

Authors: Fedili Benamar, Beloulou Mohamed Lamine, Ouahes Hassane, Ghattas Samir

Abstract:

Background and aims: Peripheral regional anesthesia has been integrated into most analgesia protocols for total knee arthroplasty, which is considered among the most painful surgeries and carries a high potential for pain chronicization. The adductor canal block (ACB) has gained popularity; similarly, the IPACK block has been described to provide analgesia of the posterior knee capsule. This study aimed to evaluate the analgesic efficacy of the IPACK block in patients undergoing primary total knee arthroplasty. Methods: 90 patients were randomized to receive either an IPACK block, an anterior sciatic block, or a sham block (30 patients per group, all with multimodal analgesia and a catheter in the adductor canal, KCA): Group 1, KCA alone; Group 2, KCA plus anterior sciatic block; Group 3, KCA plus IPACK. The analgesic blocks were performed preoperatively under ultrasound guidance, respecting the safety rules; the dose administered was 20 cc of ropivacaine 0.25%. The primary endpoint was posterior knee pain 6 hours after surgery. Other endpoints included quality of recovery after surgery, pain scores, and opioid requirements (morphine PCA); data were analyzed with Epi Info 7.2. Results: The groups were matched. There was a predominance of women (4F/1M); the average age was 68 +/- 7 years; the average BMI was 31.75 +/- 4 kg/m2; 70% of patients were ASA 2 and 20% ASA 3; the average duration of the intervention was 89 +/- 19 minutes. Morphine consumption (PCA) was significantly higher in Group 1 (16 mg) and Group 2 (8 mg) than in Group 3 (4 mg), and there was a correlation between the use of the IPACK block and postoperative pain. Conclusions: Within a multimodal analgesic protocol, the addition of the IPACK block decreased pain scores and morphine consumption.

Keywords: regional anesthesia, analgesia, total knee arthroplasty, adductor canal block (ACB), IPACK block, pain

Procedia PDF Downloads 51
422 Ethical Implications of Gaps in the Implementation Process of the Circular Economy: Special Focus on Underdeveloped Countries

Authors: Sujith Gunawardhana

Abstract:

The circular economy is a system in which resources and energy are derived from renewable sources and utilized efficiently, recycled, and reused to reduce waste, cut non-renewable resource consumption, and mitigate negative environmental impacts. However, it raises moral questions about sustainability, the environment, and society. Many societies face challenges in implementing the circular economy, as the concept is still young. Implementation should ensure the equitable distribution of the advantages and costs of circularity, since some communities, particularly disadvantaged or marginalized ones, may suffer disproportionately from the harmful effects of production and recycling facilities. Prioritizing the health and safety of workers, communities, and the environment is essential, and strict rules must be in place to guard against harm; however, most underdeveloped countries lack legal safeguards of this kind. The ultimate objective of the circular economy is to improve social, environmental, and economic performance, but its implementation also requires attention to the ethics of care and to non-epistemic values. Both are often hindered in underdeveloped countries, where infrastructure and technology, affordability, and the legislative framework are weak. To achieve long-term success in the circular economy, it is crucial to evaluate each implementation step and to consider health, safety, environmental, and social risks. Implementing the circular economy therefore means respecting the ethics of care and non-epistemic values, adopting Kantian ethics, and shaping technology design to ensure equal benefits for all involved. Ethical gaps may otherwise lead underdeveloped countries to generate social pressure against the circular economy.

Keywords: circular economy, ethics, values, sustainability

Procedia PDF Downloads 75