Similar Documents
20 similar documents found (search time: 78 ms)
1.
Evolution of supra-glacial lakes across the Greenland Ice Sheet
We used 268 cloud-free Moderate-resolution Imaging Spectroradiometer (MODIS) images from 2003 and 2005-2007 to study the seasonal evolution of supra-glacial lakes in three different regions of the Greenland Ice Sheet. Lake area estimates were obtained by developing an automated classification method for their identification based on 250 m resolution MODIS surface reflectance observations. Widespread supra-glacial lake formation and drainage is observed across the ice sheet, with a 2-3 week delay in the evolution of total supra-glacial lake area in the northern areas compared to the south-west. The onset of lake growth varies by up to one month inter-annually, and lakes form and drain at progressively higher altitudes during the melt season. A positive correlation was found between the annual peak in total lake area and modelled annual runoff. High runoff and lake extent years are generally characterised by low accumulation and high melt season temperatures, and vice versa. Our results indicate that, in a future warmer climate [Meehl, G. A., Stocker, T. F., Collins W. D., Friedlingstein, P., Gaye, A. T., Gregory, J. M., Kitoh, A., Knutti, R., Murphy, J. M., Noda, A., Raper, S. C. B., Watterson, I. G., Weaver, A. J. & Zhao, Z. C. (2007). Global Climate Projections. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor & H. L. Miller (eds.), Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.], Greenland supra-glacial lakes can be expected to form at higher altitudes and over a longer time period than is presently the case, expanding the area and time period over which connections between the ice sheet surface and base may be established [Das, S., Joughin, M., Behn, M., Howat, I., King, M., Lizarralde, D., & Bhatia, M. (2008). 
Fracture propagation to the base of the Greenland Ice Sheet during supra-glacial lake drainage. Science, 320, 778-781] with potential consequences for ice sheet discharge [Zwally, H.J., Abdalati, W., Herring, T., Larson, K., Saba, J. & Steffen, K. (2002). Surface melt-induced acceleration of Greenland Ice Sheet flow. Science, 297, 218-221.].
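The automated lake classification above is reflectance-based. A minimal sketch of the general idea, assuming a simple red/blue band-ratio rule; the band choice and threshold values here are illustrative, not the paper's actual classifier:

```python
import numpy as np

def classify_lakes(red, blue, ratio_thresh=0.60, blue_min=0.30):
    """Flag pixels as supra-glacial lake where water absorbs strongly in the
    red band relative to blue. Thresholds are illustrative assumptions."""
    red = np.asarray(red, dtype=float)
    blue = np.asarray(blue, dtype=float)
    ratio = np.divide(red, blue, out=np.zeros_like(red), where=blue > 0)
    return (ratio < ratio_thresh) & (blue > blue_min)

def lake_area_km2(mask, pixel_size_m=250.0):
    # Each 250 m MODIS pixel covers 0.0625 km^2
    return mask.sum() * (pixel_size_m / 1000.0) ** 2
```

For a 2x2 scene in which one column is dark water and the other bright ice, two flagged pixels give a lake area of 0.125 km².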

2.
The notion of irreducible forms of systems of linear differential equations with formal power series coefficients as defined by Moser [Moser, J., 1960. The order of a singularity in Fuchs’ theory. Math. Z. 379–398] and its generalisation, the super-irreducible forms introduced in Hilali and Wazner [Hilali, A., Wazner, A., 1987. Formes super-irréductibles des systèmes différentiels linéaires. Numer. Math. 50, 429–449], are important concepts in the context of the symbolic resolution of systems of linear differential equations [Barkatou, M., 1997. An algorithm to compute the exponential part of a formal fundamental matrix solution of a linear differential system. Journal of App. Alg. in Eng. Comm. and Comp. 8 (1), 1–23; Pflügel, E., 1998. Résolution symbolique des systèmes différentiels linéaires. Ph.D. Thesis, LMC-IMAG; Pflügel, E., 2000. Effective formal reduction of linear differential systems. Appl. Alg. Eng. Comm. Comp., 10 (2), 153–187]. In this paper, we reduce the task of computing a super-irreducible form to that of computing one or several Moser-irreducible forms, using a block-reduction algorithm. This algorithm works on the system directly, without converting it to more general types of systems as was needed in our previous paper [Barkatou, M., Pflügel, E., 2007. Computing super-irreducible forms of systems of linear differential equations via Moser-reduction: A new approach. In: Proceedings of ISSAC’07. ACM Press, Waterloo, Canada, pp. 1–8]. We perform a cost analysis of our algorithm in order to give the complexity of the super-reduction in terms of the dimension and the Poincaré rank of the input system. We compare our method with previous algorithms and show that, for systems of large dimension, the direct block-reduction method is more efficient.

3.
In 2007, Jo et al. [Jo, J. B., Li, Y., & Gen, M. (2007). Nonlinear fixed charge transportation problem by spanning tree-based genetic algorithm. Computers & Industrial Engineering. doi:10.1016/j.cie.2007.06.022] published a spanning tree-based genetic algorithm approach for solving the nonlinear fixed charge transportation problem in Computers & Industrial Engineering. In 2008, Kannan et al. [Kannan, G., Kumar, P. S., & Vinay, V. P. (2008). Comments on “Nonlinear fixed charge transportation problem by spanning tree-based genetic algorithm” by Jung-Bok Jo, Yinzhen Li, Mitsuo Gen, Computers & Industrial Engineering (2007). Computers & Industrial Engineering. doi:10.1016/j.cie.2007.12.019] raised comments on the published model concerning the calculation of the total cost and the indication of problem size. In this note, as a response to the comments of Kannan et al., the formula for calculating the total cost of the nonlinear fixed charge transportation problem is illustrated with examples, for which near-optimal solutions are given.
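The disputed total-cost calculation combines a variable cost on each route with a fixed charge incurred only on routes actually used. A minimal sketch of that cost structure, on a hypothetical 2x2 instance (the data below are not from the papers):

```python
def total_cost(flow, unit_cost, fixed_charge):
    """Total cost of a transportation plan: variable cost c_ij * x_ij on
    every route, plus the fixed charge f_ij on routes with positive flow."""
    cost = 0.0
    for route, x in flow.items():
        if x > 0:
            cost += unit_cost[route] * x + fixed_charge[route]
    return cost

# Hypothetical instance: route (0, 1) carries no flow, so it pays no fixed charge.
flow = {(0, 0): 5, (0, 1): 0, (1, 1): 3}
unit_cost = {(0, 0): 2, (0, 1): 4, (1, 1): 1}
fixed_charge = {(0, 0): 10, (0, 1): 7, (1, 1): 5}
print(total_cost(flow, unit_cost, fixed_charge))  # 2*5+10 + 1*3+5 = 28
```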

4.
Computational fluid dynamic (CFD) studies of the flow inside a modelled human extra-thoracic airway (ETA) were conducted to evaluate the performance of several turbulence models in predicting flow inside this complex geometry. Veracity of the computational models is assessed for physiologically accurate flow rates of 10, 15, and 30 l/min by comparison of numerical results with hot-wire [Johnstone A, Uddin M, Pollard A, Heenan A, Finlay WH. The flow inside an idealised form of the human extra-thoracic airway. Exp Fluids 2004;37(5):673-89] and particle image velocimetry (PIV) [Heenan AF, Matida E, Pollard A, Finlay WH. Experimental measurements and computational modelling of the flow field in an idealised extra-thoracic airway. Exp Fluids 2003;35:70-84] mean velocity data for the central sagittal plane of the ETA. Furthermore, flow features predicted by numerical models are compared to those from experimental flow-visualisation studies [Johnstone et al., 2004]. The flow in the ETA is shown to be highly three-dimensional, having strong secondary flows.

5.
Using image hierarchies for visual categorization has been shown to have a number of important benefits. Doing so enables a significant gain in efficiency (e.g., logarithmic with the number of categories [16,12]) or the construction of a more meaningful distance metric for image classification [17]. A critical question, however, still remains controversial: would structuring data in a hierarchical sense also help classification accuracy? In this paper we address this question and show that the hierarchical structure of a database can indeed be successfully used to enhance classification accuracy using a sparse approximation framework. We propose a new formulation for sparse approximation where the goal is to discover the sparsest path within the hierarchical data structure that best represents the query object. Extensive quantitative and qualitative experimental evaluation on a number of branches of the ImageNet database [7] as well as on Caltech-256 [12] demonstrates our theoretical claims and shows that our approach produces better hierarchical categorization results than competing techniques.

6.
Detailed land use/land cover classification at the ecotope level is important for environmental evaluation. In this study, we investigate the possibility of using airborne hyperspectral imagery for the classification of ecotopes. In particular, we assess two tree-based ensemble classification algorithms, Adaboost and Random Forest, in terms of standard classification accuracy, training time and classification stability. Our results show that Adaboost and Random Forest attain almost the same overall accuracy (close to 70%), with less than 1% difference, and both outperform a neural network classifier (63.7%). Random Forest, however, is faster to train and more stable. Both ensemble classifiers are considered effective for hyperspectral data. Furthermore, two feature selection methods are applied: the out-of-bag strategy and a wrapper approach to feature subset selection using best-first search. A majority of the bands chosen by both methods concentrate between 1.4 and 1.8 μm, in the early shortwave infrared region. Our band subset analyses also include the 22 optimal bands between 0.4 and 2.5 μm suggested in Thenkabail et al. [Thenkabail, P.S., Enclona, E.A., Ashton, M.S., & Van Der Meer, B. (2004). Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sensing of Environment, 91, 354-376.], owing to the similarity of the target classes. All three band subsets considered in this study work well with both classifiers: in most cases the overall accuracy dropped by less than 1%. A subset of 53 bands was created by combining all feature subsets; compared with using the entire set, the overall accuracy is unchanged with Adaboost and improves by 0.2% with Random Forest. Using a basket of band selection methods thus works better than relying on any single one. Ecotopes belonging to the tree classes are in general classified better than the grass classes. Small adaptations of the classification scheme are recommended to improve the applicability of the remote sensing method for detailed ecotope mapping.
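The out-of-bag strategy mentioned above falls out of bagging: rows left out of each bootstrap sample supply an internal accuracy estimate. A toy-scale sketch with bagged decision stumps standing in for full trees (synthetic data; not the study's implementation):

```python
import numpy as np

def fit_stump(X, y, feats):
    """Pick the band/sign over `feats` whose median split minimises error."""
    best = (feats[0], 0.0, 1, 1.1)  # feature, threshold, sign, error
    for f in feats:
        thr = np.median(X[:, f])
        for sign in (1, -1):
            pred = (sign * (X[:, f] - thr) > 0).astype(int)
            err = (pred != y).mean()
            if err < best[3]:
                best = (f, thr, sign, err)
    return best[:3]

def predict_stump(stump, X):
    f, thr, sign = stump
    return (sign * (X[:, f] - thr) > 0).astype(int)

def random_forest_oob(X, y, n_trees=60, seed=0):
    """Bagged stumps with an out-of-bag accuracy estimate, mirroring at toy
    scale the out-of-bag idea used for band selection in the study."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    votes = np.zeros((n, 2))
    for _ in range(n_trees):
        idx = rng.integers(0, n, n)               # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)     # rows this tree never saw
        feats = rng.choice(d, max(1, int(np.sqrt(d))), replace=False)
        stump = fit_stump(X[idx], y[idx], feats)
        if oob.size:
            pred = predict_stump(stump, X[oob])
            votes[oob, 0] += pred == 0
            votes[oob, 1] += pred == 1
    return (votes.argmax(axis=1) == y).mean()     # OOB accuracy
```

On synthetic data where a single band separates the classes, the OOB accuracy is high even though each stump sees only a random subset of bands.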

7.
Very recently, Pan et al. [Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO 2007, pp. 126–133] presented a novel discrete differential evolution algorithm for the permutation flowshop scheduling problem with the makespan criterion. Separately, an iterated greedy algorithm was proposed by [Ruiz, R., & Stützle, T. (2007). A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. European Journal of Operational Research, 177(3), 2033–2049] for the same problem and criterion. However, neither algorithm has been applied to the permutation flowshop scheduling problem with the total flowtime criterion. Based on their excellent performance with the makespan criterion, we extend both algorithms in this paper to the total flowtime objective. Furthermore, we propose a novel referenced local search procedure, hybridized with both algorithms, to further improve solution quality. The referenced local search exploits the search space using reference positions taken from a reference solution, in the hope of finding better positions for jobs when performing the insertion operation. Computational results show that both algorithms with the referenced local search are better than or highly competitive with all existing approaches in the literature for both the makespan and total flowtime objectives. For the total flowtime criterion in particular, their performance is superior to the particle swarm optimization algorithms proposed by [Tasgetiren, M. F., Liang, Y.-C., Sevkli, M., & Gencyilmaz, G. (2007). Particle swarm optimization algorithm for makespan and total flowtime minimization in permutation flowshop sequencing problem. European Journal of Operational Research, 177(3), 1930–1947] and [Jarboui, B., Ibrahim, S., Siarry, P., & Rebai, A. (2007). A combinatorial particle swarm optimisation for solving permutation flowshop problems. Computers & Industrial Engineering, doi:10.1016/j.cie.2007.09.006]. Ultimately, for Taillard’s benchmark suite, four best known solutions for the makespan criterion, as well as 40 of the 90 best known solutions for the total flowtime criterion, are further improved by one of the algorithms presented in this paper.
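Both iterated greedy and the insertion-based local search above rest on two primitives: evaluating a permutation's makespan and re-inserting a removed job at its best position. A minimal sketch of those two primitives on a toy instance (not the authors' full algorithm):

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine;
    p[j][m] is the processing time of job j on machine m."""
    n_mach = len(p[0])
    comp = [0] * n_mach
    for j in perm:
        comp[0] += p[j][0]
        for k in range(1, n_mach):
            comp[k] = max(comp[k], comp[k - 1]) + p[j][k]
    return comp[-1]

def best_insertion(perm, job, p):
    """Insert `job` at the position minimising makespan: the core move of
    iterated greedy and of insertion-based local search."""
    best = None
    for pos in range(len(perm) + 1):
        cand = perm[:pos] + [job] + perm[pos:]
        c = makespan(cand, p)
        if best is None or c < best[1]:
            best = (cand, c)
    return best
```

On the 3-job, 2-machine instance `p = [[3, 2], [1, 4], [2, 1]]`, re-inserting job 1 into the partial sequence `[0, 2]` improves the makespan from 10 (sequence `[0, 1, 2]`) to 8 (sequence `[1, 0, 2]`).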

8.
Shaw and colleagues [Shaw, B., Han, J., Kim, E., Gustafson, D., Hawkins, R., Cleary, C., et al. (2007). Effects of prayer and religious expression within computer support groups on women with breast cancer. Psycho-oncology, 16(7), 676–687] examined religious expression in breast cancer (BC) online support groups (OSG). Using Pennebaker’s LIWC text analysis to assess religious expression, they found that the more frequent the expression of words related to religion, the lower the levels of negative emotions and the higher the levels of health self-efficacy and functional well-being. Our study goal was to replicate their findings. Specifically, we tested their central hypothesis that the percentage of religious words written by members of BC OSGs is associated with improvement in psychological outcomes. Five BC OSGs from our previous work [Lieberman, M. A., & Goldstein, B. (2005a). Not all negative emotions are equal: The role of emotional expression in online support groups for women with breast cancer. Psycho-oncology, 15, 160–168; Lieberman, M. A., & Goldstein, B. (2005b). Self-help online: An outcome evaluation of breast cancer bulletin boards. Journal of Health Psychology, 10(6), 855–862] were studied, with 91 participants assessed at baseline and 6 months later. Significant changes in depression and quality of life were found over time. In the current study, linear regressions examined the relationship between religious statements and outcomes. The results did not support the hypothesis of a positive relationship between religious expression and positive outcomes in either OSG sample. Reviews of studies examining the role of religion in health outcomes report equivocal results on the benefits of religious expression.

9.
In this paper, a globally optimal filtering framework is developed for unbiased minimum-variance state estimation for systems with unknown inputs that affect both the system state and the output. The resulting filters are globally optimal among all linear unbiased minimum-variance estimators. Globally optimal state estimators with or without output and/or input transformations are derived. Through the global optimality evaluation of this research, the performance degradation of the filter proposed by Darouach, Zasadzinski, and Boutayeb [Darouach, M., Zasadzinski, M., & Boutayeb, M. (2003). Extension of minimum variance estimation for systems with unknown inputs. Automatica, 39, 867-876] is clearly illustrated, and the global optimality of the filter proposed by Gillijns and De Moor [Gillijns, S., & De Moor, B. (2007b). Unbiased minimum-variance input and state estimation for linear discrete-time systems with direct feedthrough. Automatica, 43, 934-937] is further verified. The relationship with existing results in the literature is addressed. A unified approach to designing a specific globally optimal state estimator, based on the desired form of the distribution matrix from the unknown input to the output, is also presented. A simulation example is given to illustrate the proposed results.

10.
The integration of technology by K-12 teachers was promoted to aid the shift to a more student-centered classroom (e.g., Roblyer, M. D., & Edwards, J. (2000). Integrating educational technology into teaching (2nd ed.). Upper Saddle River, NJ: Merrill). However, growth in the power of and access to technology in schools has not been accompanied by an equal growth in technology integration. Research into reasons for minimal technology integration has traditionally focused on post-teacher-education barriers to technology integration (e.g., Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25–39; Ertmer, P. A., Gopalakrishnan, S., & Ross, E. M. (2001). Technology-using teachers: Comparing perceptions of exemplary use to best practice [Electronic copy]. Journal of Research on Computing in Education, 33(3), 1–2; Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology, Research and Development, 55(3), 223–252). In this paper, I first clarify the definition of technology integration and question the contention that barriers, particularly those related to teacher beliefs, are behind the lack of technology integration. Using the sociological concept of habitus, or set of dispositions, I then explore preservice teachers’ past experiences as a possible explanation for minimal technology integration and discuss implications for future research and teacher education.

11.
The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired ones. Document summarization is the process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization produces a summary delivering the majority of the information content from a set of documents about an explicit or implicit main topic. In our study we focus on sentence-based extractive document summarization. We propose a generic document summarization method based on sentence clustering. The proposed approach continues the sentence-clustering-based extractive summarization methods proposed in Alguliev [Alguliev, R. M., Aliguliyev, R. M., & Bagirov, A. M. (2005). Global optimization in the summarization of text documents. Automatic Control and Computer Sciences, 39, 42–47], Aliguliyev [Aliguliyev, R. M. (2006). A novel partitioning-based clustering method and generic document summarization. In Proceedings of the 2006 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology (WI–IAT 2006 Workshops) (WI–IATW’06), 18–22 December (pp. 626–629), Hong Kong, China], Alguliev and Alyguliev [Alguliev, R. M., & Alyguliev, R. M. (2007). Summarization of text-based documents with a determination of latent topical sections and information-rich sentences. Automatic Control and Computer Sciences, 41, 132–140] and Aliguliyev [Aliguliyev, R. M. (2007). Automatic document summarization by sentence extraction. Journal of Computational Technologies, 12, 5–15]. The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure. The experimental results on the open benchmark datasets DUC01 and DUC02 show that our proposed approach can improve performance compared to state-of-the-art summarization approaches.
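A minimal sketch of sentence-clustering extractive summarization: cluster term-frequency vectors with k-means, then take from each cluster the sentence closest to its centroid. The vectorisation and distance here are deliberately simple stand-ins for the paper's similarity measures:

```python
import numpy as np

def tf_vectors(sentences):
    """Cosine-normalised term-frequency vectors over a shared vocabulary."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(sentences), len(vocab)))
    for r, s in enumerate(sentences):
        for w in s.lower().split():
            X[r, index[w]] += 1
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.where(norms == 0, 1, norms)

def cluster_summary(sentences, k=2, iters=20, seed=0):
    """k-means over sentence vectors; the summary takes, from each cluster,
    the sentence nearest the cluster centroid."""
    X = tf_vectors(sentences)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    picks = []
    for c in range(k):
        members = np.flatnonzero(labels == c)
        if members.size:
            d = ((X[members] - centers[c]) ** 2).sum(-1)
            picks.append(members[np.argmin(d)])
    return [sentences[i] for i in sorted(picks)]
```

With two topically distinct groups of sentences and `k=2`, the summary returns one representative sentence per topic.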

12.
Binary discriminant functions are often used to identify changed areas through time in remote sensing change detection studies. Traditionally, a single change-enhanced image has been used to optimize the binary discriminant function with a few (e.g., 5-10) discrete thresholds using a trial-and-error method. Im et al. [Im, J., Rhee, J., Jensen, J. R., & Hodgson, M. E. (2007). An automated binary change detection model using a calibration approach. Remote Sensing of Environment, 106, 89-105] developed an automated calibration model for optimizing the binary discriminant function by autonomously testing thousands of thresholds. However, the automated model may be time-consuming, especially when multiple change-enhanced images are used as inputs together, since the model is based on an exhaustive search technique. This paper describes the development of a computationally efficient search technique for identifying optimum threshold(s) in a remote sensing spectral search space. The new algorithm is based on systematic searching. Two additional heuristic optimization algorithms (hill climbing and simulated annealing) were examined for comparison. A case study using QuickBird and IKONOS satellite imagery was performed to evaluate the effectiveness of the proposed algorithm. The proposed systematic search technique reduced the processing time required to identify the optimum binary discriminant function without decreasing accuracy. The other two search algorithms also reduced the processing time but failed to find the global maximum for some spectral features.
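The local-maximum failure mode noted above is easy to reproduce on a discretised 1-D threshold space. A minimal sketch contrasting exhaustive scanning with hill climbing; the pixel values, truth map and accuracy score are illustrative, not the paper's calibration model:

```python
import numpy as np

def accuracy_at(thresh, feature, truth):
    """Score a binary change map: pixels above the threshold are 'changed'."""
    return ((feature > thresh) == truth).mean()

def exhaustive_search(feature, truth, thresholds):
    """Scan every candidate threshold; always finds the global optimum."""
    scores = [accuracy_at(t, feature, truth) for t in thresholds]
    return thresholds[int(np.argmax(scores))], max(scores)

def hill_climb(feature, truth, thresholds, start=0):
    """Greedy neighbour search; can stall on a plateau or local maximum."""
    i = start
    while True:
        here = accuracy_at(thresholds[i], feature, truth)
        best_j = i
        for j in (i - 1, i + 1):
            if 0 <= j < len(thresholds):
                s = accuracy_at(thresholds[j], feature, truth)
                if s > here:
                    here, best_j = s, j
        if best_j == i:
            return thresholds[i], here
        i = best_j
```

On a toy feature with a wide flat region of equally bad thresholds near zero, hill climbing stalls immediately while the exhaustive scan reaches perfect accuracy.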

13.
Wang et al. [Wang, K. H., Chan, M. C., & Ke, J. C. (2007). Maximum entropy analysis of the M[x]/M/1 queueing system with multiple vacations and server breakdowns. Computers & Industrial Engineering, 52, 192–202] elaborate on an interesting approach to estimate the equilibrium distribution for the number of customers in the M[x]/M/1 queueing model with multiple vacations and server breakdowns. Their approach consists of maximizing an entropy function subject to constraints, where the constraints are formed by some known exact results. By a comparison between the exact expression for the expected delay time and an approximate expected delay time based on the maximum entropy estimate, they argue that their maximum entropy estimate is sufficiently accurate for practical purposes. In this note, we show that their maximum entropy estimate is easily rejected by simulation. We propose a minor modification of their maximum entropy method that significantly improves the quality of the estimate.
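To see the shape of such maximum-entropy estimates: with only the mean queue length as a constraint, the maximum-entropy distribution on the non-negative integers is geometric. This is the simplest one-constraint case; the papers above impose additional exact results as constraints:

```python
def max_entropy_pmf(mean, n_max=400):
    """Maximum-entropy pmf on {0, 1, 2, ...} subject to a given mean:
    the geometric distribution p_n = (1 - q) * q**n with q = mean / (1 + mean)."""
    q = mean / (1.0 + mean)
    return [(1.0 - q) * q ** n for n in range(n_max)]

pmf = max_entropy_pmf(2.0)
print(sum(pmf))                               # ~1 (truncated tail is negligible)
print(sum(n * p for n, p in enumerate(pmf)))  # ~2, recovering the imposed mean
```

A simulation-based rejection, as in the note above, would compare this analytic pmf against an empirical queue-length histogram from a discrete-event simulation of the vacation/breakdown model.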

14.
This work summarizes some results about static state feedback linearization for time-varying systems. Three different necessary and sufficient conditions are stated in this paper. The first condition is the one by [Sluis, W. M. (1993). A necessary condition for dynamic feedback linearization. Systems & Control Letters, 21, 277–283]. The second and the third are generalizations of known results due respectively to [Aranda-Bricaire, E., Moog, C. H., & Pomet, J. B. (1995). A linear algebraic framework for dynamic feedback linearization. IEEE Transactions on Automatic Control, 40, 127–132] and to [Jakubczyk, B., & Respondek, W. (1980). On linearization of control systems. Bulletin de l'Académie Polonaise des Sciences, Série des Sciences Mathématiques, 28, 517–522]. The proofs of the second and third conditions are established by showing the equivalence of the three conditions. The results are re-stated in the infinite-dimensional geometric approach of [Fliess, M., Lévine, J., Martin, P., & Rouchon, P. (1999). A Lie–Bäcklund approach to equivalence and flatness of nonlinear systems. IEEE Transactions on Automatic Control, 44(5), 922–937].

15.
We present a syntactic scheme for translating future-time LTL bounded model checking problems into propositional satisfiability problems. The scheme is similar in principle to the Separated Normal Form encoding proposed in [Frisch, A., Sheridan, D., & Walsh, T. (2002). A fixpoint based encoding for bounded model checking. In: Aagaard, M.D., & O'Leary, J.W. (eds.), Formal Methods in Computer-Aided Design; 4th International Conference, FMCAD 2002. Lecture Notes in Computer Science 2517, pp. 238–254] and extended to past time in [Cimatti, A., Roveri, M., & Sheridan, D. (2004). Bounded verification of past LTL. In: Hu, A.J., & Martin, A.K. (eds.), Proceedings of the 5th International Conference on Formal Methods in Computer Aided Design (FMCAD 2004). Lecture Notes in Computer Science]: an initial phase involves putting LTL formulae into a normal form based on linear-time fixpoint characterisations of temporal operators. As with Cimatti et al. (2004) and [Latvala, T., Biere, A., Heljanko, K., & Junttila, T. (2004). Simple bounded LTL model checking. In: Formal Methods in Computer-Aided Design; 5th International Conference, FMCAD 2004. Lecture Notes in Computer Science 3312, pp. 186–200], the size of the propositional formulae produced is linear in the model checking bound, but the constant of proportionality appears to be lower. A denotational approach is taken in the presentation, which is significantly more rigorous than that in Frisch et al. (2002) and Cimatti et al. (2004), and which provides an elegant alternative way of viewing the fixpoint-based translations in Latvala et al. (2004) and [Biere, A., Cimatti, A., Clarke, E.M., Strichman, O., & Zhu, Y. (2003). Bounded model checking. Advances in Computers 58].

16.
Vernakalant (RSD1235) is an investigational drug that converts atrial fibrillation rapidly and safely in patients when given intravenously [Roy et al., J. Am. Coll. Cardiol. 44 (2004) 2355–2361; Roy et al., Circulation 117 (2008) 1518–1525] and maintains sinus rhythm when given orally [Savelieva et al., Europace 10 (2008) 647–665]. Here, modeling using AutoDock4 allowed exploration of potential binding modes of vernakalant to the open state of the Kv1.5 channel structure. Point mutations were made in the channel model based on earlier patch-clamp studies [Eldstrom et al., Mol. Pharmacol. 72 (2007) 1522–1534] and the docking simulations re-run to evaluate the ability of the docking software to predict changes in drug–channel interactions. Each AutoDock run predicted a binding conformation with an associated value for free energy of binding (FEB) in kcal/mol and an estimated inhibitory constant (Ki). The most favoured conformation had an FEB of −7.12 kcal/mol and a predicted Ki of 6.08 μM (the IC50 for vernakalant is 13.8 μM [Eldstrom et al., Mol. Pharmacol. 72 (2007) 1522–1534]). This conformation makes contact with all four T480 residues and appears to be clearly positioned to block the channel pore.
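The link between the reported free energy of binding and the predicted Ki is the standard thermodynamic relation Ki = exp(ΔG/RT). A quick sanity check of the quoted numbers (small differences from 6.08 μM arise from the exact constants used):

```python
import math

R_KCAL = 1.987e-3            # gas constant in kcal/(mol*K)

def ki_from_feb(feb_kcal_per_mol, temp_k=298.15):
    """Predicted inhibition constant (mol/L) from a free energy of binding,
    via Ki = exp(dG / RT)."""
    return math.exp(feb_kcal_per_mol / (R_KCAL * temp_k))

ki = ki_from_feb(-7.12)      # the most-favoured vernakalant conformation
print(f"{ki * 1e6:.2f} uM")  # ~6 uM, consistent with the reported 6.08 uM
```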

17.
The design and management of human–automation teams for future air traffic systems require an understanding of principles of cognitive systems engineering, allocation of function and team adaptation. The current article proposes a framework of human–automation team adaptable control that incorporates adaptable automation [Oppermann, R., Simm, H., 1994. Adaptability: user-initiated individualization. In: Oppermann, R. (Ed.), Adaptive User Support: Ergonomic Design of Manually and Automatically Adaptable Software. Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 14–64] with an Extended Control Model of Joint Cognitive System functioning [Hollnagel, E., Nåbo, A., Lau, I., 21–24 July 2003. A systemic model for driver-in-control. In: Paper Presented at the Second International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, Public Policy Center, University of Iowa, Park City, UT] nested within a dynamic view of team adaptation [Burke, C.S., Stagl, K.C., Salas, E., Pierce, L., Kendall, D., 2006. Understanding team adaptation: a conceptual analysis and model. Journal of Applied Psychology 91, 1189–1207]. Modeling the temporal dynamics of the coordination of human–automation teams under conditions of Free Flight requires an appreciation of the episodic, cyclical nature of team processes from transition to action phases, along with the distinction of team processes from emergent states [Marks, M.A., Mathieu, J.E., Zaccaro, S.J., 2001. A temporally based framework and taxonomy of team processes. Academy of Management Review 26, 356–376]. The conceptual framework of human–automation team adaptable control provides a basis for future research and design.

Relevance to industry

The current article provides a conceptual framework to direct future investigations into the optimal design and management of human–automation teams for Free Flight-based air traffic management systems.

18.
Sovereign rating has had increasing importance since the beginning of the financial crisis. However, the opacity of credit rating agencies has been criticised by several authors, highlighting the need for more objective alternative methods. This paper tackles the sovereign credit rating classification problem from an ordinal classification perspective, employing a pairwise class-distance projection to build a classification model based on standard regression techniques. In this work the ϵ-SVR is selected as the regression tool. The quality of the projection is validated through the classification results obtained for four performance metrics when applied to Standard & Poor's, Moody's and Fitch sovereign rating data for the EU27 countries during the period 2007–2010. The validated projection is then used for ranking visualization, which could support building a decision support system.
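The core idea above, projecting ordered rating classes onto the real line and fitting a regressor to the projected targets, can be sketched minimally. Here ordinary least squares stands in for the paper's ϵ-SVR, and equally spaced targets stand in for the pairwise class-distance projection; both substitutions keep the sketch dependency-free:

```python
import numpy as np

def ordinal_targets(n_classes):
    """Stand-in for the pairwise class-distance projection: place the ordered
    rating classes at real-valued targets (equally spaced here; the paper
    derives the spacing from pairwise class distances)."""
    return np.arange(n_classes, dtype=float)

def fit_ordinal_regressor(X, y, n_classes):
    """Least-squares affine regressor onto the projected targets."""
    t = ordinal_targets(n_classes)[y]
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, t, rcond=None)
    return w

def predict_class(w, X, n_classes):
    """Predict the class whose target is nearest the regressor's output."""
    scores = np.c_[X, np.ones(len(X))] @ w
    targets = ordinal_targets(n_classes)
    return np.abs(scores[:, None] - targets[None, :]).argmin(axis=1)
```

On a 1-D feature whose value increases with the ordinal class, the fitted line recovers every class label exactly.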

19.
In Georgiou and Smith (1992), the following question was raised: Consider a linear, shift-invariant system on L2[0, ∞). Let the graph of the system have Fourier transform (M, N)H2 (i.e., the system has transfer function P = N/M), where M and N are elements of CA = {f ∈ H∞ : f is continuous on the compactified right-half plane}. Is it possible to normalize M and N (i.e., to ensure |M|² + |N|² = 1) in CA? The author shows by example that this is not always possible.

20.
The overall goal of CSCL research is to design software tools and collaborative environments that facilitate social knowledge construction via a valuable assortment of methodologies, theoretical and operational definitions, and multiple structures [Hadwin, A. F., Gress, C. L. Z., & Page, J. (2006). Toward standards for reporting research: a review of the literature on computer-supported collaborative learning. Paper presented at the 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands; Lehtinen, E. (2003). Computer-supported collaborative learning: an approach to powerful learning environments. In E. De Corte, L. Verschaffel, N. Entwistle & J. van Merriënboer (Eds.), Unravelling basic components and dimensions of powerful learning environments (pp. 35–53). Amsterdam, Netherlands: Elsevier]. Various CSCL tools attempt to support constructs associated with effective collaboration, such as awareness tools to support positive social interaction [Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human–Computer Studies, 58, 605] and negotiation tools to support group social skills and discussions [Beers, P. J., Boshuizen, H. P. A. E., Kirschner, P. A., & Gijselaers, W. H. (2005). Computer support for knowledge construction in collaborative learning environments. Computers in Human Behavior, 21, 623–643], yet few studies have developed or used pre-existing measures to evaluate these tools in relation to the above constructs. This paper describes a review of the measures used in CSCL to answer three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? Our review has six key findings: there is a plethora of self-report yet a paucity of baseline information about collaboration and collaborative activities; findings in the field are dominated by 'after collaboration' measurement; there is little replication; there is an over-reliance on text-based measures; and the field has an insufficient collection of tools and measures for examining the processes involved in CSCL.

